Mar 20 21:24:03.917087 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Mar 20 21:24:03.917122 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Thu Mar 20 19:37:53 -00 2025 Mar 20 21:24:03.917132 kernel: KASLR enabled Mar 20 21:24:03.917139 kernel: efi: EFI v2.7 by EDK II Mar 20 21:24:03.917145 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40498 Mar 20 21:24:03.917150 kernel: random: crng init done Mar 20 21:24:03.917158 kernel: secureboot: Secure boot disabled Mar 20 21:24:03.917163 kernel: ACPI: Early table checksum verification disabled Mar 20 21:24:03.917170 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Mar 20 21:24:03.917177 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Mar 20 21:24:03.917183 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:24:03.917189 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:24:03.917195 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:24:03.917201 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:24:03.917208 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:24:03.917216 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:24:03.917222 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:24:03.917228 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:24:03.917235 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 20 21:24:03.917241 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Mar 20 21:24:03.917248 kernel: NUMA: Failed to initialise from firmware Mar 20 21:24:03.917254 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Mar 20 21:24:03.917261 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Mar 20 21:24:03.917267 kernel: Zone ranges: Mar 20 21:24:03.917273 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Mar 20 21:24:03.917281 kernel: DMA32 empty Mar 20 21:24:03.917287 kernel: Normal empty Mar 20 21:24:03.917293 kernel: Movable zone start for each node Mar 20 21:24:03.917300 kernel: Early memory node ranges Mar 20 21:24:03.917306 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] Mar 20 21:24:03.917313 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] Mar 20 21:24:03.917333 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] Mar 20 21:24:03.917339 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Mar 20 21:24:03.917346 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Mar 20 21:24:03.917352 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Mar 20 21:24:03.917358 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Mar 20 21:24:03.917365 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Mar 20 21:24:03.917372 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Mar 20 21:24:03.917379 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Mar 20 21:24:03.917385 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Mar 20 21:24:03.917394 kernel: psci: 
probing for conduit method from ACPI. Mar 20 21:24:03.917400 kernel: psci: PSCIv1.1 detected in firmware. Mar 20 21:24:03.917407 kernel: psci: Using standard PSCI v0.2 function IDs Mar 20 21:24:03.917415 kernel: psci: Trusted OS migration not required Mar 20 21:24:03.917421 kernel: psci: SMC Calling Convention v1.1 Mar 20 21:24:03.917428 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Mar 20 21:24:03.917435 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Mar 20 21:24:03.917441 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Mar 20 21:24:03.917448 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Mar 20 21:24:03.917455 kernel: Detected PIPT I-cache on CPU0 Mar 20 21:24:03.917461 kernel: CPU features: detected: GIC system register CPU interface Mar 20 21:24:03.917468 kernel: CPU features: detected: Hardware dirty bit management Mar 20 21:24:03.917475 kernel: CPU features: detected: Spectre-v4 Mar 20 21:24:03.917482 kernel: CPU features: detected: Spectre-BHB Mar 20 21:24:03.917489 kernel: CPU features: kernel page table isolation forced ON by KASLR Mar 20 21:24:03.917496 kernel: CPU features: detected: Kernel page table isolation (KPTI) Mar 20 21:24:03.917502 kernel: CPU features: detected: ARM erratum 1418040 Mar 20 21:24:03.917509 kernel: CPU features: detected: SSBS not fully self-synchronizing Mar 20 21:24:03.917515 kernel: alternatives: applying boot alternatives Mar 20 21:24:03.917523 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0beb08f475de014f6ab4e06127ed84e918521fd470084f537ae9409b262d0ed3 Mar 20 21:24:03.917530 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 20 21:24:03.917537 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 20 21:24:03.917543 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 20 21:24:03.917550 kernel: Fallback order for Node 0: 0 Mar 20 21:24:03.917558 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Mar 20 21:24:03.917564 kernel: Policy zone: DMA Mar 20 21:24:03.917571 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 20 21:24:03.917577 kernel: software IO TLB: area num 4. Mar 20 21:24:03.917584 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Mar 20 21:24:03.917591 kernel: Memory: 2387412K/2572288K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38464K init, 897K bss, 184876K reserved, 0K cma-reserved) Mar 20 21:24:03.917598 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 20 21:24:03.917604 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 20 21:24:03.917611 kernel: rcu: RCU event tracing is enabled. Mar 20 21:24:03.917618 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 20 21:24:03.917625 kernel: Trampoline variant of Tasks RCU enabled. Mar 20 21:24:03.917632 kernel: Tracing variant of Tasks RCU enabled. Mar 20 21:24:03.917640 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 20 21:24:03.917647 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 20 21:24:03.917654 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Mar 20 21:24:03.917660 kernel: GICv3: 256 SPIs implemented Mar 20 21:24:03.917674 kernel: GICv3: 0 Extended SPIs implemented Mar 20 21:24:03.917682 kernel: Root IRQ handler: gic_handle_irq Mar 20 21:24:03.917689 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Mar 20 21:24:03.917695 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Mar 20 21:24:03.917702 kernel: ITS [mem 0x08080000-0x0809ffff] Mar 20 21:24:03.917708 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Mar 20 21:24:03.917715 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Mar 20 21:24:03.917723 kernel: GICv3: using LPI property table @0x00000000400f0000 Mar 20 21:24:03.917730 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Mar 20 21:24:03.917737 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 20 21:24:03.917744 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 20 21:24:03.917750 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Mar 20 21:24:03.917757 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Mar 20 21:24:03.917764 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Mar 20 21:24:03.917770 kernel: arm-pv: using stolen time PV Mar 20 21:24:03.917777 kernel: Console: colour dummy device 80x25 Mar 20 21:24:03.917784 kernel: ACPI: Core revision 20230628 Mar 20 21:24:03.917791 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Mar 20 21:24:03.917800 kernel: pid_max: default: 32768 minimum: 301 Mar 20 21:24:03.917806 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 20 21:24:03.917813 kernel: landlock: Up and running. Mar 20 21:24:03.917820 kernel: SELinux: Initializing. Mar 20 21:24:03.917827 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 20 21:24:03.917834 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 20 21:24:03.917841 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 20 21:24:03.917848 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 20 21:24:03.917855 kernel: rcu: Hierarchical SRCU implementation. Mar 20 21:24:03.917869 kernel: rcu: Max phase no-delay instances is 400. Mar 20 21:24:03.917876 kernel: Platform MSI: ITS@0x8080000 domain created Mar 20 21:24:03.917882 kernel: PCI/MSI: ITS@0x8080000 domain created Mar 20 21:24:03.917889 kernel: Remapping and enabling EFI services. Mar 20 21:24:03.917896 kernel: smp: Bringing up secondary CPUs ... 
Mar 20 21:24:03.917902 kernel: Detected PIPT I-cache on CPU1 Mar 20 21:24:03.917909 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Mar 20 21:24:03.917917 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Mar 20 21:24:03.917923 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 20 21:24:03.917931 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Mar 20 21:24:03.917938 kernel: Detected PIPT I-cache on CPU2 Mar 20 21:24:03.917950 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Mar 20 21:24:03.917958 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Mar 20 21:24:03.917971 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 20 21:24:03.917978 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Mar 20 21:24:03.917986 kernel: Detected PIPT I-cache on CPU3 Mar 20 21:24:03.917993 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Mar 20 21:24:03.918000 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Mar 20 21:24:03.918009 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 20 21:24:03.918016 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Mar 20 21:24:03.918023 kernel: smp: Brought up 1 node, 4 CPUs Mar 20 21:24:03.918030 kernel: SMP: Total of 4 processors activated. Mar 20 21:24:03.918038 kernel: CPU features: detected: 32-bit EL0 Support Mar 20 21:24:03.918045 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Mar 20 21:24:03.918052 kernel: CPU features: detected: Common not Private translations Mar 20 21:24:03.918059 kernel: CPU features: detected: CRC32 instructions Mar 20 21:24:03.918072 kernel: CPU features: detected: Enhanced Virtualization Traps Mar 20 21:24:03.918080 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Mar 20 21:24:03.918087 kernel: CPU features: detected: LSE atomic instructions Mar 20 21:24:03.918094 kernel: CPU features: detected: Privileged Access Never Mar 20 21:24:03.918101 kernel: CPU features: detected: RAS Extension Support Mar 20 21:24:03.918108 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Mar 20 21:24:03.918115 kernel: CPU: All CPU(s) started at EL1 Mar 20 21:24:03.918123 kernel: alternatives: applying system-wide alternatives Mar 20 21:24:03.918130 kernel: devtmpfs: initialized Mar 20 21:24:03.918137 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 20 21:24:03.918146 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 20 21:24:03.918153 kernel: pinctrl core: initialized pinctrl subsystem Mar 20 21:24:03.918160 kernel: SMBIOS 3.0.0 present. 
Mar 20 21:24:03.918167 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Mar 20 21:24:03.918174 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 20 21:24:03.918182 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 20 21:24:03.918189 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 20 21:24:03.918196 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 20 21:24:03.918205 kernel: audit: initializing netlink subsys (disabled) Mar 20 21:24:03.918212 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1 Mar 20 21:24:03.918220 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 20 21:24:03.918227 kernel: cpuidle: using governor menu Mar 20 21:24:03.918234 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Mar 20 21:24:03.918241 kernel: ASID allocator initialised with 32768 entries Mar 20 21:24:03.918248 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 20 21:24:03.918255 kernel: Serial: AMBA PL011 UART driver Mar 20 21:24:03.918263 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Mar 20 21:24:03.918270 kernel: Modules: 0 pages in range for non-PLT usage Mar 20 21:24:03.918278 kernel: Modules: 509248 pages in range for PLT usage Mar 20 21:24:03.918285 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 20 21:24:03.918293 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Mar 20 21:24:03.918300 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Mar 20 21:24:03.918307 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Mar 20 21:24:03.918314 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 20 21:24:03.918321 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Mar 20 21:24:03.918343 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Mar 20 21:24:03.918350 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Mar 20 21:24:03.918358 kernel: ACPI: Added _OSI(Module Device) Mar 20 21:24:03.918365 kernel: ACPI: Added _OSI(Processor Device) Mar 20 21:24:03.918372 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 20 21:24:03.918379 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 20 21:24:03.918387 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 20 21:24:03.918394 kernel: ACPI: Interpreter enabled Mar 20 21:24:03.918401 kernel: ACPI: Using GIC for interrupt routing Mar 20 21:24:03.918408 kernel: ACPI: MCFG table detected, 1 entries Mar 20 21:24:03.918416 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Mar 20 21:24:03.918424 kernel: printk: console [ttyAMA0] enabled Mar 20 21:24:03.918432 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 20 21:24:03.918560 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 20 21:24:03.918639 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Mar 20 21:24:03.918721 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Mar 20 21:24:03.918791 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Mar 20 21:24:03.918858 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Mar 20 21:24:03.918869 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Mar 20 21:24:03.918877 
kernel: PCI host bridge to bus 0000:00 Mar 20 21:24:03.918948 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Mar 20 21:24:03.919011 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Mar 20 21:24:03.919078 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Mar 20 21:24:03.919140 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 20 21:24:03.919222 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Mar 20 21:24:03.919303 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Mar 20 21:24:03.919372 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Mar 20 21:24:03.919440 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Mar 20 21:24:03.919508 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Mar 20 21:24:03.919575 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Mar 20 21:24:03.919643 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Mar 20 21:24:03.919724 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Mar 20 21:24:03.919790 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Mar 20 21:24:03.919850 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Mar 20 21:24:03.919911 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Mar 20 21:24:03.919920 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Mar 20 21:24:03.919928 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Mar 20 21:24:03.919935 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Mar 20 21:24:03.919942 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Mar 20 21:24:03.919951 kernel: iommu: Default domain type: Translated Mar 20 21:24:03.919958 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 20 21:24:03.919965 kernel: efivars: Registered efivars operations Mar 20 21:24:03.919972 kernel: vgaarb: loaded Mar 20 21:24:03.919979 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 20 21:24:03.919987 kernel: VFS: Disk quotas dquot_6.6.0 Mar 20 21:24:03.919994 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 20 21:24:03.920001 kernel: pnp: PnP ACPI init Mar 20 21:24:03.920086 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Mar 20 21:24:03.920098 kernel: pnp: PnP ACPI: found 1 devices Mar 20 21:24:03.920105 kernel: NET: Registered PF_INET protocol family Mar 20 21:24:03.920112 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 20 21:24:03.920120 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 20 21:24:03.920127 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 20 21:24:03.920134 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 20 21:24:03.920142 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 20 21:24:03.920149 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 20 21:24:03.920156 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 20 21:24:03.920165 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 20 21:24:03.920172 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 20 21:24:03.920179 kernel: PCI: CLS 0 bytes, default 64 Mar 20 21:24:03.920186 kernel: kvm [1]: HYP mode not available 
Mar 20 21:24:03.920193 kernel: Initialise system trusted keyrings Mar 20 21:24:03.920201 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 20 21:24:03.920208 kernel: Key type asymmetric registered Mar 20 21:24:03.920215 kernel: Asymmetric key parser 'x509' registered Mar 20 21:24:03.920222 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 20 21:24:03.920230 kernel: io scheduler mq-deadline registered Mar 20 21:24:03.920238 kernel: io scheduler kyber registered Mar 20 21:24:03.920245 kernel: io scheduler bfq registered Mar 20 21:24:03.920252 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Mar 20 21:24:03.920259 kernel: ACPI: button: Power Button [PWRB] Mar 20 21:24:03.920267 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 20 21:24:03.920336 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Mar 20 21:24:03.920345 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 20 21:24:03.920352 kernel: thunder_xcv, ver 1.0 Mar 20 21:24:03.920361 kernel: thunder_bgx, ver 1.0 Mar 20 21:24:03.920368 kernel: nicpf, ver 1.0 Mar 20 21:24:03.920375 kernel: nicvf, ver 1.0 Mar 20 21:24:03.920451 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 20 21:24:03.920517 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-20T21:24:03 UTC (1742505843) Mar 20 21:24:03.920526 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 20 21:24:03.920533 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Mar 20 21:24:03.920541 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 20 21:24:03.920550 kernel: watchdog: Hard watchdog permanently disabled Mar 20 21:24:03.920557 kernel: NET: Registered PF_INET6 protocol family Mar 20 21:24:03.920564 kernel: Segment Routing with IPv6 Mar 20 21:24:03.920571 kernel: In-situ OAM (IOAM) with IPv6 Mar 20 21:24:03.920578 kernel: NET: Registered PF_PACKET protocol family Mar 20 21:24:03.920585 kernel: Key type dns_resolver registered Mar 20 21:24:03.920592 kernel: registered taskstats version 1 Mar 20 21:24:03.920599 kernel: Loading compiled-in X.509 certificates Mar 20 21:24:03.920607 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 3a6f52a6c751e8bbe3389ae978b265effe8f77af' Mar 20 21:24:03.920615 kernel: Key type .fscrypt registered Mar 20 21:24:03.920622 kernel: Key type fscrypt-provisioning registered Mar 20 21:24:03.920629 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 20 21:24:03.920637 kernel: ima: Allocated hash algorithm: sha1 Mar 20 21:24:03.920644 kernel: ima: No architecture policies found Mar 20 21:24:03.920651 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 20 21:24:03.920658 kernel: clk: Disabling unused clocks Mar 20 21:24:03.920665 kernel: Freeing unused kernel memory: 38464K Mar 20 21:24:03.920683 kernel: Run /init as init process Mar 20 21:24:03.920696 kernel: with arguments: Mar 20 21:24:03.920704 kernel: /init Mar 20 21:24:03.920711 kernel: with environment: Mar 20 21:24:03.920718 kernel: HOME=/ Mar 20 21:24:03.920725 kernel: TERM=linux Mar 20 21:24:03.920732 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 20 21:24:03.920740 systemd[1]: Successfully made /usr/ read-only. 
Mar 20 21:24:03.920750 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 20 21:24:03.920760 systemd[1]: Detected virtualization kvm. Mar 20 21:24:03.920767 systemd[1]: Detected architecture arm64. Mar 20 21:24:03.920774 systemd[1]: Running in initrd. Mar 20 21:24:03.920782 systemd[1]: No hostname configured, using default hostname. Mar 20 21:24:03.920789 systemd[1]: Hostname set to <localhost>. Mar 20 21:24:03.920797 systemd[1]: Initializing machine ID from VM UUID. Mar 20 21:24:03.920804 systemd[1]: Queued start job for default target initrd.target. Mar 20 21:24:03.920811 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 20 21:24:03.920820 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 20 21:24:03.920828 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 20 21:24:03.920836 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 20 21:24:03.920843 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 20 21:24:03.920852 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 20 21:24:03.920860 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 20 21:24:03.920869 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 20 21:24:03.920877 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 20 21:24:03.920884 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 20 21:24:03.920892 systemd[1]: Reached target paths.target - Path Units. Mar 20 21:24:03.920899 systemd[1]: Reached target slices.target - Slice Units. Mar 20 21:24:03.920906 systemd[1]: Reached target swap.target - Swaps. Mar 20 21:24:03.920914 systemd[1]: Reached target timers.target - Timer Units. Mar 20 21:24:03.920921 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 20 21:24:03.920929 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 20 21:24:03.920938 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 20 21:24:03.920945 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 20 21:24:03.920953 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 20 21:24:03.920961 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 20 21:24:03.920968 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 20 21:24:03.920975 systemd[1]: Reached target sockets.target - Socket Units. Mar 20 21:24:03.920983 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 20 21:24:03.920990 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 20 21:24:03.920999 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 20 21:24:03.921007 systemd[1]: Starting systemd-fsck-usr.service... 
Mar 20 21:24:03.921014 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 20 21:24:03.921021 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 20 21:24:03.921029 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 20 21:24:03.921036 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 20 21:24:03.921044 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 20 21:24:03.921053 systemd[1]: Finished systemd-fsck-usr.service. Mar 20 21:24:03.921061 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 20 21:24:03.921075 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 21:24:03.921100 systemd-journald[237]: Collecting audit messages is disabled. Mar 20 21:24:03.921120 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 20 21:24:03.921128 kernel: Bridge firewalling registered Mar 20 21:24:03.921135 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 20 21:24:03.921143 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 20 21:24:03.921151 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 20 21:24:03.921159 systemd-journald[237]: Journal started Mar 20 21:24:03.921178 systemd-journald[237]: Runtime Journal (/run/log/journal/01b5ea0602e943e7bbe89e2724550228) is 5.9M, max 47.3M, 41.4M free. Mar 20 21:24:03.897836 systemd-modules-load[238]: Inserted module 'overlay' Mar 20 21:24:03.914216 systemd-modules-load[238]: Inserted module 'br_netfilter' Mar 20 21:24:03.925989 systemd[1]: Started systemd-journald.service - Journal Service. Mar 20 21:24:03.928648 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 20 21:24:03.930615 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 20 21:24:03.933793 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 20 21:24:03.940073 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 20 21:24:03.942476 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 20 21:24:03.947645 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 20 21:24:03.948737 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 20 21:24:03.950666 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 20 21:24:03.956821 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 20 21:24:03.961102 dracut-cmdline[270]: dracut-dracut-053 Mar 20 21:24:03.963546 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0beb08f475de014f6ab4e06127ed84e918521fd470084f537ae9409b262d0ed3 Mar 20 21:24:03.996547 systemd-resolved[283]: Positive Trust Anchors: Mar 20 21:24:03.996565 systemd-resolved[283]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 20 21:24:03.996595 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 20 21:24:04.001320 systemd-resolved[283]: Defaulting to hostname 'linux'. Mar 20 21:24:04.002290 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 20 21:24:04.006355 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 20 21:24:04.038696 kernel: SCSI subsystem initialized Mar 20 21:24:04.042687 kernel: Loading iSCSI transport class v2.0-870. Mar 20 21:24:04.049699 kernel: iscsi: registered transport (tcp) Mar 20 21:24:04.062690 kernel: iscsi: registered transport (qla4xxx) Mar 20 21:24:04.062706 kernel: QLogic iSCSI HBA Driver Mar 20 21:24:04.106738 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 20 21:24:04.108623 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 20 21:24:04.142533 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 20 21:24:04.142612 kernel: device-mapper: uevent: version 1.0.3 Mar 20 21:24:04.142657 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 20 21:24:04.189715 kernel: raid6: neonx8 gen() 15795 MB/s Mar 20 21:24:04.206698 kernel: raid6: neonx4 gen() 15804 MB/s Mar 20 21:24:04.223693 kernel: raid6: neonx2 gen() 13186 MB/s Mar 20 21:24:04.240695 kernel: raid6: neonx1 gen() 10513 MB/s Mar 20 21:24:04.257692 kernel: raid6: int64x8 gen() 6786 MB/s Mar 20 21:24:04.274693 kernel: raid6: int64x4 gen() 7344 MB/s Mar 20 21:24:04.291693 kernel: raid6: int64x2 gen() 6108 MB/s Mar 20 21:24:04.308813 kernel: raid6: int64x1 gen() 5049 MB/s Mar 20 21:24:04.308827 kernel: raid6: using algorithm neonx4 gen() 15804 MB/s Mar 20 21:24:04.326902 kernel: raid6: .... xor() 12407 MB/s, rmw enabled Mar 20 21:24:04.326916 kernel: raid6: using neon recovery algorithm Mar 20 21:24:04.331697 kernel: xor: measuring software checksum speed Mar 20 21:24:04.333094 kernel: 8regs : 18461 MB/sec Mar 20 21:24:04.333108 kernel: 32regs : 21636 MB/sec Mar 20 21:24:04.333765 kernel: arm64_neon : 27775 MB/sec Mar 20 21:24:04.333785 kernel: xor: using function: arm64_neon (27775 MB/sec) Mar 20 21:24:04.385702 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 20 21:24:04.396118 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 20 21:24:04.398684 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 20 21:24:04.430839 systemd-udevd[459]: Using default interface naming scheme 'v255'. Mar 20 21:24:04.434603 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 20 21:24:04.437690 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Mar 20 21:24:04.461650 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation Mar 20 21:24:04.486894 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 20 21:24:04.489188 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 20 21:24:04.541625 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 20 21:24:04.544632 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 20 21:24:04.563718 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 20 21:24:04.565177 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 20 21:24:04.566904 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 20 21:24:04.569155 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 20 21:24:04.572483 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 20 21:24:04.594507 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Mar 20 21:24:04.617334 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 20 21:24:04.617444 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 20 21:24:04.617456 kernel: GPT:9289727 != 19775487 Mar 20 21:24:04.617465 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 20 21:24:04.617474 kernel: GPT:9289727 != 19775487 Mar 20 21:24:04.617483 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 20 21:24:04.617492 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 20 21:24:04.599809 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 20 21:24:04.617600 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 20 21:24:04.617725 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 20 21:24:04.620637 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 20 21:24:04.621734 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 20 21:24:04.621869 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 21:24:04.625121 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 20 21:24:04.626892 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 20 21:24:04.643558 kernel: BTRFS: device fsid 892d57a1-84f1-442c-90df-b8383db1b8c3 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (516) Mar 20 21:24:04.646699 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by (udev-worker) (524) Mar 20 21:24:04.648961 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 20 21:24:04.650318 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 21:24:04.658926 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 20 21:24:04.673765 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 20 21:24:04.674875 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 20 21:24:04.683314 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 20 21:24:04.685315 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Mar 20 21:24:04.687212 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 20 21:24:04.709029 disk-uuid[553]: Primary Header is updated. Mar 20 21:24:04.709029 disk-uuid[553]: Secondary Entries is updated. Mar 20 21:24:04.709029 disk-uuid[553]: Secondary Header is updated. Mar 20 21:24:04.713687 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 20 21:24:04.723484 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 20 21:24:05.722687 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 20 21:24:05.723582 disk-uuid[554]: The operation has completed successfully. Mar 20 21:24:05.749328 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 20 21:24:05.749423 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 20 21:24:05.773737 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 20 21:24:05.789441 sh[573]: Success Mar 20 21:24:05.807710 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 20 21:24:05.834101 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 20 21:24:05.836808 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 20 21:24:05.849770 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 20 21:24:05.858725 kernel: BTRFS info (device dm-0): first mount of filesystem 892d57a1-84f1-442c-90df-b8383db1b8c3 Mar 20 21:24:05.858769 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 20 21:24:05.858780 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 20 21:24:05.859902 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 20 21:24:05.861679 kernel: BTRFS info (device dm-0): using free space tree Mar 20 21:24:05.864657 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 20 21:24:05.865992 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 20 21:24:05.866717 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 20 21:24:05.869490 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 20 21:24:05.897533 kernel: BTRFS info (device vda6): first mount of filesystem d2d05864-61d3-424d-8bc5-6b85db5f6d34 Mar 20 21:24:05.897577 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 20 21:24:05.897587 kernel: BTRFS info (device vda6): using free space tree Mar 20 21:24:05.900697 kernel: BTRFS info (device vda6): auto enabling async discard Mar 20 21:24:05.905708 kernel: BTRFS info (device vda6): last unmount of filesystem d2d05864-61d3-424d-8bc5-6b85db5f6d34 Mar 20 21:24:05.907901 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 20 21:24:05.910813 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 20 21:24:05.981390 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 20 21:24:05.984575 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 20 21:24:06.018585 ignition[661]: Ignition 2.20.0 Mar 20 21:24:06.018594 ignition[661]: Stage: fetch-offline Mar 20 21:24:06.018625 ignition[661]: no configs at "/usr/lib/ignition/base.d" Mar 20 21:24:06.018636 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:24:06.018816 ignition[661]: parsed url from cmdline: "" Mar 20 21:24:06.018819 ignition[661]: no config URL provided Mar 20 21:24:06.018823 ignition[661]: reading system config file "/usr/lib/ignition/user.ign" Mar 20 21:24:06.018830 ignition[661]: no config at "/usr/lib/ignition/user.ign" Mar 20 21:24:06.018854 ignition[661]: op(1): [started] loading QEMU firmware config module Mar 20 21:24:06.018859 ignition[661]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 20 21:24:06.028539 ignition[661]: op(1): [finished] loading QEMU firmware config module Mar 20 21:24:06.030028 systemd-networkd[762]: lo: Link UP Mar 20 21:24:06.030046 systemd-networkd[762]: lo: Gained carrier Mar 20 21:24:06.030866 systemd-networkd[762]: Enumeration completed Mar 20 21:24:06.030981 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 20 21:24:06.031254 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 21:24:06.031257 systemd-networkd[762]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 20 21:24:06.031949 systemd-networkd[762]: eth0: Link UP Mar 20 21:24:06.031952 systemd-networkd[762]: eth0: Gained carrier Mar 20 21:24:06.031958 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 21:24:06.033144 systemd[1]: Reached target network.target - Network. Mar 20 21:24:06.046707 systemd-networkd[762]: eth0: DHCPv4 address 10.0.0.95/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 20 21:24:06.076479 ignition[661]: parsing config with SHA512: 75c4bb11f05124f949af86c751d2e5c1eceb146916f79105a3b207c2614a277e563e56527c4d81c4609bbf8e42fc83a61d93a2a2a3aa15bb4962f7be5aab71fa Mar 20 21:24:06.082790 unknown[661]: fetched base config from "system" Mar 20 21:24:06.082798 unknown[661]: fetched user config from "qemu" Mar 20 21:24:06.083323 ignition[661]: fetch-offline: fetch-offline passed Mar 20 21:24:06.085069 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 20 21:24:06.083398 ignition[661]: Ignition finished successfully Mar 20 21:24:06.086438 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 20 21:24:06.087156 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 20 21:24:06.109431 ignition[770]: Ignition 2.20.0 Mar 20 21:24:06.109442 ignition[770]: Stage: kargs Mar 20 21:24:06.109585 ignition[770]: no configs at "/usr/lib/ignition/base.d" Mar 20 21:24:06.109595 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:24:06.112760 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 20 21:24:06.110470 ignition[770]: kargs: kargs passed Mar 20 21:24:06.115788 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Mar 20 21:24:06.110509 ignition[770]: Ignition finished successfully Mar 20 21:24:06.138836 ignition[779]: Ignition 2.20.0 Mar 20 21:24:06.138845 ignition[779]: Stage: disks Mar 20 21:24:06.138977 ignition[779]: no configs at "/usr/lib/ignition/base.d" Mar 20 21:24:06.138986 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:24:06.139844 ignition[779]: disks: disks passed Mar 20 21:24:06.141694 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 20 21:24:06.139887 ignition[779]: Ignition finished successfully Mar 20 21:24:06.143104 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 20 21:24:06.144467 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 20 21:24:06.146327 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 20 21:24:06.147823 systemd[1]: Reached target sysinit.target - System Initialization. Mar 20 21:24:06.149585 systemd[1]: Reached target basic.target - Basic System. Mar 20 21:24:06.152185 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 20 21:24:06.176006 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 20 21:24:06.179683 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 20 21:24:06.181752 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 20 21:24:06.235696 kernel: EXT4-fs (vda9): mounted filesystem 78c526d9-91af-4481-a769-6d3064caa829 r/w with ordered data mode. Quota mode: none. Mar 20 21:24:06.236063 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 20 21:24:06.237244 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 20 21:24:06.239489 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 20 21:24:06.241040 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 20 21:24:06.242041 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 20 21:24:06.242079 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 20 21:24:06.242100 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 20 21:24:06.252005 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 20 21:24:06.254285 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 20 21:24:06.260009 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (798) Mar 20 21:24:06.260033 kernel: BTRFS info (device vda6): first mount of filesystem d2d05864-61d3-424d-8bc5-6b85db5f6d34 Mar 20 21:24:06.260043 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 20 21:24:06.260052 kernel: BTRFS info (device vda6): using free space tree Mar 20 21:24:06.264690 kernel: BTRFS info (device vda6): auto enabling async discard Mar 20 21:24:06.264931 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 20 21:24:06.299828 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory Mar 20 21:24:06.303813 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory Mar 20 21:24:06.307421 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory Mar 20 21:24:06.311638 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory Mar 20 21:24:06.380646 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 20 21:24:06.382751 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 20 21:24:06.384266 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 20 21:24:06.401718 kernel: BTRFS info (device vda6): last unmount of filesystem d2d05864-61d3-424d-8bc5-6b85db5f6d34 Mar 20 21:24:06.412908 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 20 21:24:06.422486 ignition[914]: INFO : Ignition 2.20.0 Mar 20 21:24:06.422486 ignition[914]: INFO : Stage: mount Mar 20 21:24:06.424009 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 20 21:24:06.424009 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:24:06.424009 ignition[914]: INFO : mount: mount passed Mar 20 21:24:06.424009 ignition[914]: INFO : Ignition finished successfully Mar 20 21:24:06.427526 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 20 21:24:06.430268 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 20 21:24:07.006827 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 20 21:24:07.008353 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 20 21:24:07.027573 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/vda6 scanned by mount (928) Mar 20 21:24:07.027615 kernel: BTRFS info (device vda6): first mount of filesystem d2d05864-61d3-424d-8bc5-6b85db5f6d34 Mar 20 21:24:07.027626 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 20 21:24:07.029237 kernel: BTRFS info (device vda6): using free space tree Mar 20 21:24:07.031706 kernel: BTRFS info (device vda6): auto enabling async discard Mar 20 21:24:07.032371 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 20 21:24:07.052687 ignition[945]: INFO : Ignition 2.20.0 Mar 20 21:24:07.052687 ignition[945]: INFO : Stage: files Mar 20 21:24:07.054381 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 20 21:24:07.054381 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:24:07.054381 ignition[945]: DEBUG : files: compiled without relabeling support, skipping Mar 20 21:24:07.057887 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 20 21:24:07.057887 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 20 21:24:07.057887 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 20 21:24:07.057887 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 20 21:24:07.057887 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 20 21:24:07.057887 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 20 21:24:07.057887 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Mar 20 21:24:07.056662 unknown[945]: wrote ssh authorized keys file for user: core Mar 20 21:24:07.159914 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 20 21:24:07.301199 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 20 21:24:07.301199 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 20 21:24:07.304967 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Mar 20 21:24:07.629470 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 20 21:24:07.693350 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 20 21:24:07.695281 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 20 21:24:07.695281 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 20 21:24:07.695281 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 20 21:24:07.695281 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 20 21:24:07.695281 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 20 21:24:07.695281 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 20 21:24:07.695281 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 20 21:24:07.695281 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 20 21:24:07.695281 ignition[945]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 20 21:24:07.695281 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 20 21:24:07.695281 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 20 21:24:07.695281 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 20 21:24:07.695281 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 20 21:24:07.695281 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Mar 20 21:24:07.944131 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 20 21:24:08.080794 systemd-networkd[762]: eth0: Gained IPv6LL Mar 20 21:24:08.216064 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 20 21:24:08.216064 ignition[945]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Mar 20 21:24:08.220029 ignition[945]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 20 21:24:08.220029 ignition[945]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 20 21:24:08.220029 ignition[945]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 20 21:24:08.220029 ignition[945]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Mar 20 21:24:08.220029 ignition[945]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 20 21:24:08.220029 ignition[945]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 20 21:24:08.220029 ignition[945]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Mar 20 21:24:08.220029 ignition[945]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Mar 20 21:24:08.233793 ignition[945]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 20 21:24:08.235897 ignition[945]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 20 21:24:08.237575 ignition[945]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Mar 20 21:24:08.237575 ignition[945]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Mar 20 21:24:08.237575 ignition[945]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Mar 20 21:24:08.237575 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 20 21:24:08.237575 
ignition[945]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 20 21:24:08.237575 ignition[945]: INFO : files: files passed Mar 20 21:24:08.237575 ignition[945]: INFO : Ignition finished successfully Mar 20 21:24:08.238947 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 20 21:24:08.243830 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 20 21:24:08.261158 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 20 21:24:08.263313 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 20 21:24:08.263423 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 20 21:24:08.267988 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory Mar 20 21:24:08.270645 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 20 21:24:08.270645 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 20 21:24:08.274400 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 20 21:24:08.274255 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 20 21:24:08.276029 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 20 21:24:08.278898 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 20 21:24:08.312032 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 20 21:24:08.312138 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 20 21:24:08.314378 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 20 21:24:08.316388 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 20 21:24:08.318383 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 20 21:24:08.319098 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 20 21:24:08.348706 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 20 21:24:08.351025 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 20 21:24:08.371123 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 20 21:24:08.372329 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 20 21:24:08.374413 systemd[1]: Stopped target timers.target - Timer Units. Mar 20 21:24:08.376236 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 20 21:24:08.376349 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 20 21:24:08.378977 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 20 21:24:08.381112 systemd[1]: Stopped target basic.target - Basic System. Mar 20 21:24:08.382809 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 20 21:24:08.384512 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 20 21:24:08.386464 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 20 21:24:08.388365 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Mar 20 21:24:08.390174 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 20 21:24:08.392099 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 20 21:24:08.393982 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 20 21:24:08.395703 systemd[1]: Stopped target swap.target - Swaps. Mar 20 21:24:08.397206 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 20 21:24:08.397314 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 20 21:24:08.399566 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 20 21:24:08.400694 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 20 21:24:08.402592 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 20 21:24:08.406721 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 20 21:24:08.407909 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 20 21:24:08.408029 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 20 21:24:08.410830 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 20 21:24:08.410952 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 20 21:24:08.413011 systemd[1]: Stopped target paths.target - Path Units. Mar 20 21:24:08.414529 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 20 21:24:08.418764 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 20 21:24:08.419901 systemd[1]: Stopped target slices.target - Slice Units. Mar 20 21:24:08.421939 systemd[1]: Stopped target sockets.target - Socket Units. Mar 20 21:24:08.423494 systemd[1]: iscsid.socket: Deactivated successfully. Mar 20 21:24:08.423576 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 20 21:24:08.425095 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 20 21:24:08.425172 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 20 21:24:08.426687 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 20 21:24:08.426794 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 20 21:24:08.428517 systemd[1]: ignition-files.service: Deactivated successfully. Mar 20 21:24:08.428616 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 20 21:24:08.430865 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 20 21:24:08.432372 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 20 21:24:08.433535 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 20 21:24:08.433652 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 20 21:24:08.435886 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 20 21:24:08.435986 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 20 21:24:08.443884 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 20 21:24:08.443961 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 20 21:24:08.452338 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Mar 20 21:24:08.453772 ignition[1002]: INFO : Ignition 2.20.0 Mar 20 21:24:08.453772 ignition[1002]: INFO : Stage: umount Mar 20 21:24:08.456169 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 20 21:24:08.456169 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 20 21:24:08.456169 ignition[1002]: INFO : umount: umount passed Mar 20 21:24:08.456169 ignition[1002]: INFO : Ignition finished successfully Mar 20 21:24:08.456781 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 20 21:24:08.456877 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 20 21:24:08.458402 systemd[1]: Stopped target network.target - Network. Mar 20 21:24:08.460248 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 20 21:24:08.460316 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 20 21:24:08.461956 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 20 21:24:08.462016 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 20 21:24:08.463781 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 20 21:24:08.463828 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 20 21:24:08.465770 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 20 21:24:08.465812 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 20 21:24:08.468663 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 20 21:24:08.470426 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 20 21:24:08.474767 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 20 21:24:08.474872 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 20 21:24:08.478928 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 20 21:24:08.479162 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 20 21:24:08.479247 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 20 21:24:08.482140 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 20 21:24:08.482837 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 20 21:24:08.482881 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 20 21:24:08.485512 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 20 21:24:08.486659 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 20 21:24:08.486724 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 20 21:24:08.488973 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 20 21:24:08.489027 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 20 21:24:08.491831 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 20 21:24:08.491874 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 20 21:24:08.494027 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 20 21:24:08.494070 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 20 21:24:08.497211 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 20 21:24:08.500395 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
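Unit names such as run-credentials-systemd\x2dsysctl.service.mount above, or dev-disk-by\x2dlabel-OEM.device further down, are systemd path escapes: "/" separators become "-", and characters that would be ambiguous in a unit name, including a literal "-", are encoded as \xNN. A simplified sketch of that escaping (systemd-escape itself handles more corner cases such as leading dots and the root path):

    # Simplified sketch of systemd's path escaping, enough to explain the
    # \x2d sequences in the unit names above; systemd-escape has more rules
    # (leading dots, empty paths, the root path, and so on).
    def escape_path(path: str) -> str:
        allowed = set("abcdefghijklmnopqrstuvwxyz"
                      "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                      "0123456789:_.")
        parts = path.strip("/").split("/")
        return "-".join(
            "".join(c if c in allowed else f"\\x{ord(c):02x}" for c in part)
            for part in parts
        )

    print(escape_path("/run/credentials/systemd-sysctl.service") + ".mount")
    # run-credentials-systemd\x2dsysctl.service.mount
    print(escape_path("/dev/disk/by-label/OEM") + ".device")
    # dev-disk-by\x2dlabel-OEM.device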
Mar 20 21:24:08.500449 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 20 21:24:08.516580 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 20 21:24:08.516723 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 20 21:24:08.518903 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 20 21:24:08.519027 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 20 21:24:08.521265 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 20 21:24:08.521328 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 20 21:24:08.522948 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 20 21:24:08.522983 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 20 21:24:08.525415 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 20 21:24:08.525461 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 20 21:24:08.528383 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 20 21:24:08.528429 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 20 21:24:08.531204 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 20 21:24:08.531249 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 20 21:24:08.534043 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 20 21:24:08.535323 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 20 21:24:08.535379 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 20 21:24:08.538202 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 20 21:24:08.538246 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 20 21:24:08.540383 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 20 21:24:08.540426 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 20 21:24:08.542660 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 20 21:24:08.542715 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 21:24:08.546595 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 20 21:24:08.546646 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 20 21:24:08.546923 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 20 21:24:08.547018 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 20 21:24:08.548350 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 20 21:24:08.548434 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 20 21:24:08.550290 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 20 21:24:08.550394 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 20 21:24:08.552918 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 20 21:24:08.555077 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 20 21:24:08.573076 systemd[1]: Switching root. Mar 20 21:24:08.605871 systemd-journald[237]: Journal stopped Mar 20 21:24:09.377473 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
Mar 20 21:24:09.377526 kernel: SELinux: policy capability network_peer_controls=1 Mar 20 21:24:09.377537 kernel: SELinux: policy capability open_perms=1 Mar 20 21:24:09.377550 kernel: SELinux: policy capability extended_socket_class=1 Mar 20 21:24:09.377562 kernel: SELinux: policy capability always_check_network=0 Mar 20 21:24:09.377571 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 20 21:24:09.377580 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 20 21:24:09.377589 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 20 21:24:09.377599 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 20 21:24:09.377608 kernel: audit: type=1403 audit(1742505848.763:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 20 21:24:09.377620 systemd[1]: Successfully loaded SELinux policy in 30.447ms. Mar 20 21:24:09.377641 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.583ms. Mar 20 21:24:09.377652 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 20 21:24:09.377664 systemd[1]: Detected virtualization kvm. Mar 20 21:24:09.377700 systemd[1]: Detected architecture arm64. Mar 20 21:24:09.377713 systemd[1]: Detected first boot. Mar 20 21:24:09.377723 systemd[1]: Initializing machine ID from VM UUID. Mar 20 21:24:09.377733 zram_generator::config[1049]: No configuration found. Mar 20 21:24:09.377744 kernel: NET: Registered PF_VSOCK protocol family Mar 20 21:24:09.377753 systemd[1]: Populated /etc with preset unit settings. Mar 20 21:24:09.377764 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 20 21:24:09.377776 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 20 21:24:09.377786 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 20 21:24:09.377796 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 20 21:24:09.377806 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 20 21:24:09.377817 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 20 21:24:09.377828 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 20 21:24:09.377841 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 20 21:24:09.377851 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 20 21:24:09.377867 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 20 21:24:09.377879 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 20 21:24:09.377890 systemd[1]: Created slice user.slice - User and Session Slice. Mar 20 21:24:09.377901 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 20 21:24:09.377911 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 20 21:24:09.377921 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 20 21:24:09.377931 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
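"Initializing machine ID from VM UUID" means that on this first boot systemd derives /etc/machine-id from the UUID the hypervisor exposes, typically the DMI product UUID, rather than generating a random one. A rough sketch of the idea only; systemd's actual machine-id-setup logic covers containers, firmware-provided IDs, and extra validation:

    import pathlib
    import uuid

    # Rough sketch only: read the product UUID the VM firmware exposes via
    # DMI and format it the way /etc/machine-id expects (32 lowercase hex
    # digits, no dashes). systemd's machine-id-setup has more sources and
    # sanity checks than this.
    raw = pathlib.Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    print(uuid.UUID(raw).hex)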
Mar 20 21:24:09.377941 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 20 21:24:09.377951 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 20 21:24:09.377961 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 20 21:24:09.377979 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 20 21:24:09.377992 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 20 21:24:09.378003 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 20 21:24:09.378013 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 20 21:24:09.378023 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 20 21:24:09.378033 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 20 21:24:09.378043 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 20 21:24:09.378055 systemd[1]: Reached target slices.target - Slice Units. Mar 20 21:24:09.378066 systemd[1]: Reached target swap.target - Swaps. Mar 20 21:24:09.378076 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 20 21:24:09.378086 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 20 21:24:09.378095 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 20 21:24:09.378105 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 20 21:24:09.378116 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 20 21:24:09.378127 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 20 21:24:09.378137 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 20 21:24:09.378146 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 20 21:24:09.378158 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 20 21:24:09.378168 systemd[1]: Mounting media.mount - External Media Directory... Mar 20 21:24:09.378178 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 20 21:24:09.378188 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 20 21:24:09.378198 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 20 21:24:09.378209 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 20 21:24:09.378219 systemd[1]: Reached target machines.target - Containers. Mar 20 21:24:09.378229 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 20 21:24:09.378240 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 20 21:24:09.378250 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 20 21:24:09.378260 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 20 21:24:09.378270 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 20 21:24:09.378280 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 20 21:24:09.378290 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Mar 20 21:24:09.378300 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 20 21:24:09.378309 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 20 21:24:09.378320 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 20 21:24:09.378333 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 20 21:24:09.378343 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 20 21:24:09.378352 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 20 21:24:09.378362 systemd[1]: Stopped systemd-fsck-usr.service. Mar 20 21:24:09.378371 kernel: fuse: init (API version 7.39) Mar 20 21:24:09.378382 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 20 21:24:09.378391 kernel: loop: module loaded Mar 20 21:24:09.378401 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 20 21:24:09.378412 kernel: ACPI: bus type drm_connector registered Mar 20 21:24:09.378421 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 20 21:24:09.378444 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 20 21:24:09.378454 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 20 21:24:09.378463 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 20 21:24:09.378493 systemd-journald[1117]: Collecting audit messages is disabled. Mar 20 21:24:09.378520 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 20 21:24:09.378531 systemd[1]: verity-setup.service: Deactivated successfully. Mar 20 21:24:09.378542 systemd-journald[1117]: Journal started Mar 20 21:24:09.378562 systemd-journald[1117]: Runtime Journal (/run/log/journal/01b5ea0602e943e7bbe89e2724550228) is 5.9M, max 47.3M, 41.4M free. Mar 20 21:24:09.171767 systemd[1]: Queued start job for default target multi-user.target. Mar 20 21:24:09.181555 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 20 21:24:09.181947 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 20 21:24:09.380192 systemd[1]: Stopped verity-setup.service. Mar 20 21:24:09.387105 systemd[1]: Started systemd-journald.service - Journal Service. Mar 20 21:24:09.387130 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 20 21:24:09.388423 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 20 21:24:09.389728 systemd[1]: Mounted media.mount - External Media Directory. Mar 20 21:24:09.390940 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 20 21:24:09.392232 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 20 21:24:09.393515 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 20 21:24:09.394826 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 20 21:24:09.396301 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 20 21:24:09.397858 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 20 21:24:09.398032 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Mar 20 21:24:09.399573 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 20 21:24:09.399764 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 20 21:24:09.401198 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 20 21:24:09.401351 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 20 21:24:09.402663 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 20 21:24:09.402886 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 20 21:24:09.404249 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 20 21:24:09.404409 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 20 21:24:09.405775 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 20 21:24:09.405934 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 20 21:24:09.407382 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 20 21:24:09.408982 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 20 21:24:09.410541 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 20 21:24:09.411993 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 20 21:24:09.424785 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 20 21:24:09.427314 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 20 21:24:09.429437 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 20 21:24:09.430641 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 20 21:24:09.430686 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 20 21:24:09.432587 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 20 21:24:09.439815 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 20 21:24:09.441855 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 20 21:24:09.443156 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 20 21:24:09.444654 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 20 21:24:09.446583 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 20 21:24:09.447761 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 20 21:24:09.448590 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 20 21:24:09.452769 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 20 21:24:09.454405 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 20 21:24:09.456638 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 20 21:24:09.460067 systemd-journald[1117]: Time spent on flushing to /var/log/journal/01b5ea0602e943e7bbe89e2724550228 is 23.170ms for 873 entries. Mar 20 21:24:09.460067 systemd-journald[1117]: System Journal (/var/log/journal/01b5ea0602e943e7bbe89e2724550228) is 8M, max 195.6M, 187.6M free. 
Mar 20 21:24:09.495884 systemd-journald[1117]: Received client request to flush runtime journal. Mar 20 21:24:09.498183 kernel: loop0: detected capacity change from 0 to 103832 Mar 20 21:24:09.462368 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 20 21:24:09.467701 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 20 21:24:09.469100 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 20 21:24:09.475536 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 20 21:24:09.477359 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 20 21:24:09.478907 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 20 21:24:09.480375 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 20 21:24:09.485173 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 20 21:24:09.490870 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 20 21:24:09.494554 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 20 21:24:09.502715 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 20 21:24:09.503176 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 20 21:24:09.509820 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Mar 20 21:24:09.509838 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Mar 20 21:24:09.515301 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 20 21:24:09.516740 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 20 21:24:09.521333 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 20 21:24:09.524343 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 20 21:24:09.528755 kernel: loop1: detected capacity change from 0 to 126448 Mar 20 21:24:09.552850 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 20 21:24:09.559482 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 20 21:24:09.562691 kernel: loop2: detected capacity change from 0 to 194096 Mar 20 21:24:09.582845 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Mar 20 21:24:09.582861 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Mar 20 21:24:09.585690 kernel: loop3: detected capacity change from 0 to 103832 Mar 20 21:24:09.588604 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 20 21:24:09.593687 kernel: loop4: detected capacity change from 0 to 126448 Mar 20 21:24:09.599687 kernel: loop5: detected capacity change from 0 to 194096 Mar 20 21:24:09.603778 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 20 21:24:09.604208 (sd-merge)[1195]: Merged extensions into '/usr'. Mar 20 21:24:09.607640 systemd[1]: Reload requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)... Mar 20 21:24:09.607655 systemd[1]: Reloading... Mar 20 21:24:09.680719 zram_generator::config[1225]: No configuration found. 
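The loop devices detected above back the system extension images: (sd-merge) reports that the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extensions are being overlaid onto /usr, and systemd is then reloaded so the merged unit files become visible. A toy version of the discovery step, assuming only the /etc/extensions directory that Ignition populated earlier; real systemd-sysext also scans the other extension directories (such as /var/lib/extensions) and validates each image's extension-release metadata:

    from pathlib import Path

    # Toy version of the sysext discovery step: list the *.raw images under
    # /etc/extensions, where Ignition linked kubernetes.raw earlier in this
    # log. systemd-sysext additionally checks release metadata, versions and
    # the other extension directories before mounting the overlay on /usr.
    images = sorted(Path("/etc/extensions").glob("*.raw"))
    names = [p.stem for p in images]
    print("Using extensions " + ", ".join(repr(n) for n in names) + ".")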
Mar 20 21:24:09.733444 ldconfig[1161]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 20 21:24:09.765490 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 21:24:09.815198 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 20 21:24:09.815539 systemd[1]: Reloading finished in 207 ms. Mar 20 21:24:09.832333 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 20 21:24:09.833875 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 20 21:24:09.848976 systemd[1]: Starting ensure-sysext.service... Mar 20 21:24:09.850794 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 20 21:24:09.866336 systemd[1]: Reload requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... Mar 20 21:24:09.866352 systemd[1]: Reloading... Mar 20 21:24:09.881728 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 20 21:24:09.882519 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 20 21:24:09.883412 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 20 21:24:09.883785 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Mar 20 21:24:09.883996 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Mar 20 21:24:09.886813 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Mar 20 21:24:09.886911 systemd-tmpfiles[1260]: Skipping /boot Mar 20 21:24:09.898423 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Mar 20 21:24:09.898533 systemd-tmpfiles[1260]: Skipping /boot Mar 20 21:24:09.909929 zram_generator::config[1286]: No configuration found. Mar 20 21:24:10.008119 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 21:24:10.057409 systemd[1]: Reloading finished in 190 ms. Mar 20 21:24:10.068189 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 20 21:24:10.074274 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 20 21:24:10.084610 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 20 21:24:10.086931 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 20 21:24:10.095440 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 20 21:24:10.098635 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 20 21:24:10.104848 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 20 21:24:10.108015 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 20 21:24:10.116360 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 20 21:24:10.123628 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Mar 20 21:24:10.126232 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 20 21:24:10.131543 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 20 21:24:10.132711 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 20 21:24:10.132872 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 20 21:24:10.134682 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 20 21:24:10.137755 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 20 21:24:10.140248 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 20 21:24:10.141632 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 20 21:24:10.143609 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 20 21:24:10.144162 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 20 21:24:10.151117 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 20 21:24:10.151281 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 20 21:24:10.156166 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 20 21:24:10.157601 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 20 21:24:10.161444 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 20 21:24:10.163871 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 20 21:24:10.164061 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 20 21:24:10.164189 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 20 21:24:10.166797 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 20 21:24:10.180102 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 20 21:24:10.180361 augenrules[1361]: No rules Mar 20 21:24:10.182255 systemd[1]: audit-rules.service: Deactivated successfully. Mar 20 21:24:10.182439 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 20 21:24:10.183830 systemd-udevd[1330]: Using default interface naming scheme 'v255'. Mar 20 21:24:10.187258 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 20 21:24:10.189381 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 20 21:24:10.189554 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 20 21:24:10.191229 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 20 21:24:10.192860 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 20 21:24:10.193067 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 20 21:24:10.203890 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Mar 20 21:24:10.205273 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 20 21:24:10.218012 systemd[1]: Finished ensure-sysext.service. Mar 20 21:24:10.224559 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 20 21:24:10.225659 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 20 21:24:10.228917 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 20 21:24:10.230843 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 20 21:24:10.233968 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 20 21:24:10.240948 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 20 21:24:10.242224 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 20 21:24:10.242266 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 20 21:24:10.244946 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 20 21:24:10.249453 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 20 21:24:10.251811 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 20 21:24:10.252427 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 20 21:24:10.252608 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 20 21:24:10.254114 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 20 21:24:10.254284 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 20 21:24:10.262038 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 20 21:24:10.269148 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1386) Mar 20 21:24:10.265049 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 20 21:24:10.265206 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 20 21:24:10.267328 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 20 21:24:10.267472 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 20 21:24:10.269862 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 20 21:24:10.274578 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Mar 20 21:24:10.282616 systemd-resolved[1329]: Positive Trust Anchors: Mar 20 21:24:10.282633 systemd-resolved[1329]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 20 21:24:10.282686 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 20 21:24:10.289583 augenrules[1397]: /sbin/augenrules: No change Mar 20 21:24:10.290328 systemd-resolved[1329]: Defaulting to hostname 'linux'. Mar 20 21:24:10.295975 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 20 21:24:10.297388 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 20 21:24:10.306512 augenrules[1426]: No rules Mar 20 21:24:10.307940 systemd[1]: audit-rules.service: Deactivated successfully. Mar 20 21:24:10.308267 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 20 21:24:10.343036 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 20 21:24:10.344515 systemd[1]: Reached target time-set.target - System Time Set. Mar 20 21:24:10.353477 systemd-networkd[1402]: lo: Link UP Mar 20 21:24:10.353493 systemd-networkd[1402]: lo: Gained carrier Mar 20 21:24:10.354836 systemd-networkd[1402]: Enumeration completed Mar 20 21:24:10.355338 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 20 21:24:10.357356 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 20 21:24:10.358993 systemd[1]: Reached target network.target - Network. Mar 20 21:24:10.360869 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 21:24:10.360880 systemd-networkd[1402]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 20 21:24:10.362125 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 20 21:24:10.365185 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 21:24:10.365214 systemd-networkd[1402]: eth0: Link UP Mar 20 21:24:10.365216 systemd-networkd[1402]: eth0: Gained carrier Mar 20 21:24:10.365225 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 21:24:10.366107 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 20 21:24:10.369909 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 20 21:24:10.377765 systemd-networkd[1402]: eth0: DHCPv4 address 10.0.0.95/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 20 21:24:10.378775 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection. Mar 20 21:24:10.383187 systemd-timesyncd[1403]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 20 21:24:10.383290 systemd-timesyncd[1403]: Initial clock synchronization to Thu 2025-03-20 21:24:10.408656 UTC. 
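The negative trust anchors that systemd-resolved lists are essentially the private-use and special-use zones (10.in-addr.arpa, 168.192.in-addr.arpa, home.arpa, and so on) for which a missing DNSSEC chain is expected rather than treated as a validation failure. The DHCP address acquired just above, 10.0.0.95/16, falls inside one of them:

    import ipaddress

    # The address systemd-networkd obtained via DHCP in the log above.
    addr = ipaddress.ip_address("10.0.0.95")

    # Its reverse-lookup name sits under 10.in-addr.arpa, one of the zones
    # listed as a negative trust anchor, so resolved does not insist on a
    # DNSSEC chain of trust for it.
    print(addr.reverse_pointer)   # 95.0.0.10.in-addr.arpa
    print(addr.is_private)        # True (RFC 1918 space)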
Mar 20 21:24:10.393912 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 20 21:24:10.395855 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 20 21:24:10.415968 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 20 21:24:10.430076 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 20 21:24:10.433397 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 20 21:24:10.460263 lvm[1450]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 20 21:24:10.460398 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 21:24:10.489184 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 20 21:24:10.490783 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 20 21:24:10.491903 systemd[1]: Reached target sysinit.target - System Initialization. Mar 20 21:24:10.493065 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 20 21:24:10.494319 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 20 21:24:10.495734 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 20 21:24:10.496858 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 20 21:24:10.498205 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 20 21:24:10.499415 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 20 21:24:10.499451 systemd[1]: Reached target paths.target - Path Units. Mar 20 21:24:10.500364 systemd[1]: Reached target timers.target - Timer Units. Mar 20 21:24:10.502178 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 20 21:24:10.504573 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 20 21:24:10.507621 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 20 21:24:10.509025 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 20 21:24:10.510255 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 20 21:24:10.514431 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 20 21:24:10.515828 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 20 21:24:10.518020 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 20 21:24:10.519544 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 20 21:24:10.520727 systemd[1]: Reached target sockets.target - Socket Units. Mar 20 21:24:10.521632 systemd[1]: Reached target basic.target - Basic System. Mar 20 21:24:10.522610 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 20 21:24:10.522640 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 20 21:24:10.523482 systemd[1]: Starting containerd.service - containerd container runtime... Mar 20 21:24:10.525247 lvm[1458]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Mar 20 21:24:10.527833 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 20 21:24:10.537586 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 20 21:24:10.539553 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 20 21:24:10.540560 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 20 21:24:10.541512 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 20 21:24:10.544684 jq[1461]: false Mar 20 21:24:10.543437 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 20 21:24:10.545272 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 20 21:24:10.547903 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 20 21:24:10.554921 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 20 21:24:10.556767 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 20 21:24:10.557216 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 20 21:24:10.557892 systemd[1]: Starting update-engine.service - Update Engine... Mar 20 21:24:10.559852 extend-filesystems[1462]: Found loop3 Mar 20 21:24:10.559852 extend-filesystems[1462]: Found loop4 Mar 20 21:24:10.559852 extend-filesystems[1462]: Found loop5 Mar 20 21:24:10.559852 extend-filesystems[1462]: Found vda Mar 20 21:24:10.559852 extend-filesystems[1462]: Found vda1 Mar 20 21:24:10.559852 extend-filesystems[1462]: Found vda2 Mar 20 21:24:10.559852 extend-filesystems[1462]: Found vda3 Mar 20 21:24:10.559852 extend-filesystems[1462]: Found usr Mar 20 21:24:10.559852 extend-filesystems[1462]: Found vda4 Mar 20 21:24:10.559852 extend-filesystems[1462]: Found vda6 Mar 20 21:24:10.559852 extend-filesystems[1462]: Found vda7 Mar 20 21:24:10.559852 extend-filesystems[1462]: Found vda9 Mar 20 21:24:10.559852 extend-filesystems[1462]: Checking size of /dev/vda9 Mar 20 21:24:10.571735 dbus-daemon[1460]: [system] SELinux support is enabled Mar 20 21:24:10.561094 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 20 21:24:10.589086 extend-filesystems[1462]: Resized partition /dev/vda9 Mar 20 21:24:10.594533 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 20 21:24:10.563115 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 20 21:24:10.595021 extend-filesystems[1485]: resize2fs 1.47.2 (1-Jan-2025) Mar 20 21:24:10.569204 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 20 21:24:10.596370 jq[1474]: true Mar 20 21:24:10.569419 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 20 21:24:10.574872 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 20 21:24:10.588270 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 20 21:24:10.588561 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 20 21:24:10.592273 systemd[1]: motdgen.service: Deactivated successfully. Mar 20 21:24:10.592435 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
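Before resizing, extend-filesystems enumerates the loop and vda block devices and then checks the size of /dev/vda9. A toy equivalent of that scan, using lsblk's JSON output rather than whatever the script does internally:

    import json
    import subprocess

    # Toy equivalent of the device scan: ask lsblk for the block devices and
    # print the partitions the way extend-filesystems lists vda1..vda9 above.
    out = subprocess.run(["lsblk", "-J", "-o", "NAME,FSTYPE,MOUNTPOINT"],
                         capture_output=True, text=True, check=True).stdout
    for dev in json.loads(out)["blockdevices"]:
        for part in dev.get("children", []):
            print("Found", part["name"])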
Mar 20 21:24:10.613294 update_engine[1471]: I20250320 21:24:10.611203 1471 main.cc:92] Flatcar Update Engine starting Mar 20 21:24:10.619890 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1396) Mar 20 21:24:10.619935 update_engine[1471]: I20250320 21:24:10.614640 1471 update_check_scheduler.cc:74] Next update check in 9m47s Mar 20 21:24:10.620333 (ntainerd)[1488]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 20 21:24:10.623564 jq[1487]: true Mar 20 21:24:10.623727 tar[1481]: linux-arm64/helm Mar 20 21:24:10.627578 systemd-logind[1468]: Watching system buttons on /dev/input/event0 (Power Button) Mar 20 21:24:10.629040 systemd-logind[1468]: New seat seat0. Mar 20 21:24:10.630860 systemd[1]: Started systemd-logind.service - User Login Management. Mar 20 21:24:10.643207 systemd[1]: Started update-engine.service - Update Engine. Mar 20 21:24:10.645683 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 20 21:24:10.647225 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 20 21:24:10.647475 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 20 21:24:10.649218 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 20 21:24:10.649596 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 20 21:24:10.653592 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 20 21:24:10.659221 extend-filesystems[1485]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 20 21:24:10.659221 extend-filesystems[1485]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 20 21:24:10.659221 extend-filesystems[1485]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 20 21:24:10.668822 extend-filesystems[1462]: Resized filesystem in /dev/vda9 Mar 20 21:24:10.668964 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 20 21:24:10.669166 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 20 21:24:10.683881 bash[1514]: Updated "/home/core/.ssh/authorized_keys" Mar 20 21:24:10.685189 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 20 21:24:10.687357 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
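The resize messages above grow the root ext4 filesystem online from 553472 to 1864699 blocks of 4 KiB, that is from roughly 2.1 GiB to about 7.1 GiB. The arithmetic, for reference:

    BLOCK = 4096                      # 4 KiB blocks, as reported by resize2fs
    before, after = 553472, 1864699   # block counts from the log

    gib = 1024 ** 3
    print(f"before: {before * BLOCK / gib:.2f} GiB")            # ~2.11 GiB
    print(f"after:  {after * BLOCK / gib:.2f} GiB")             # ~7.11 GiB
    print(f"growth: {(after - before) * BLOCK / gib:.2f} GiB")  # ~5.00 GiB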
Mar 20 21:24:10.722078 locksmithd[1515]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 20 21:24:10.842982 containerd[1488]: time="2025-03-20T21:24:10Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 20 21:24:10.844803 containerd[1488]: time="2025-03-20T21:24:10.843956160Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 Mar 20 21:24:10.853067 containerd[1488]: time="2025-03-20T21:24:10.853032080Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.16µs" Mar 20 21:24:10.853067 containerd[1488]: time="2025-03-20T21:24:10.853063600Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 20 21:24:10.853167 containerd[1488]: time="2025-03-20T21:24:10.853082120Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 20 21:24:10.853242 containerd[1488]: time="2025-03-20T21:24:10.853215160Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 20 21:24:10.853277 containerd[1488]: time="2025-03-20T21:24:10.853242600Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 20 21:24:10.853277 containerd[1488]: time="2025-03-20T21:24:10.853267560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 20 21:24:10.853332 containerd[1488]: time="2025-03-20T21:24:10.853315640Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 20 21:24:10.853356 containerd[1488]: time="2025-03-20T21:24:10.853333240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 20 21:24:10.853611 containerd[1488]: time="2025-03-20T21:24:10.853589840Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 20 21:24:10.853647 containerd[1488]: time="2025-03-20T21:24:10.853611560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 20 21:24:10.853647 containerd[1488]: time="2025-03-20T21:24:10.853622920Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 20 21:24:10.853647 containerd[1488]: time="2025-03-20T21:24:10.853631200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 20 21:24:10.853789 containerd[1488]: time="2025-03-20T21:24:10.853753440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 20 21:24:10.853980 containerd[1488]: time="2025-03-20T21:24:10.853957840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 20 21:24:10.854011 containerd[1488]: time="2025-03-20T21:24:10.853995960Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 20 21:24:10.854011 containerd[1488]: time="2025-03-20T21:24:10.854008000Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 20 21:24:10.854062 containerd[1488]: time="2025-03-20T21:24:10.854035600Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 20 21:24:10.854321 containerd[1488]: time="2025-03-20T21:24:10.854275520Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 20 21:24:10.854361 containerd[1488]: time="2025-03-20T21:24:10.854341840Z" level=info msg="metadata content store policy set" policy=shared Mar 20 21:24:10.858171 containerd[1488]: time="2025-03-20T21:24:10.858138880Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 20 21:24:10.858231 containerd[1488]: time="2025-03-20T21:24:10.858188000Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 20 21:24:10.858231 containerd[1488]: time="2025-03-20T21:24:10.858202080Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 20 21:24:10.858231 containerd[1488]: time="2025-03-20T21:24:10.858213400Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 20 21:24:10.858231 containerd[1488]: time="2025-03-20T21:24:10.858224720Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 20 21:24:10.858294 containerd[1488]: time="2025-03-20T21:24:10.858234800Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 20 21:24:10.858294 containerd[1488]: time="2025-03-20T21:24:10.858246160Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 20 21:24:10.858294 containerd[1488]: time="2025-03-20T21:24:10.858257840Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 20 21:24:10.858294 containerd[1488]: time="2025-03-20T21:24:10.858267640Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 20 21:24:10.858294 containerd[1488]: time="2025-03-20T21:24:10.858278080Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 20 21:24:10.858294 containerd[1488]: time="2025-03-20T21:24:10.858286840Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 20 21:24:10.858294 containerd[1488]: time="2025-03-20T21:24:10.858297560Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 20 21:24:10.858424 containerd[1488]: time="2025-03-20T21:24:10.858405600Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 20 21:24:10.858446 containerd[1488]: time="2025-03-20T21:24:10.858424240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 20 21:24:10.858446 containerd[1488]: time="2025-03-20T21:24:10.858436560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 20 
21:24:10.858478 containerd[1488]: time="2025-03-20T21:24:10.858446440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 20 21:24:10.858478 containerd[1488]: time="2025-03-20T21:24:10.858464440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 20 21:24:10.858478 containerd[1488]: time="2025-03-20T21:24:10.858474880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 20 21:24:10.858527 containerd[1488]: time="2025-03-20T21:24:10.858486680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 20 21:24:10.858527 containerd[1488]: time="2025-03-20T21:24:10.858496360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 20 21:24:10.858527 containerd[1488]: time="2025-03-20T21:24:10.858507520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 20 21:24:10.858527 containerd[1488]: time="2025-03-20T21:24:10.858524360Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 20 21:24:10.858596 containerd[1488]: time="2025-03-20T21:24:10.858538120Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 20 21:24:10.858896 containerd[1488]: time="2025-03-20T21:24:10.858854200Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 20 21:24:10.858896 containerd[1488]: time="2025-03-20T21:24:10.858876240Z" level=info msg="Start snapshots syncer" Mar 20 21:24:10.858979 containerd[1488]: time="2025-03-20T21:24:10.858899640Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 20 21:24:10.859253 containerd[1488]: time="2025-03-20T21:24:10.859216720Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 20 21:24:10.859355 containerd[1488]: time="2025-03-20T21:24:10.859269000Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 20 21:24:10.859355 containerd[1488]: time="2025-03-20T21:24:10.859346800Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 20 21:24:10.859508 containerd[1488]: time="2025-03-20T21:24:10.859486520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 20 21:24:10.859542 containerd[1488]: time="2025-03-20T21:24:10.859517040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 20 21:24:10.859542 containerd[1488]: time="2025-03-20T21:24:10.859529960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 20 21:24:10.859542 containerd[1488]: time="2025-03-20T21:24:10.859539560Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 20 21:24:10.859598 containerd[1488]: time="2025-03-20T21:24:10.859551120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 20 21:24:10.859598 containerd[1488]: time="2025-03-20T21:24:10.859561280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 20 21:24:10.859598 containerd[1488]: time="2025-03-20T21:24:10.859571520Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 20 21:24:10.859598 containerd[1488]: time="2025-03-20T21:24:10.859594280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 20 21:24:10.859693 containerd[1488]: 
time="2025-03-20T21:24:10.859611240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 20 21:24:10.859693 containerd[1488]: time="2025-03-20T21:24:10.859621080Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 20 21:24:10.859693 containerd[1488]: time="2025-03-20T21:24:10.859654520Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 20 21:24:10.859756 containerd[1488]: time="2025-03-20T21:24:10.859722800Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 20 21:24:10.859756 containerd[1488]: time="2025-03-20T21:24:10.859735600Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 20 21:24:10.859756 containerd[1488]: time="2025-03-20T21:24:10.859745240Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 20 21:24:10.859756 containerd[1488]: time="2025-03-20T21:24:10.859753560Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 20 21:24:10.859825 containerd[1488]: time="2025-03-20T21:24:10.859763000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 20 21:24:10.859825 containerd[1488]: time="2025-03-20T21:24:10.859773000Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 20 21:24:10.859892 containerd[1488]: time="2025-03-20T21:24:10.859849160Z" level=info msg="runtime interface created" Mar 20 21:24:10.859892 containerd[1488]: time="2025-03-20T21:24:10.859855280Z" level=info msg="created NRI interface" Mar 20 21:24:10.859892 containerd[1488]: time="2025-03-20T21:24:10.859869160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 20 21:24:10.859892 containerd[1488]: time="2025-03-20T21:24:10.859880520Z" level=info msg="Connect containerd service" Mar 20 21:24:10.859965 containerd[1488]: time="2025-03-20T21:24:10.859905720Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 20 21:24:10.860632 containerd[1488]: time="2025-03-20T21:24:10.860605320Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 20 21:24:10.962774 containerd[1488]: time="2025-03-20T21:24:10.962712880Z" level=info msg="Start subscribing containerd event" Mar 20 21:24:10.964969 containerd[1488]: time="2025-03-20T21:24:10.962989080Z" level=info msg="Start recovering state" Mar 20 21:24:10.964969 containerd[1488]: time="2025-03-20T21:24:10.963095800Z" level=info msg="Start event monitor" Mar 20 21:24:10.964969 containerd[1488]: time="2025-03-20T21:24:10.963111480Z" level=info msg="Start cni network conf syncer for default" Mar 20 21:24:10.964969 containerd[1488]: time="2025-03-20T21:24:10.963122680Z" level=info msg="Start streaming server" Mar 20 21:24:10.964969 containerd[1488]: time="2025-03-20T21:24:10.963132280Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 20 21:24:10.964969 containerd[1488]: 
time="2025-03-20T21:24:10.963138920Z" level=info msg="runtime interface starting up..." Mar 20 21:24:10.964969 containerd[1488]: time="2025-03-20T21:24:10.963144440Z" level=info msg="starting plugins..." Mar 20 21:24:10.964969 containerd[1488]: time="2025-03-20T21:24:10.963150000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 20 21:24:10.964969 containerd[1488]: time="2025-03-20T21:24:10.963208240Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 20 21:24:10.964969 containerd[1488]: time="2025-03-20T21:24:10.963158480Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 20 21:24:10.964969 containerd[1488]: time="2025-03-20T21:24:10.964560920Z" level=info msg="containerd successfully booted in 0.121969s" Mar 20 21:24:10.963433 systemd[1]: Started containerd.service - containerd container runtime. Mar 20 21:24:10.993909 tar[1481]: linux-arm64/LICENSE Mar 20 21:24:10.994094 tar[1481]: linux-arm64/README.md Mar 20 21:24:11.007122 sshd_keygen[1483]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 20 21:24:11.009876 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 20 21:24:11.025248 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 20 21:24:11.027792 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 20 21:24:11.041696 systemd[1]: issuegen.service: Deactivated successfully. Mar 20 21:24:11.043707 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 20 21:24:11.046072 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 20 21:24:11.076283 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 20 21:24:11.078840 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 20 21:24:11.080771 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 20 21:24:11.082016 systemd[1]: Reached target getty.target - Login Prompts. Mar 20 21:24:11.408825 systemd-networkd[1402]: eth0: Gained IPv6LL Mar 20 21:24:11.411704 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 20 21:24:11.413657 systemd[1]: Reached target network-online.target - Network is Online. Mar 20 21:24:11.415884 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 20 21:24:11.418024 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:24:11.427487 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 20 21:24:11.442112 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 20 21:24:11.442354 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 20 21:24:11.443958 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 20 21:24:11.448046 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 20 21:24:11.907084 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:24:11.908596 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 20 21:24:11.910587 (kubelet)[1586]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 20 21:24:11.912085 systemd[1]: Startup finished in 542ms (kernel) + 5.068s (initrd) + 3.180s (userspace) = 8.791s. 
Mar 20 21:24:12.372123 kubelet[1586]: E0320 21:24:12.372029 1586 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 20 21:24:12.374636 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 20 21:24:12.374799 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 20 21:24:12.375243 systemd[1]: kubelet.service: Consumed 809ms CPU time, 242.8M memory peak. Mar 20 21:24:16.648941 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 20 21:24:16.650205 systemd[1]: Started sshd@0-10.0.0.95:22-10.0.0.1:58564.service - OpenSSH per-connection server daemon (10.0.0.1:58564). Mar 20 21:24:16.730348 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 58564 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:24:16.732151 sshd-session[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:24:16.745364 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 20 21:24:16.746289 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 20 21:24:16.751465 systemd-logind[1468]: New session 1 of user core. Mar 20 21:24:16.769709 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 20 21:24:16.772142 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 20 21:24:16.786648 (systemd)[1605]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 20 21:24:16.788861 systemd-logind[1468]: New session c1 of user core. Mar 20 21:24:16.890099 systemd[1605]: Queued start job for default target default.target. Mar 20 21:24:16.901554 systemd[1605]: Created slice app.slice - User Application Slice. Mar 20 21:24:16.901582 systemd[1605]: Reached target paths.target - Paths. Mar 20 21:24:16.901618 systemd[1605]: Reached target timers.target - Timers. Mar 20 21:24:16.902799 systemd[1605]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 20 21:24:16.911292 systemd[1605]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 20 21:24:16.911349 systemd[1605]: Reached target sockets.target - Sockets. Mar 20 21:24:16.911383 systemd[1605]: Reached target basic.target - Basic System. Mar 20 21:24:16.911410 systemd[1605]: Reached target default.target - Main User Target. Mar 20 21:24:16.911432 systemd[1605]: Startup finished in 117ms. Mar 20 21:24:16.911714 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 20 21:24:16.913740 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 20 21:24:16.978028 systemd[1]: Started sshd@1-10.0.0.95:22-10.0.0.1:58566.service - OpenSSH per-connection server daemon (10.0.0.1:58566). Mar 20 21:24:17.022092 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 58566 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:24:17.023155 sshd-session[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:24:17.027378 systemd-logind[1468]: New session 2 of user core. Mar 20 21:24:17.042871 systemd[1]: Started session-2.scope - Session 2 of User core. 
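The kubelet failure above is the expected first-boot state: /var/lib/kubelet/config.yaml does not exist until a bootstrapper (typically kubeadm) writes it. As a minimal, hedged sketch of what such a file can look like (the field values below are illustrative assumptions, not read from this host):

# /var/lib/kubelet/config.yaml -- illustrative sketch only; this file is normally
# generated by kubeadm, and the values here are assumptions, not taken from this log.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                        # consistent with SystemdCgroup=true in the CRI runc options dumped above
staticPodPath: /etc/kubernetes/manifests     # the "static pod path" the kubelet logs once it starts successfully
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt # matches the client-ca-bundle path logged by the kubelet later on

Once a file like this is in place, the scheduled restarts of kubelet.service seen later in the log are enough for the kubelet to come up without manual intervention.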
Mar 20 21:24:17.093540 sshd[1618]: Connection closed by 10.0.0.1 port 58566 Mar 20 21:24:17.093434 sshd-session[1616]: pam_unix(sshd:session): session closed for user core Mar 20 21:24:17.107794 systemd[1]: sshd@1-10.0.0.95:22-10.0.0.1:58566.service: Deactivated successfully. Mar 20 21:24:17.109078 systemd[1]: session-2.scope: Deactivated successfully. Mar 20 21:24:17.109809 systemd-logind[1468]: Session 2 logged out. Waiting for processes to exit. Mar 20 21:24:17.111316 systemd[1]: Started sshd@2-10.0.0.95:22-10.0.0.1:58570.service - OpenSSH per-connection server daemon (10.0.0.1:58570). Mar 20 21:24:17.112094 systemd-logind[1468]: Removed session 2. Mar 20 21:24:17.158185 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 58570 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:24:17.159221 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:24:17.163142 systemd-logind[1468]: New session 3 of user core. Mar 20 21:24:17.174862 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 20 21:24:17.221593 sshd[1626]: Connection closed by 10.0.0.1 port 58570 Mar 20 21:24:17.221860 sshd-session[1623]: pam_unix(sshd:session): session closed for user core Mar 20 21:24:17.232736 systemd[1]: sshd@2-10.0.0.95:22-10.0.0.1:58570.service: Deactivated successfully. Mar 20 21:24:17.234108 systemd[1]: session-3.scope: Deactivated successfully. Mar 20 21:24:17.235334 systemd-logind[1468]: Session 3 logged out. Waiting for processes to exit. Mar 20 21:24:17.236374 systemd[1]: Started sshd@3-10.0.0.95:22-10.0.0.1:58580.service - OpenSSH per-connection server daemon (10.0.0.1:58580). Mar 20 21:24:17.237171 systemd-logind[1468]: Removed session 3. Mar 20 21:24:17.284475 sshd[1631]: Accepted publickey for core from 10.0.0.1 port 58580 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:24:17.285497 sshd-session[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:24:17.288939 systemd-logind[1468]: New session 4 of user core. Mar 20 21:24:17.296810 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 20 21:24:17.346711 sshd[1634]: Connection closed by 10.0.0.1 port 58580 Mar 20 21:24:17.346662 sshd-session[1631]: pam_unix(sshd:session): session closed for user core Mar 20 21:24:17.359629 systemd[1]: sshd@3-10.0.0.95:22-10.0.0.1:58580.service: Deactivated successfully. Mar 20 21:24:17.361074 systemd[1]: session-4.scope: Deactivated successfully. Mar 20 21:24:17.362263 systemd-logind[1468]: Session 4 logged out. Waiting for processes to exit. Mar 20 21:24:17.363325 systemd[1]: Started sshd@4-10.0.0.95:22-10.0.0.1:58590.service - OpenSSH per-connection server daemon (10.0.0.1:58590). Mar 20 21:24:17.363971 systemd-logind[1468]: Removed session 4. Mar 20 21:24:17.410828 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 58590 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:24:17.411761 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:24:17.415335 systemd-logind[1468]: New session 5 of user core. Mar 20 21:24:17.421797 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 20 21:24:17.480412 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 20 21:24:17.482519 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 21:24:17.496487 sudo[1643]: pam_unix(sudo:session): session closed for user root Mar 20 21:24:17.497789 sshd[1642]: Connection closed by 10.0.0.1 port 58590 Mar 20 21:24:17.498078 sshd-session[1639]: pam_unix(sshd:session): session closed for user core Mar 20 21:24:17.509572 systemd[1]: sshd@4-10.0.0.95:22-10.0.0.1:58590.service: Deactivated successfully. Mar 20 21:24:17.511069 systemd[1]: session-5.scope: Deactivated successfully. Mar 20 21:24:17.512377 systemd-logind[1468]: Session 5 logged out. Waiting for processes to exit. Mar 20 21:24:17.513537 systemd[1]: Started sshd@5-10.0.0.95:22-10.0.0.1:58592.service - OpenSSH per-connection server daemon (10.0.0.1:58592). Mar 20 21:24:17.514329 systemd-logind[1468]: Removed session 5. Mar 20 21:24:17.560110 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 58592 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:24:17.561159 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:24:17.565054 systemd-logind[1468]: New session 6 of user core. Mar 20 21:24:17.575866 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 20 21:24:17.625604 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 20 21:24:17.626131 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 21:24:17.628854 sudo[1653]: pam_unix(sudo:session): session closed for user root Mar 20 21:24:17.633006 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 20 21:24:17.633259 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 21:24:17.640483 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 20 21:24:17.671852 augenrules[1675]: No rules Mar 20 21:24:17.672751 systemd[1]: audit-rules.service: Deactivated successfully. Mar 20 21:24:17.672952 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 20 21:24:17.674223 sudo[1652]: pam_unix(sudo:session): session closed for user root Mar 20 21:24:17.677630 sshd[1651]: Connection closed by 10.0.0.1 port 58592 Mar 20 21:24:17.677522 sshd-session[1648]: pam_unix(sshd:session): session closed for user core Mar 20 21:24:17.689551 systemd[1]: sshd@5-10.0.0.95:22-10.0.0.1:58592.service: Deactivated successfully. Mar 20 21:24:17.690797 systemd[1]: session-6.scope: Deactivated successfully. Mar 20 21:24:17.691473 systemd-logind[1468]: Session 6 logged out. Waiting for processes to exit. Mar 20 21:24:17.693057 systemd[1]: Started sshd@6-10.0.0.95:22-10.0.0.1:58602.service - OpenSSH per-connection server daemon (10.0.0.1:58602). Mar 20 21:24:17.693754 systemd-logind[1468]: Removed session 6. Mar 20 21:24:17.738029 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 58602 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:24:17.739041 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:24:17.742732 systemd-logind[1468]: New session 7 of user core. Mar 20 21:24:17.753885 systemd[1]: Started session-7.scope - Session 7 of User core. 
Mar 20 21:24:17.802640 sudo[1687]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 20 21:24:17.802943 sudo[1687]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 20 21:24:18.139959 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 20 21:24:18.164008 (dockerd)[1707]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 20 21:24:18.411582 dockerd[1707]: time="2025-03-20T21:24:18.411453838Z" level=info msg="Starting up" Mar 20 21:24:18.413967 dockerd[1707]: time="2025-03-20T21:24:18.413750425Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 20 21:24:18.512357 dockerd[1707]: time="2025-03-20T21:24:18.512310588Z" level=info msg="Loading containers: start." Mar 20 21:24:18.642703 kernel: Initializing XFRM netlink socket Mar 20 21:24:18.705029 systemd-networkd[1402]: docker0: Link UP Mar 20 21:24:18.777855 dockerd[1707]: time="2025-03-20T21:24:18.777808915Z" level=info msg="Loading containers: done." Mar 20 21:24:18.791556 dockerd[1707]: time="2025-03-20T21:24:18.791505465Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 20 21:24:18.791710 dockerd[1707]: time="2025-03-20T21:24:18.791591880Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 Mar 20 21:24:18.791825 dockerd[1707]: time="2025-03-20T21:24:18.791791487Z" level=info msg="Daemon has completed initialization" Mar 20 21:24:18.819114 dockerd[1707]: time="2025-03-20T21:24:18.819057305Z" level=info msg="API listen on /run/docker.sock" Mar 20 21:24:18.819211 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 20 21:24:20.121948 containerd[1488]: time="2025-03-20T21:24:20.121881940Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 20 21:24:20.744063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2149289585.mount: Deactivated successfully. 
Mar 20 21:24:22.287749 containerd[1488]: time="2025-03-20T21:24:22.287691957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:24:22.288245 containerd[1488]: time="2025-03-20T21:24:22.288196641Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=29793526" Mar 20 21:24:22.288859 containerd[1488]: time="2025-03-20T21:24:22.288807905Z" level=info msg="ImageCreate event name:\"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:24:22.291322 containerd[1488]: time="2025-03-20T21:24:22.291291983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:24:22.292885 containerd[1488]: time="2025-03-20T21:24:22.292719026Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"29790324\" in 2.170796382s" Mar 20 21:24:22.292885 containerd[1488]: time="2025-03-20T21:24:22.292759569Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\"" Mar 20 21:24:22.307517 containerd[1488]: time="2025-03-20T21:24:22.307483253Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 20 21:24:22.625247 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 20 21:24:22.626644 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:24:22.735466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:24:22.738470 (kubelet)[1992]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 20 21:24:22.773223 kubelet[1992]: E0320 21:24:22.773176 1992 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 20 21:24:22.776390 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 20 21:24:22.776534 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 20 21:24:22.776858 systemd[1]: kubelet.service: Consumed 130ms CPU time, 97.1M memory peak. 
Mar 20 21:24:24.463224 containerd[1488]: time="2025-03-20T21:24:24.463176772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:24:24.464192 containerd[1488]: time="2025-03-20T21:24:24.463945938Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=26861169" Mar 20 21:24:24.464957 containerd[1488]: time="2025-03-20T21:24:24.464782940Z" level=info msg="ImageCreate event name:\"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:24:24.467253 containerd[1488]: time="2025-03-20T21:24:24.467200737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:24:24.468190 containerd[1488]: time="2025-03-20T21:24:24.468152320Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"28301963\" in 2.160632006s" Mar 20 21:24:24.468240 containerd[1488]: time="2025-03-20T21:24:24.468192661Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\"" Mar 20 21:24:24.483847 containerd[1488]: time="2025-03-20T21:24:24.483760562Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 20 21:24:26.210057 containerd[1488]: time="2025-03-20T21:24:26.209852191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:24:26.210914 containerd[1488]: time="2025-03-20T21:24:26.210679441Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=16264638" Mar 20 21:24:26.211850 containerd[1488]: time="2025-03-20T21:24:26.211812162Z" level=info msg="ImageCreate event name:\"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:24:26.215253 containerd[1488]: time="2025-03-20T21:24:26.215219611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:24:26.216261 containerd[1488]: time="2025-03-20T21:24:26.216211943Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"17705450\" in 1.73241072s" Mar 20 21:24:26.216261 containerd[1488]: time="2025-03-20T21:24:26.216249602Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\"" Mar 20 21:24:26.231056 
containerd[1488]: time="2025-03-20T21:24:26.231020842Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 20 21:24:27.377975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1332512375.mount: Deactivated successfully. Mar 20 21:24:27.603930 containerd[1488]: time="2025-03-20T21:24:27.603780859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:24:27.604662 containerd[1488]: time="2025-03-20T21:24:27.604448059Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=25771850" Mar 20 21:24:27.605316 containerd[1488]: time="2025-03-20T21:24:27.605259409Z" level=info msg="ImageCreate event name:\"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:24:27.606894 containerd[1488]: time="2025-03-20T21:24:27.606865500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:24:27.607586 containerd[1488]: time="2025-03-20T21:24:27.607554831Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"25770867\" in 1.376499211s" Mar 20 21:24:27.607662 containerd[1488]: time="2025-03-20T21:24:27.607588487Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\"" Mar 20 21:24:27.622501 containerd[1488]: time="2025-03-20T21:24:27.622459107Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 20 21:24:28.206041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2228774462.mount: Deactivated successfully. 
Mar 20 21:24:29.212012 containerd[1488]: time="2025-03-20T21:24:29.211946405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:24:29.212716 containerd[1488]: time="2025-03-20T21:24:29.212642719Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Mar 20 21:24:29.213310 containerd[1488]: time="2025-03-20T21:24:29.213273283Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:24:29.216258 containerd[1488]: time="2025-03-20T21:24:29.216225213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:24:29.217634 containerd[1488]: time="2025-03-20T21:24:29.217601633Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.595099265s" Mar 20 21:24:29.217684 containerd[1488]: time="2025-03-20T21:24:29.217636209Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 20 21:24:29.232338 containerd[1488]: time="2025-03-20T21:24:29.232306339Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 20 21:24:29.644897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3571272337.mount: Deactivated successfully. 
Mar 20 21:24:29.648696 containerd[1488]: time="2025-03-20T21:24:29.648537214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:24:29.649994 containerd[1488]: time="2025-03-20T21:24:29.649942487Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Mar 20 21:24:29.650603 containerd[1488]: time="2025-03-20T21:24:29.650552402Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:24:29.653002 containerd[1488]: time="2025-03-20T21:24:29.652960847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:24:29.653765 containerd[1488]: time="2025-03-20T21:24:29.653731514Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 421.385917ms" Mar 20 21:24:29.653815 containerd[1488]: time="2025-03-20T21:24:29.653765129Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Mar 20 21:24:29.668221 containerd[1488]: time="2025-03-20T21:24:29.668196232Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 20 21:24:30.142881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2441310177.mount: Deactivated successfully. Mar 20 21:24:32.865026 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 20 21:24:32.867065 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:24:32.984911 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:24:32.994073 (kubelet)[2160]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 20 21:24:33.036848 kubelet[2160]: E0320 21:24:33.036741 2160 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 20 21:24:33.039204 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 20 21:24:33.039399 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 20 21:24:33.039925 systemd[1]: kubelet.service: Consumed 136ms CPU time, 97.8M memory peak. 
Mar 20 21:24:33.272735 containerd[1488]: time="2025-03-20T21:24:33.272601395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:24:33.273568 containerd[1488]: time="2025-03-20T21:24:33.273374262Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Mar 20 21:24:33.274266 containerd[1488]: time="2025-03-20T21:24:33.274216516Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:24:33.276829 containerd[1488]: time="2025-03-20T21:24:33.276800422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:24:33.277981 containerd[1488]: time="2025-03-20T21:24:33.277932071Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.609704025s" Mar 20 21:24:33.277981 containerd[1488]: time="2025-03-20T21:24:33.277964884Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Mar 20 21:24:38.551227 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:24:38.551822 systemd[1]: kubelet.service: Consumed 136ms CPU time, 97.8M memory peak. Mar 20 21:24:38.553919 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:24:38.572841 systemd[1]: Reload requested from client PID 2272 ('systemctl') (unit session-7.scope)... Mar 20 21:24:38.572854 systemd[1]: Reloading... Mar 20 21:24:38.647712 zram_generator::config[2320]: No configuration found. Mar 20 21:24:38.755424 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 21:24:38.828932 systemd[1]: Reloading finished in 255 ms. Mar 20 21:24:38.884372 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:24:38.886719 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:24:38.888177 systemd[1]: kubelet.service: Deactivated successfully. Mar 20 21:24:38.888386 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:24:38.888424 systemd[1]: kubelet.service: Consumed 84ms CPU time, 82.4M memory peak. Mar 20 21:24:38.891860 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:24:38.995322 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:24:38.998603 (kubelet)[2363]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 20 21:24:39.034490 kubelet[2363]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
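The flag-deprecation warning above (and the two that follow) point back at the same config file: several of these legacy kubelet flags have KubeletConfiguration counterparts. For --container-runtime-endpoint, for example, the config-file equivalent looks roughly like the sketch below; the endpoint value is an assumption based on the containerd socket this log shows, not a dump of this node's actual configuration.

# Illustrative config-file equivalent of --container-runtime-endpoint (kubelet v1.30).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock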
Mar 20 21:24:39.034490 kubelet[2363]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 20 21:24:39.034490 kubelet[2363]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 21:24:39.034857 kubelet[2363]: I0320 21:24:39.034537 2363 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 20 21:24:39.833602 kubelet[2363]: I0320 21:24:39.833536 2363 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 20 21:24:39.833602 kubelet[2363]: I0320 21:24:39.833567 2363 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 20 21:24:39.833807 kubelet[2363]: I0320 21:24:39.833799 2363 server.go:927] "Client rotation is on, will bootstrap in background" Mar 20 21:24:39.870050 kubelet[2363]: E0320 21:24:39.870020 2363 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.95:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.95:6443: connect: connection refused Mar 20 21:24:39.870050 kubelet[2363]: I0320 21:24:39.870021 2363 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 20 21:24:39.877073 kubelet[2363]: I0320 21:24:39.877054 2363 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 20 21:24:39.878263 kubelet[2363]: I0320 21:24:39.878213 2363 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 20 21:24:39.878428 kubelet[2363]: I0320 21:24:39.878255 2363 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 20 21:24:39.878518 kubelet[2363]: I0320 21:24:39.878492 2363 topology_manager.go:138] "Creating topology manager with none policy" Mar 20 21:24:39.878518 kubelet[2363]: I0320 21:24:39.878500 2363 container_manager_linux.go:301] "Creating device plugin manager" Mar 20 21:24:39.878783 kubelet[2363]: I0320 21:24:39.878758 2363 state_mem.go:36] "Initialized new in-memory state store" Mar 20 21:24:39.879824 kubelet[2363]: I0320 21:24:39.879802 2363 kubelet.go:400] "Attempting to sync node with API server" Mar 20 21:24:39.879862 kubelet[2363]: I0320 21:24:39.879830 2363 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 20 21:24:39.880128 kubelet[2363]: I0320 21:24:39.880107 2363 kubelet.go:312] "Adding apiserver pod source" Mar 20 21:24:39.880421 kubelet[2363]: I0320 21:24:39.880184 2363 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 20 21:24:39.880421 kubelet[2363]: W0320 21:24:39.880333 2363 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Mar 20 21:24:39.880421 kubelet[2363]: E0320 21:24:39.880387 2363 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Mar 20 21:24:39.880634 kubelet[2363]: W0320 21:24:39.880579 2363 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: 
Get "https://10.0.0.95:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Mar 20 21:24:39.880634 kubelet[2363]: E0320 21:24:39.880628 2363 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.95:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Mar 20 21:24:39.881063 kubelet[2363]: I0320 21:24:39.881047 2363 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 20 21:24:39.881400 kubelet[2363]: I0320 21:24:39.881388 2363 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 20 21:24:39.881501 kubelet[2363]: W0320 21:24:39.881489 2363 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 20 21:24:39.883027 kubelet[2363]: I0320 21:24:39.882536 2363 server.go:1264] "Started kubelet" Mar 20 21:24:39.883178 kubelet[2363]: I0320 21:24:39.883130 2363 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 20 21:24:39.883976 kubelet[2363]: I0320 21:24:39.883383 2363 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 20 21:24:39.883976 kubelet[2363]: I0320 21:24:39.883688 2363 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 20 21:24:39.888957 kubelet[2363]: I0320 21:24:39.887201 2363 server.go:455] "Adding debug handlers to kubelet server" Mar 20 21:24:39.888957 kubelet[2363]: I0320 21:24:39.887221 2363 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 20 21:24:39.889089 kubelet[2363]: E0320 21:24:39.888915 2363 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.95:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.95:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182e9fdd956d36e1 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-20 21:24:39.882512097 +0000 UTC m=+0.881066371,LastTimestamp:2025-03-20 21:24:39.882512097 +0000 UTC m=+0.881066371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 20 21:24:39.889270 kubelet[2363]: E0320 21:24:39.889253 2363 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 21:24:39.889474 kubelet[2363]: I0320 21:24:39.889463 2363 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 20 21:24:39.889632 kubelet[2363]: I0320 21:24:39.889616 2363 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 20 21:24:39.890592 kubelet[2363]: I0320 21:24:39.890576 2363 reconciler.go:26] "Reconciler: start to sync state" Mar 20 21:24:39.890999 kubelet[2363]: W0320 21:24:39.890956 2363 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Mar 20 
21:24:39.891109 kubelet[2363]: E0320 21:24:39.891095 2363 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Mar 20 21:24:39.892460 kubelet[2363]: E0320 21:24:39.892221 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="200ms" Mar 20 21:24:39.901387 kubelet[2363]: I0320 21:24:39.893341 2363 factory.go:221] Registration of the systemd container factory successfully Mar 20 21:24:39.901387 kubelet[2363]: I0320 21:24:39.893418 2363 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 20 21:24:39.910542 kubelet[2363]: I0320 21:24:39.910417 2363 factory.go:221] Registration of the containerd container factory successfully Mar 20 21:24:39.914161 kubelet[2363]: E0320 21:24:39.913892 2363 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 20 21:24:39.916780 kubelet[2363]: I0320 21:24:39.916743 2363 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 20 21:24:39.917847 kubelet[2363]: I0320 21:24:39.917829 2363 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 20 21:24:39.918741 kubelet[2363]: I0320 21:24:39.918073 2363 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 20 21:24:39.918741 kubelet[2363]: I0320 21:24:39.918109 2363 kubelet.go:2337] "Starting kubelet main sync loop" Mar 20 21:24:39.918741 kubelet[2363]: E0320 21:24:39.918149 2363 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 20 21:24:39.918741 kubelet[2363]: W0320 21:24:39.918641 2363 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Mar 20 21:24:39.918943 kubelet[2363]: E0320 21:24:39.918926 2363 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Mar 20 21:24:39.921849 kubelet[2363]: I0320 21:24:39.921833 2363 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 20 21:24:39.921938 kubelet[2363]: I0320 21:24:39.921926 2363 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 20 21:24:39.922056 kubelet[2363]: I0320 21:24:39.922045 2363 state_mem.go:36] "Initialized new in-memory state store" Mar 20 21:24:39.982266 kubelet[2363]: I0320 21:24:39.982237 2363 policy_none.go:49] "None policy: Start" Mar 20 21:24:39.983224 kubelet[2363]: I0320 21:24:39.983193 2363 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 20 21:24:39.983224 kubelet[2363]: I0320 21:24:39.983226 2363 state_mem.go:35] "Initializing new in-memory state store" Mar 20 21:24:39.989609 systemd[1]: 
Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 20 21:24:39.990862 kubelet[2363]: I0320 21:24:39.990830 2363 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 20 21:24:39.991188 kubelet[2363]: E0320 21:24:39.991165 2363 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Mar 20 21:24:40.003317 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 20 21:24:40.006154 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 20 21:24:40.016504 kubelet[2363]: I0320 21:24:40.016376 2363 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 20 21:24:40.016886 kubelet[2363]: I0320 21:24:40.016568 2363 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 20 21:24:40.016886 kubelet[2363]: I0320 21:24:40.016763 2363 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 20 21:24:40.017794 kubelet[2363]: E0320 21:24:40.017764 2363 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 20 21:24:40.018621 kubelet[2363]: I0320 21:24:40.018496 2363 topology_manager.go:215] "Topology Admit Handler" podUID="c1cfd52fbe2595dd4d91898943982fa5" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 20 21:24:40.019447 kubelet[2363]: I0320 21:24:40.019375 2363 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 20 21:24:40.020332 kubelet[2363]: I0320 21:24:40.020286 2363 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 20 21:24:40.027824 systemd[1]: Created slice kubepods-burstable-podc1cfd52fbe2595dd4d91898943982fa5.slice - libcontainer container kubepods-burstable-podc1cfd52fbe2595dd4d91898943982fa5.slice. Mar 20 21:24:40.035783 systemd[1]: Created slice kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice - libcontainer container kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice. Mar 20 21:24:40.045941 systemd[1]: Created slice kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice - libcontainer container kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice. 
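The three pods admitted above are static pods: the kubelet reads their manifests from its staticPodPath (/etc/kubernetes/manifests, per the "Adding static pod path" line earlier) and starts them before any API server exists, which is also why the connection-refused errors against https://10.0.0.95:6443 are expected at this stage. A heavily trimmed sketch of what such a manifest looks like follows; paths and flags are illustrative assumptions, not read from this host, and a real kubeadm-generated manifest carries many more flags, probes, and mounts.

# /etc/kubernetes/manifests/kube-apiserver.yaml -- trimmed, illustrative sketch.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: registry.k8s.io/kube-apiserver:v1.30.11   # the image pulled earlier in this log
    command:
    - kube-apiserver
    - --advertise-address=10.0.0.95                  # assumption: the node address seen in this log
    - --secure-port=6443
    volumeMounts:
    - name: k8s-certs
      mountPath: /etc/kubernetes/pki
      readOnly: true
  volumes:
  - name: k8s-certs                                  # volume name matches the reconciler lines below
    hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate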
Mar 20 21:24:40.091680 kubelet[2363]: I0320 21:24:40.091556 2363 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:24:40.091680 kubelet[2363]: I0320 21:24:40.091595 2363 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:24:40.091680 kubelet[2363]: I0320 21:24:40.091615 2363 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:24:40.091680 kubelet[2363]: I0320 21:24:40.091640 2363 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 20 21:24:40.092873 kubelet[2363]: I0320 21:24:40.092835 2363 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1cfd52fbe2595dd4d91898943982fa5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c1cfd52fbe2595dd4d91898943982fa5\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:24:40.092923 kubelet[2363]: I0320 21:24:40.092889 2363 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1cfd52fbe2595dd4d91898943982fa5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c1cfd52fbe2595dd4d91898943982fa5\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:24:40.092923 kubelet[2363]: I0320 21:24:40.092908 2363 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:24:40.092972 kubelet[2363]: I0320 21:24:40.092923 2363 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1cfd52fbe2595dd4d91898943982fa5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c1cfd52fbe2595dd4d91898943982fa5\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:24:40.092972 kubelet[2363]: I0320 21:24:40.092943 2363 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " 
pod="kube-system/kube-controller-manager-localhost" Mar 20 21:24:40.093293 kubelet[2363]: E0320 21:24:40.093250 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="400ms" Mar 20 21:24:40.192605 kubelet[2363]: I0320 21:24:40.192567 2363 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 20 21:24:40.192954 kubelet[2363]: E0320 21:24:40.192910 2363 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Mar 20 21:24:40.335378 containerd[1488]: time="2025-03-20T21:24:40.335328381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c1cfd52fbe2595dd4d91898943982fa5,Namespace:kube-system,Attempt:0,}" Mar 20 21:24:40.346400 containerd[1488]: time="2025-03-20T21:24:40.346316714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,}" Mar 20 21:24:40.348852 containerd[1488]: time="2025-03-20T21:24:40.348816748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,}" Mar 20 21:24:40.493860 kubelet[2363]: E0320 21:24:40.493789 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="800ms" Mar 20 21:24:40.594324 kubelet[2363]: I0320 21:24:40.594287 2363 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 20 21:24:40.594639 kubelet[2363]: E0320 21:24:40.594615 2363 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Mar 20 21:24:40.622270 kubelet[2363]: E0320 21:24:40.622176 2363 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.95:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.95:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182e9fdd956d36e1 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-20 21:24:39.882512097 +0000 UTC m=+0.881066371,LastTimestamp:2025-03-20 21:24:39.882512097 +0000 UTC m=+0.881066371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 20 21:24:40.745264 kubelet[2363]: W0320 21:24:40.745193 2363 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Mar 20 21:24:40.745264 kubelet[2363]: E0320 21:24:40.745260 2363 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to 
list *v1.CSIDriver: Get "https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Mar 20 21:24:40.763899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4147772652.mount: Deactivated successfully. Mar 20 21:24:40.767484 containerd[1488]: time="2025-03-20T21:24:40.767440515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 21:24:40.771694 containerd[1488]: time="2025-03-20T21:24:40.771613921Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 21:24:40.772500 containerd[1488]: time="2025-03-20T21:24:40.772443825Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Mar 20 21:24:40.773639 containerd[1488]: time="2025-03-20T21:24:40.773573904Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 20 21:24:40.774243 containerd[1488]: time="2025-03-20T21:24:40.774146806Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 21:24:40.775528 containerd[1488]: time="2025-03-20T21:24:40.775495875Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 21:24:40.776174 containerd[1488]: time="2025-03-20T21:24:40.775918770Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 20 21:24:40.777806 containerd[1488]: time="2025-03-20T21:24:40.777731626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 20 21:24:40.779072 containerd[1488]: time="2025-03-20T21:24:40.779019515Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 428.854898ms" Mar 20 21:24:40.779696 containerd[1488]: time="2025-03-20T21:24:40.779478541Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 441.872876ms" Mar 20 21:24:40.782384 containerd[1488]: time="2025-03-20T21:24:40.782321604Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 434.239049ms" Mar 20 21:24:40.801446 
containerd[1488]: time="2025-03-20T21:24:40.801403309Z" level=info msg="connecting to shim 08f723a35ca8fde238ee268eb82f5869869111cb8f16038592ff3e919c3db31e" address="unix:///run/containerd/s/9487413cdbfc009e7754dd3a6da490c7b568a110c0906d1bd3a46e57a3249e0c" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:24:40.802505 containerd[1488]: time="2025-03-20T21:24:40.802466567Z" level=info msg="connecting to shim 0e6c49b34a30ae7d99b420e9a101d3c45cb60402b6c6159bc0aef129b504caa3" address="unix:///run/containerd/s/2e6fc4704dbdee32878f9c42e8bb89f7e905e75908294c067dfb3c84e276257e" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:24:40.809327 containerd[1488]: time="2025-03-20T21:24:40.808816745Z" level=info msg="connecting to shim 8d9f08e46515828c0f5ffe8225dfd2fe0112f20c402c7d98ead6acd0c8b5f73e" address="unix:///run/containerd/s/9505a8e2579ff0a973122aa273c34c525584f9ec73c7477c29512251497c85b0" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:24:40.813552 kubelet[2363]: W0320 21:24:40.813112 2363 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Mar 20 21:24:40.813552 kubelet[2363]: E0320 21:24:40.813174 2363 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Mar 20 21:24:40.822899 systemd[1]: Started cri-containerd-0e6c49b34a30ae7d99b420e9a101d3c45cb60402b6c6159bc0aef129b504caa3.scope - libcontainer container 0e6c49b34a30ae7d99b420e9a101d3c45cb60402b6c6159bc0aef129b504caa3. Mar 20 21:24:40.825961 systemd[1]: Started cri-containerd-08f723a35ca8fde238ee268eb82f5869869111cb8f16038592ff3e919c3db31e.scope - libcontainer container 08f723a35ca8fde238ee268eb82f5869869111cb8f16038592ff3e919c3db31e. Mar 20 21:24:40.831350 systemd[1]: Started cri-containerd-8d9f08e46515828c0f5ffe8225dfd2fe0112f20c402c7d98ead6acd0c8b5f73e.scope - libcontainer container 8d9f08e46515828c0f5ffe8225dfd2fe0112f20c402c7d98ead6acd0c8b5f73e. 
Mar 20 21:24:40.865446 containerd[1488]: time="2025-03-20T21:24:40.865375761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c1cfd52fbe2595dd4d91898943982fa5,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e6c49b34a30ae7d99b420e9a101d3c45cb60402b6c6159bc0aef129b504caa3\"" Mar 20 21:24:40.866641 containerd[1488]: time="2025-03-20T21:24:40.866578543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"08f723a35ca8fde238ee268eb82f5869869111cb8f16038592ff3e919c3db31e\"" Mar 20 21:24:40.869383 containerd[1488]: time="2025-03-20T21:24:40.869349424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d9f08e46515828c0f5ffe8225dfd2fe0112f20c402c7d98ead6acd0c8b5f73e\"" Mar 20 21:24:40.869984 containerd[1488]: time="2025-03-20T21:24:40.869745149Z" level=info msg="CreateContainer within sandbox \"08f723a35ca8fde238ee268eb82f5869869111cb8f16038592ff3e919c3db31e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 20 21:24:40.870258 containerd[1488]: time="2025-03-20T21:24:40.869775239Z" level=info msg="CreateContainer within sandbox \"0e6c49b34a30ae7d99b420e9a101d3c45cb60402b6c6159bc0aef129b504caa3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 20 21:24:40.872728 containerd[1488]: time="2025-03-20T21:24:40.872627145Z" level=info msg="CreateContainer within sandbox \"8d9f08e46515828c0f5ffe8225dfd2fe0112f20c402c7d98ead6acd0c8b5f73e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 20 21:24:40.878252 containerd[1488]: time="2025-03-20T21:24:40.878218002Z" level=info msg="Container f106b1772b860d5ac634f35d465578fee171090c0acd192aba9db4f079c99c90: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:24:40.882874 containerd[1488]: time="2025-03-20T21:24:40.882829908Z" level=info msg="Container b0305685025e2887cab08a8a809740d1d5ac882767e3df70a1838134083c705d: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:24:40.885617 containerd[1488]: time="2025-03-20T21:24:40.885570059Z" level=info msg="Container 0503a890e0bf4cc0022a3cbe4870684b35c638b24ca0020defb27329766cbc95: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:24:40.889054 containerd[1488]: time="2025-03-20T21:24:40.889019915Z" level=info msg="CreateContainer within sandbox \"08f723a35ca8fde238ee268eb82f5869869111cb8f16038592ff3e919c3db31e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f106b1772b860d5ac634f35d465578fee171090c0acd192aba9db4f079c99c90\"" Mar 20 21:24:40.889864 containerd[1488]: time="2025-03-20T21:24:40.889746026Z" level=info msg="StartContainer for \"f106b1772b860d5ac634f35d465578fee171090c0acd192aba9db4f079c99c90\"" Mar 20 21:24:40.891030 containerd[1488]: time="2025-03-20T21:24:40.890947208Z" level=info msg="connecting to shim f106b1772b860d5ac634f35d465578fee171090c0acd192aba9db4f079c99c90" address="unix:///run/containerd/s/9487413cdbfc009e7754dd3a6da490c7b568a110c0906d1bd3a46e57a3249e0c" protocol=ttrpc version=3 Mar 20 21:24:40.891155 containerd[1488]: time="2025-03-20T21:24:40.891122984Z" level=info msg="CreateContainer within sandbox \"0e6c49b34a30ae7d99b420e9a101d3c45cb60402b6c6159bc0aef129b504caa3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"b0305685025e2887cab08a8a809740d1d5ac882767e3df70a1838134083c705d\"" Mar 20 21:24:40.891527 containerd[1488]: time="2025-03-20T21:24:40.891490741Z" level=info msg="StartContainer for \"b0305685025e2887cab08a8a809740d1d5ac882767e3df70a1838134083c705d\"" Mar 20 21:24:40.892898 containerd[1488]: time="2025-03-20T21:24:40.892869859Z" level=info msg="connecting to shim b0305685025e2887cab08a8a809740d1d5ac882767e3df70a1838134083c705d" address="unix:///run/containerd/s/2e6fc4704dbdee32878f9c42e8bb89f7e905e75908294c067dfb3c84e276257e" protocol=ttrpc version=3 Mar 20 21:24:40.893347 containerd[1488]: time="2025-03-20T21:24:40.893315520Z" level=info msg="CreateContainer within sandbox \"8d9f08e46515828c0f5ffe8225dfd2fe0112f20c402c7d98ead6acd0c8b5f73e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0503a890e0bf4cc0022a3cbe4870684b35c638b24ca0020defb27329766cbc95\"" Mar 20 21:24:40.893968 containerd[1488]: time="2025-03-20T21:24:40.893936798Z" level=info msg="StartContainer for \"0503a890e0bf4cc0022a3cbe4870684b35c638b24ca0020defb27329766cbc95\"" Mar 20 21:24:40.895862 containerd[1488]: time="2025-03-20T21:24:40.894979409Z" level=info msg="connecting to shim 0503a890e0bf4cc0022a3cbe4870684b35c638b24ca0020defb27329766cbc95" address="unix:///run/containerd/s/9505a8e2579ff0a973122aa273c34c525584f9ec73c7477c29512251497c85b0" protocol=ttrpc version=3 Mar 20 21:24:40.909864 systemd[1]: Started cri-containerd-f106b1772b860d5ac634f35d465578fee171090c0acd192aba9db4f079c99c90.scope - libcontainer container f106b1772b860d5ac634f35d465578fee171090c0acd192aba9db4f079c99c90. Mar 20 21:24:40.912317 systemd[1]: Started cri-containerd-b0305685025e2887cab08a8a809740d1d5ac882767e3df70a1838134083c705d.scope - libcontainer container b0305685025e2887cab08a8a809740d1d5ac882767e3df70a1838134083c705d. Mar 20 21:24:40.916043 systemd[1]: Started cri-containerd-0503a890e0bf4cc0022a3cbe4870684b35c638b24ca0020defb27329766cbc95.scope - libcontainer container 0503a890e0bf4cc0022a3cbe4870684b35c638b24ca0020defb27329766cbc95. 
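Each container started above runs in a transient systemd unit named cri-containerd-<container-id>.scope (containerd's systemd cgroup driver). A small sketch for inspecting one of these scopes on the node; it assumes shell access and systemctl on PATH, and is not part of the tooling in this log:

```python
import subprocess

def show_container_scope(container_id: str) -> None:
    """Ask systemd about the transient scope that wraps a CRI container."""
    unit = f"cri-containerd-{container_id}.scope"
    subprocess.run(["systemctl", "status", "--no-pager", unit], check=False)

# e.g. the kube-scheduler container started above
show_container_scope("f106b1772b860d5ac634f35d465578fee171090c0acd192aba9db4f079c99c90")
```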
Mar 20 21:24:40.964663 containerd[1488]: time="2025-03-20T21:24:40.962589417Z" level=info msg="StartContainer for \"0503a890e0bf4cc0022a3cbe4870684b35c638b24ca0020defb27329766cbc95\" returns successfully" Mar 20 21:24:40.964663 containerd[1488]: time="2025-03-20T21:24:40.964409355Z" level=info msg="StartContainer for \"f106b1772b860d5ac634f35d465578fee171090c0acd192aba9db4f079c99c90\" returns successfully" Mar 20 21:24:40.965742 containerd[1488]: time="2025-03-20T21:24:40.965643948Z" level=info msg="StartContainer for \"b0305685025e2887cab08a8a809740d1d5ac882767e3df70a1838134083c705d\" returns successfully" Mar 20 21:24:41.111394 kubelet[2363]: W0320 21:24:41.111239 2363 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Mar 20 21:24:41.111394 kubelet[2363]: E0320 21:24:41.111321 2363 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Mar 20 21:24:41.181116 kubelet[2363]: W0320 21:24:41.180938 2363 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.95:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Mar 20 21:24:41.181116 kubelet[2363]: E0320 21:24:41.181011 2363 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.95:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Mar 20 21:24:41.396695 kubelet[2363]: I0320 21:24:41.396619 2363 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 20 21:24:42.715195 kubelet[2363]: E0320 21:24:42.715163 2363 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 20 21:24:42.795325 kubelet[2363]: I0320 21:24:42.795285 2363 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Mar 20 21:24:42.808188 kubelet[2363]: E0320 21:24:42.808161 2363 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 21:24:42.909208 kubelet[2363]: E0320 21:24:42.909163 2363 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 21:24:43.009826 kubelet[2363]: E0320 21:24:43.009721 2363 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 21:24:43.110880 kubelet[2363]: E0320 21:24:43.110826 2363 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 21:24:43.211358 kubelet[2363]: E0320 21:24:43.211294 2363 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 21:24:43.311822 kubelet[2363]: E0320 21:24:43.311713 2363 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 21:24:43.412058 kubelet[2363]: E0320 21:24:43.412017 2363 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" 
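The kubelet keeps retrying node registration above ("Attempting to register node" / "Unable to register node with API server") until the static kube-apiserver pod is serving, after which "Successfully registered node" appears. A convenience sketch for watching the same transition from outside the kubelet; kubectl and a working admin kubeconfig are assumptions not shown in this log:

```python
import subprocess
import time

def wait_for_node(name: str = "localhost", timeout: float = 120.0) -> bool:
    """Poll the API server until the Node object exists, roughly mirroring the kubelet's retries."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = subprocess.run(
            ["kubectl", "get", "node", name, "-o", "name"],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            print(f"registered: {result.stdout.strip()}")   # e.g. node/localhost
            return True
        time.sleep(2)
    return False

wait_for_node()
```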
Mar 20 21:24:43.883327 kubelet[2363]: I0320 21:24:43.883280 2363 apiserver.go:52] "Watching apiserver" Mar 20 21:24:43.890809 kubelet[2363]: I0320 21:24:43.890752 2363 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 20 21:24:44.943750 systemd[1]: Reload requested from client PID 2639 ('systemctl') (unit session-7.scope)... Mar 20 21:24:44.944102 systemd[1]: Reloading... Mar 20 21:24:45.015755 zram_generator::config[2686]: No configuration found. Mar 20 21:24:45.095395 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 20 21:24:45.178791 systemd[1]: Reloading finished in 234 ms. Mar 20 21:24:45.196178 kubelet[2363]: I0320 21:24:45.196089 2363 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 20 21:24:45.196420 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:24:45.211483 systemd[1]: kubelet.service: Deactivated successfully. Mar 20 21:24:45.211792 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:24:45.211843 systemd[1]: kubelet.service: Consumed 1.298s CPU time, 114.2M memory peak. Mar 20 21:24:45.213446 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 21:24:45.337390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 21:24:45.340854 (kubelet)[2725]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 20 21:24:45.381804 kubelet[2725]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 21:24:45.381804 kubelet[2725]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 20 21:24:45.381804 kubelet[2725]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 21:24:45.382126 kubelet[2725]: I0320 21:24:45.381835 2725 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 20 21:24:45.385647 kubelet[2725]: I0320 21:24:45.385620 2725 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 20 21:24:45.386698 kubelet[2725]: I0320 21:24:45.385758 2725 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 20 21:24:45.386698 kubelet[2725]: I0320 21:24:45.385926 2725 server.go:927] "Client rotation is on, will bootstrap in background" Mar 20 21:24:45.387269 kubelet[2725]: I0320 21:24:45.387246 2725 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 20 21:24:45.388467 kubelet[2725]: I0320 21:24:45.388437 2725 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 20 21:24:45.395588 kubelet[2725]: I0320 21:24:45.395556 2725 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 20 21:24:45.395857 kubelet[2725]: I0320 21:24:45.395826 2725 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 20 21:24:45.396026 kubelet[2725]: I0320 21:24:45.395854 2725 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 20 21:24:45.396026 kubelet[2725]: I0320 21:24:45.396012 2725 topology_manager.go:138] "Creating topology manager with none policy" Mar 20 21:24:45.396026 kubelet[2725]: I0320 21:24:45.396027 2725 container_manager_linux.go:301] "Creating device plugin manager" Mar 20 21:24:45.396149 kubelet[2725]: I0320 21:24:45.396059 2725 state_mem.go:36] "Initialized new in-memory state store" Mar 20 21:24:45.396185 kubelet[2725]: I0320 21:24:45.396168 2725 kubelet.go:400] "Attempting to sync node with API server" Mar 20 21:24:45.396215 kubelet[2725]: I0320 21:24:45.396189 2725 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 20 21:24:45.400359 kubelet[2725]: I0320 21:24:45.396618 2725 kubelet.go:312] "Adding apiserver pod source" Mar 20 21:24:45.400359 kubelet[2725]: I0320 21:24:45.396653 2725 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 20 21:24:45.400359 kubelet[2725]: I0320 21:24:45.397398 2725 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 20 21:24:45.400359 kubelet[2725]: I0320 21:24:45.397564 2725 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 20 21:24:45.400359 kubelet[2725]: I0320 21:24:45.398016 2725 server.go:1264] "Started kubelet" Mar 20 21:24:45.400359 kubelet[2725]: I0320 21:24:45.398920 2725 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 20 21:24:45.400359 kubelet[2725]: I0320 21:24:45.399136 2725 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 20 
21:24:45.400699 kubelet[2725]: I0320 21:24:45.400441 2725 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 20 21:24:45.400699 kubelet[2725]: I0320 21:24:45.400581 2725 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 20 21:24:45.401609 kubelet[2725]: I0320 21:24:45.401572 2725 server.go:455] "Adding debug handlers to kubelet server" Mar 20 21:24:45.411805 kubelet[2725]: I0320 21:24:45.411778 2725 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 20 21:24:45.412649 kubelet[2725]: I0320 21:24:45.412625 2725 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 20 21:24:45.413879 kubelet[2725]: I0320 21:24:45.413855 2725 reconciler.go:26] "Reconciler: start to sync state" Mar 20 21:24:45.414393 kubelet[2725]: I0320 21:24:45.414363 2725 factory.go:221] Registration of the systemd container factory successfully Mar 20 21:24:45.414477 kubelet[2725]: I0320 21:24:45.414447 2725 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 20 21:24:45.416867 kubelet[2725]: E0320 21:24:45.416830 2725 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 20 21:24:45.417372 kubelet[2725]: I0320 21:24:45.417335 2725 factory.go:221] Registration of the containerd container factory successfully Mar 20 21:24:45.423637 kubelet[2725]: I0320 21:24:45.423602 2725 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 20 21:24:45.425458 kubelet[2725]: I0320 21:24:45.425439 2725 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 20 21:24:45.425710 kubelet[2725]: I0320 21:24:45.425559 2725 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 20 21:24:45.425710 kubelet[2725]: I0320 21:24:45.425581 2725 kubelet.go:2337] "Starting kubelet main sync loop" Mar 20 21:24:45.425710 kubelet[2725]: E0320 21:24:45.425628 2725 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 20 21:24:45.449771 kubelet[2725]: I0320 21:24:45.449192 2725 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 20 21:24:45.449771 kubelet[2725]: I0320 21:24:45.449211 2725 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 20 21:24:45.449771 kubelet[2725]: I0320 21:24:45.449232 2725 state_mem.go:36] "Initialized new in-memory state store" Mar 20 21:24:45.449771 kubelet[2725]: I0320 21:24:45.449430 2725 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 20 21:24:45.449771 kubelet[2725]: I0320 21:24:45.449442 2725 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 20 21:24:45.449771 kubelet[2725]: I0320 21:24:45.449461 2725 policy_none.go:49] "None policy: Start" Mar 20 21:24:45.450664 kubelet[2725]: I0320 21:24:45.450637 2725 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 20 21:24:45.450724 kubelet[2725]: I0320 21:24:45.450688 2725 state_mem.go:35] "Initializing new in-memory state store" Mar 20 21:24:45.451035 kubelet[2725]: I0320 21:24:45.451012 2725 state_mem.go:75] "Updated machine memory state" Mar 20 21:24:45.455388 kubelet[2725]: I0320 21:24:45.455360 2725 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 20 21:24:45.455567 
kubelet[2725]: I0320 21:24:45.455525 2725 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 20 21:24:45.455700 kubelet[2725]: I0320 21:24:45.455628 2725 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 20 21:24:45.514098 kubelet[2725]: I0320 21:24:45.514071 2725 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 20 21:24:45.519677 kubelet[2725]: I0320 21:24:45.519636 2725 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Mar 20 21:24:45.519833 kubelet[2725]: I0320 21:24:45.519727 2725 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Mar 20 21:24:45.526126 kubelet[2725]: I0320 21:24:45.526002 2725 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 20 21:24:45.526126 kubelet[2725]: I0320 21:24:45.526111 2725 topology_manager.go:215] "Topology Admit Handler" podUID="c1cfd52fbe2595dd4d91898943982fa5" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 20 21:24:45.526278 kubelet[2725]: I0320 21:24:45.526146 2725 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 20 21:24:45.532321 kubelet[2725]: E0320 21:24:45.532149 2725 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 20 21:24:45.534020 kubelet[2725]: E0320 21:24:45.533989 2725 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 20 21:24:45.614376 kubelet[2725]: I0320 21:24:45.614328 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:24:45.614376 kubelet[2725]: I0320 21:24:45.614384 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 20 21:24:45.614543 kubelet[2725]: I0320 21:24:45.614444 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1cfd52fbe2595dd4d91898943982fa5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c1cfd52fbe2595dd4d91898943982fa5\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:24:45.614543 kubelet[2725]: I0320 21:24:45.614480 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1cfd52fbe2595dd4d91898943982fa5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c1cfd52fbe2595dd4d91898943982fa5\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:24:45.614543 kubelet[2725]: I0320 21:24:45.614505 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:24:45.614543 kubelet[2725]: I0320 21:24:45.614523 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:24:45.614543 kubelet[2725]: I0320 21:24:45.614540 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1cfd52fbe2595dd4d91898943982fa5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c1cfd52fbe2595dd4d91898943982fa5\") " pod="kube-system/kube-apiserver-localhost" Mar 20 21:24:45.614650 kubelet[2725]: I0320 21:24:45.614556 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:24:45.614650 kubelet[2725]: I0320 21:24:45.614570 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 21:24:45.946289 sudo[2757]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 20 21:24:45.946579 sudo[2757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 20 21:24:46.370663 sudo[2757]: pam_unix(sudo:session): session closed for user root Mar 20 21:24:46.397375 kubelet[2725]: I0320 21:24:46.397299 2725 apiserver.go:52] "Watching apiserver" Mar 20 21:24:46.413610 kubelet[2725]: I0320 21:24:46.413547 2725 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 20 21:24:46.442494 kubelet[2725]: E0320 21:24:46.442076 2725 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 20 21:24:46.459940 kubelet[2725]: I0320 21:24:46.459881 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.459867207 podStartE2EDuration="3.459867207s" podCreationTimestamp="2025-03-20 21:24:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:24:46.459684118 +0000 UTC m=+1.115880397" watchObservedRunningTime="2025-03-20 21:24:46.459867207 +0000 UTC m=+1.116063486" Mar 20 21:24:46.460076 kubelet[2725]: I0320 21:24:46.459985 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.459979996 podStartE2EDuration="1.459979996s" podCreationTimestamp="2025-03-20 21:24:45 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:24:46.452299498 +0000 UTC m=+1.108495777" watchObservedRunningTime="2025-03-20 21:24:46.459979996 +0000 UTC m=+1.116176275" Mar 20 21:24:46.473533 kubelet[2725]: I0320 21:24:46.473346 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.473333304 podStartE2EDuration="3.473333304s" podCreationTimestamp="2025-03-20 21:24:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:24:46.466015062 +0000 UTC m=+1.122211341" watchObservedRunningTime="2025-03-20 21:24:46.473333304 +0000 UTC m=+1.129529583" Mar 20 21:24:48.526930 sudo[1687]: pam_unix(sudo:session): session closed for user root Mar 20 21:24:48.528346 sshd[1686]: Connection closed by 10.0.0.1 port 58602 Mar 20 21:24:48.529033 sshd-session[1683]: pam_unix(sshd:session): session closed for user core Mar 20 21:24:48.532127 systemd[1]: sshd@6-10.0.0.95:22-10.0.0.1:58602.service: Deactivated successfully. Mar 20 21:24:48.534263 systemd[1]: session-7.scope: Deactivated successfully. Mar 20 21:24:48.534556 systemd[1]: session-7.scope: Consumed 8.000s CPU time, 279.9M memory peak. Mar 20 21:24:48.536621 systemd-logind[1468]: Session 7 logged out. Waiting for processes to exit. Mar 20 21:24:48.537815 systemd-logind[1468]: Removed session 7. Mar 20 21:24:56.123655 update_engine[1471]: I20250320 21:24:56.123576 1471 update_attempter.cc:509] Updating boot flags... Mar 20 21:24:56.158738 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2807) Mar 20 21:24:56.196792 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2806) Mar 20 21:25:00.628239 kubelet[2725]: I0320 21:25:00.628199 2725 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 20 21:25:00.631217 containerd[1488]: time="2025-03-20T21:25:00.631180941Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 20 21:25:00.631459 kubelet[2725]: I0320 21:25:00.631345 2725 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 20 21:25:01.384211 kubelet[2725]: I0320 21:25:01.384135 2725 topology_manager.go:215] "Topology Admit Handler" podUID="05f6e138-eb14-4fc7-856f-6cf89744aa20" podNamespace="kube-system" podName="kube-proxy-rrv79" Mar 20 21:25:01.384390 kubelet[2725]: I0320 21:25:01.384305 2725 topology_manager.go:215] "Topology Admit Handler" podUID="e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf" podNamespace="kube-system" podName="cilium-njkwv" Mar 20 21:25:01.402340 systemd[1]: Created slice kubepods-besteffort-pod05f6e138_eb14_4fc7_856f_6cf89744aa20.slice - libcontainer container kubepods-besteffort-pod05f6e138_eb14_4fc7_856f_6cf89744aa20.slice. Mar 20 21:25:01.421513 systemd[1]: Created slice kubepods-burstable-pode1919ba0_6ff3_4fb9_8926_5d58a4ed86bf.slice - libcontainer container kubepods-burstable-pode1919ba0_6ff3_4fb9_8926_5d58a4ed86bf.slice. 
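kube-proxy lands in a kubepods-besteffort-*.slice and the cilium pod in a kubepods-burstable-*.slice above because the parent slice is chosen by the pod's QoS class. A simplified sketch of the standard QoS rules; the resource values below are hypothetical examples, not taken from this log:

```python
def qos_class(containers: list) -> str:
    """Simplified Kubernetes QoS classification: besteffort, burstable, or guaranteed."""
    requests = [c.get("requests", {}) for c in containers]
    limits = [c.get("limits", {}) for c in containers]
    if not any(requests) and not any(limits):
        return "besteffort"
    guaranteed = all(
        c.get("requests")
        and c.get("requests") == c.get("limits")
        and {"cpu", "memory"} <= set(c["requests"])
        for c in containers
    )
    return "guaranteed" if guaranteed else "burstable"

print(qos_class([{}]))                                                  # besteffort, matching kube-proxy's slice above
print(qos_class([{"requests": {"cpu": "100m", "memory": "128Mi"}}]))    # burstable, matching the cilium pod's slice
print(qos_class([{"requests": {"cpu": "1", "memory": "1Gi"},
                  "limits":   {"cpu": "1", "memory": "1Gi"}}]))         # guaranteed
```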
Mar 20 21:25:01.512405 kubelet[2725]: I0320 21:25:01.512332 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-xtables-lock\") pod \"cilium-njkwv\" (UID: \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\") " pod="kube-system/cilium-njkwv" Mar 20 21:25:01.512405 kubelet[2725]: I0320 21:25:01.512379 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6f9z\" (UniqueName: \"kubernetes.io/projected/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-kube-api-access-f6f9z\") pod \"cilium-njkwv\" (UID: \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\") " pod="kube-system/cilium-njkwv" Mar 20 21:25:01.512405 kubelet[2725]: I0320 21:25:01.512405 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-hostproc\") pod \"cilium-njkwv\" (UID: \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\") " pod="kube-system/cilium-njkwv" Mar 20 21:25:01.512756 kubelet[2725]: I0320 21:25:01.512422 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-cilium-run\") pod \"cilium-njkwv\" (UID: \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\") " pod="kube-system/cilium-njkwv" Mar 20 21:25:01.512756 kubelet[2725]: I0320 21:25:01.512439 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-clustermesh-secrets\") pod \"cilium-njkwv\" (UID: \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\") " pod="kube-system/cilium-njkwv" Mar 20 21:25:01.512756 kubelet[2725]: I0320 21:25:01.512456 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-hubble-tls\") pod \"cilium-njkwv\" (UID: \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\") " pod="kube-system/cilium-njkwv" Mar 20 21:25:01.512756 kubelet[2725]: I0320 21:25:01.512473 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05f6e138-eb14-4fc7-856f-6cf89744aa20-lib-modules\") pod \"kube-proxy-rrv79\" (UID: \"05f6e138-eb14-4fc7-856f-6cf89744aa20\") " pod="kube-system/kube-proxy-rrv79" Mar 20 21:25:01.512756 kubelet[2725]: I0320 21:25:01.512487 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-cilium-cgroup\") pod \"cilium-njkwv\" (UID: \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\") " pod="kube-system/cilium-njkwv" Mar 20 21:25:01.512756 kubelet[2725]: I0320 21:25:01.512549 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-host-proc-sys-net\") pod \"cilium-njkwv\" (UID: \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\") " pod="kube-system/cilium-njkwv" Mar 20 21:25:01.512934 kubelet[2725]: I0320 21:25:01.512583 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" 
(UniqueName: \"kubernetes.io/configmap/05f6e138-eb14-4fc7-856f-6cf89744aa20-kube-proxy\") pod \"kube-proxy-rrv79\" (UID: \"05f6e138-eb14-4fc7-856f-6cf89744aa20\") " pod="kube-system/kube-proxy-rrv79" Mar 20 21:25:01.512934 kubelet[2725]: I0320 21:25:01.512599 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-cni-path\") pod \"cilium-njkwv\" (UID: \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\") " pod="kube-system/cilium-njkwv" Mar 20 21:25:01.512934 kubelet[2725]: I0320 21:25:01.512627 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjh64\" (UniqueName: \"kubernetes.io/projected/05f6e138-eb14-4fc7-856f-6cf89744aa20-kube-api-access-gjh64\") pod \"kube-proxy-rrv79\" (UID: \"05f6e138-eb14-4fc7-856f-6cf89744aa20\") " pod="kube-system/kube-proxy-rrv79" Mar 20 21:25:01.512934 kubelet[2725]: I0320 21:25:01.512651 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-bpf-maps\") pod \"cilium-njkwv\" (UID: \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\") " pod="kube-system/cilium-njkwv" Mar 20 21:25:01.512934 kubelet[2725]: I0320 21:25:01.512694 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-cilium-config-path\") pod \"cilium-njkwv\" (UID: \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\") " pod="kube-system/cilium-njkwv" Mar 20 21:25:01.512934 kubelet[2725]: I0320 21:25:01.512716 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05f6e138-eb14-4fc7-856f-6cf89744aa20-xtables-lock\") pod \"kube-proxy-rrv79\" (UID: \"05f6e138-eb14-4fc7-856f-6cf89744aa20\") " pod="kube-system/kube-proxy-rrv79" Mar 20 21:25:01.513059 kubelet[2725]: I0320 21:25:01.512733 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-etc-cni-netd\") pod \"cilium-njkwv\" (UID: \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\") " pod="kube-system/cilium-njkwv" Mar 20 21:25:01.513059 kubelet[2725]: I0320 21:25:01.512751 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-lib-modules\") pod \"cilium-njkwv\" (UID: \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\") " pod="kube-system/cilium-njkwv" Mar 20 21:25:01.513059 kubelet[2725]: I0320 21:25:01.512768 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-host-proc-sys-kernel\") pod \"cilium-njkwv\" (UID: \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\") " pod="kube-system/cilium-njkwv" Mar 20 21:25:01.722243 containerd[1488]: time="2025-03-20T21:25:01.722122958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rrv79,Uid:05f6e138-eb14-4fc7-856f-6cf89744aa20,Namespace:kube-system,Attempt:0,}" Mar 20 21:25:01.725842 containerd[1488]: time="2025-03-20T21:25:01.725793517Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:cilium-njkwv,Uid:e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf,Namespace:kube-system,Attempt:0,}" Mar 20 21:25:01.742414 containerd[1488]: time="2025-03-20T21:25:01.742366702Z" level=info msg="connecting to shim 4ad65a82c35652682e99ead2b14d046afbf0400af5cd4bb150493e73939df5ec" address="unix:///run/containerd/s/b8df0f0902c2898eaccb98d067a4ca48e9c69f9d26048921819a126a0d81f3e9" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:25:01.752696 containerd[1488]: time="2025-03-20T21:25:01.751982631Z" level=info msg="connecting to shim 8c8ec08ff2e2237cd8dc87c524f370b6b43985fab2471eff5e784268a3ddbfaa" address="unix:///run/containerd/s/66b3bceef8518e825a1fffd33ed45fb552ae2f1258cd02dd06069e6ff9017296" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:25:01.767924 systemd[1]: Started cri-containerd-4ad65a82c35652682e99ead2b14d046afbf0400af5cd4bb150493e73939df5ec.scope - libcontainer container 4ad65a82c35652682e99ead2b14d046afbf0400af5cd4bb150493e73939df5ec. Mar 20 21:25:01.775392 systemd[1]: Started cri-containerd-8c8ec08ff2e2237cd8dc87c524f370b6b43985fab2471eff5e784268a3ddbfaa.scope - libcontainer container 8c8ec08ff2e2237cd8dc87c524f370b6b43985fab2471eff5e784268a3ddbfaa. Mar 20 21:25:01.783200 kubelet[2725]: I0320 21:25:01.782276 2725 topology_manager.go:215] "Topology Admit Handler" podUID="abd4b00c-7c04-4be1-ba49-bd313c3e13de" podNamespace="kube-system" podName="cilium-operator-599987898-j7w2c" Mar 20 21:25:01.792709 systemd[1]: Created slice kubepods-besteffort-podabd4b00c_7c04_4be1_ba49_bd313c3e13de.slice - libcontainer container kubepods-besteffort-podabd4b00c_7c04_4be1_ba49_bd313c3e13de.slice. Mar 20 21:25:01.815116 kubelet[2725]: I0320 21:25:01.815048 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n227p\" (UniqueName: \"kubernetes.io/projected/abd4b00c-7c04-4be1-ba49-bd313c3e13de-kube-api-access-n227p\") pod \"cilium-operator-599987898-j7w2c\" (UID: \"abd4b00c-7c04-4be1-ba49-bd313c3e13de\") " pod="kube-system/cilium-operator-599987898-j7w2c" Mar 20 21:25:01.815271 kubelet[2725]: I0320 21:25:01.815122 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/abd4b00c-7c04-4be1-ba49-bd313c3e13de-cilium-config-path\") pod \"cilium-operator-599987898-j7w2c\" (UID: \"abd4b00c-7c04-4be1-ba49-bd313c3e13de\") " pod="kube-system/cilium-operator-599987898-j7w2c" Mar 20 21:25:01.825996 containerd[1488]: time="2025-03-20T21:25:01.825943981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-njkwv,Uid:e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c8ec08ff2e2237cd8dc87c524f370b6b43985fab2471eff5e784268a3ddbfaa\"" Mar 20 21:25:01.831320 containerd[1488]: time="2025-03-20T21:25:01.831286053Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 20 21:25:01.835210 containerd[1488]: time="2025-03-20T21:25:01.835134761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rrv79,Uid:05f6e138-eb14-4fc7-856f-6cf89744aa20,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ad65a82c35652682e99ead2b14d046afbf0400af5cd4bb150493e73939df5ec\"" Mar 20 21:25:01.837831 containerd[1488]: time="2025-03-20T21:25:01.837792514Z" level=info msg="CreateContainer within sandbox \"4ad65a82c35652682e99ead2b14d046afbf0400af5cd4bb150493e73939df5ec\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 20 21:25:01.852298 containerd[1488]: time="2025-03-20T21:25:01.852248113Z" level=info msg="Container 72ef5fe20bfb832b97c89dcf5c47f9015fa956f0343413d53e0f0b25fac3e854: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:25:01.859337 containerd[1488]: time="2025-03-20T21:25:01.859239254Z" level=info msg="CreateContainer within sandbox \"4ad65a82c35652682e99ead2b14d046afbf0400af5cd4bb150493e73939df5ec\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"72ef5fe20bfb832b97c89dcf5c47f9015fa956f0343413d53e0f0b25fac3e854\"" Mar 20 21:25:01.860941 containerd[1488]: time="2025-03-20T21:25:01.859798306Z" level=info msg="StartContainer for \"72ef5fe20bfb832b97c89dcf5c47f9015fa956f0343413d53e0f0b25fac3e854\"" Mar 20 21:25:01.861139 containerd[1488]: time="2025-03-20T21:25:01.861103278Z" level=info msg="connecting to shim 72ef5fe20bfb832b97c89dcf5c47f9015fa956f0343413d53e0f0b25fac3e854" address="unix:///run/containerd/s/b8df0f0902c2898eaccb98d067a4ca48e9c69f9d26048921819a126a0d81f3e9" protocol=ttrpc version=3 Mar 20 21:25:01.885864 systemd[1]: Started cri-containerd-72ef5fe20bfb832b97c89dcf5c47f9015fa956f0343413d53e0f0b25fac3e854.scope - libcontainer container 72ef5fe20bfb832b97c89dcf5c47f9015fa956f0343413d53e0f0b25fac3e854. Mar 20 21:25:01.932002 containerd[1488]: time="2025-03-20T21:25:01.928863816Z" level=info msg="StartContainer for \"72ef5fe20bfb832b97c89dcf5c47f9015fa956f0343413d53e0f0b25fac3e854\" returns successfully" Mar 20 21:25:02.100302 containerd[1488]: time="2025-03-20T21:25:02.100188107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-j7w2c,Uid:abd4b00c-7c04-4be1-ba49-bd313c3e13de,Namespace:kube-system,Attempt:0,}" Mar 20 21:25:02.117656 containerd[1488]: time="2025-03-20T21:25:02.117603300Z" level=info msg="connecting to shim 1d9186c53a2eaf7b22c0b3dd0c7a27a17f73eec4e1988b1208162f1388dfd5b7" address="unix:///run/containerd/s/5dd66e77c91ca65eaafabd7e74613e7c0b03315ff2e53349c12294db9319635c" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:25:02.142824 systemd[1]: Started cri-containerd-1d9186c53a2eaf7b22c0b3dd0c7a27a17f73eec4e1988b1208162f1388dfd5b7.scope - libcontainer container 1d9186c53a2eaf7b22c0b3dd0c7a27a17f73eec4e1988b1208162f1388dfd5b7. Mar 20 21:25:02.178423 containerd[1488]: time="2025-03-20T21:25:02.178381629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-j7w2c,Uid:abd4b00c-7c04-4be1-ba49-bd313c3e13de,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d9186c53a2eaf7b22c0b3dd0c7a27a17f73eec4e1988b1208162f1388dfd5b7\"" Mar 20 21:25:02.473245 kubelet[2725]: I0320 21:25:02.472918 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rrv79" podStartSLOduration=1.47290147 podStartE2EDuration="1.47290147s" podCreationTimestamp="2025-03-20 21:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:25:02.472767769 +0000 UTC m=+17.128964048" watchObservedRunningTime="2025-03-20 21:25:02.47290147 +0000 UTC m=+17.129097749" Mar 20 21:25:09.206311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3299405757.mount: Deactivated successfully. 
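The cilium image above is pulled by tag and digest together (name:tag@sha256:...), and the following entries show containerd keeping only the repo digest for such a pull. A naive, illustrative way to split that kind of reference; this is plain string handling, not containerd's reference parser, and a registry with a port in the hostname would need more care:

```python
def split_image_ref(ref: str) -> dict:
    """Split repository, tag and digest out of a name:tag@digest reference (naive)."""
    repo_tag, _, digest = ref.partition("@")
    repo, _, tag = repo_tag.rpartition(":")
    return {"repository": repo, "tag": tag, "digest": digest}

ref = ("quay.io/cilium/cilium:v1.12.5"
       "@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
print(split_image_ref(ref))
# {'repository': 'quay.io/cilium/cilium', 'tag': 'v1.12.5', 'digest': 'sha256:06ce...'}
```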
Mar 20 21:25:10.445659 containerd[1488]: time="2025-03-20T21:25:10.445592395Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:25:10.446145 containerd[1488]: time="2025-03-20T21:25:10.446012326Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Mar 20 21:25:10.446903 containerd[1488]: time="2025-03-20T21:25:10.446873432Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:25:10.448212 containerd[1488]: time="2025-03-20T21:25:10.448176992Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.616716112s" Mar 20 21:25:10.448259 containerd[1488]: time="2025-03-20T21:25:10.448214597Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 20 21:25:10.451365 containerd[1488]: time="2025-03-20T21:25:10.451041223Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 20 21:25:10.452254 containerd[1488]: time="2025-03-20T21:25:10.452209766Z" level=info msg="CreateContainer within sandbox \"8c8ec08ff2e2237cd8dc87c524f370b6b43985fab2471eff5e784268a3ddbfaa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 20 21:25:10.458565 containerd[1488]: time="2025-03-20T21:25:10.457931588Z" level=info msg="Container c1a1216c242833762bc6c140ad838b961e93f567e1243b8746fdd47cfcc8e9b6: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:25:10.461666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1211842238.mount: Deactivated successfully. Mar 20 21:25:10.463478 containerd[1488]: time="2025-03-20T21:25:10.463417501Z" level=info msg="CreateContainer within sandbox \"8c8ec08ff2e2237cd8dc87c524f370b6b43985fab2471eff5e784268a3ddbfaa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c1a1216c242833762bc6c140ad838b961e93f567e1243b8746fdd47cfcc8e9b6\"" Mar 20 21:25:10.464460 containerd[1488]: time="2025-03-20T21:25:10.464437666Z" level=info msg="StartContainer for \"c1a1216c242833762bc6c140ad838b961e93f567e1243b8746fdd47cfcc8e9b6\"" Mar 20 21:25:10.465549 containerd[1488]: time="2025-03-20T21:25:10.465460672Z" level=info msg="connecting to shim c1a1216c242833762bc6c140ad838b961e93f567e1243b8746fdd47cfcc8e9b6" address="unix:///run/containerd/s/66b3bceef8518e825a1fffd33ed45fb552ae2f1258cd02dd06069e6ff9017296" protocol=ttrpc version=3 Mar 20 21:25:10.506836 systemd[1]: Started cri-containerd-c1a1216c242833762bc6c140ad838b961e93f567e1243b8746fdd47cfcc8e9b6.scope - libcontainer container c1a1216c242833762bc6c140ad838b961e93f567e1243b8746fdd47cfcc8e9b6. 
Mar 20 21:25:10.542739 containerd[1488]: time="2025-03-20T21:25:10.542613973Z" level=info msg="StartContainer for \"c1a1216c242833762bc6c140ad838b961e93f567e1243b8746fdd47cfcc8e9b6\" returns successfully" Mar 20 21:25:10.569953 systemd[1]: cri-containerd-c1a1216c242833762bc6c140ad838b961e93f567e1243b8746fdd47cfcc8e9b6.scope: Deactivated successfully. Mar 20 21:25:10.586922 containerd[1488]: time="2025-03-20T21:25:10.586873481Z" level=info msg="received exit event container_id:\"c1a1216c242833762bc6c140ad838b961e93f567e1243b8746fdd47cfcc8e9b6\" id:\"c1a1216c242833762bc6c140ad838b961e93f567e1243b8746fdd47cfcc8e9b6\" pid:3152 exited_at:{seconds:1742505910 nanos:583326606}" Mar 20 21:25:10.587053 containerd[1488]: time="2025-03-20T21:25:10.586940169Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c1a1216c242833762bc6c140ad838b961e93f567e1243b8746fdd47cfcc8e9b6\" id:\"c1a1216c242833762bc6c140ad838b961e93f567e1243b8746fdd47cfcc8e9b6\" pid:3152 exited_at:{seconds:1742505910 nanos:583326606}" Mar 20 21:25:10.622298 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1a1216c242833762bc6c140ad838b961e93f567e1243b8746fdd47cfcc8e9b6-rootfs.mount: Deactivated successfully. Mar 20 21:25:11.484025 containerd[1488]: time="2025-03-20T21:25:11.483921360Z" level=info msg="CreateContainer within sandbox \"8c8ec08ff2e2237cd8dc87c524f370b6b43985fab2471eff5e784268a3ddbfaa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 20 21:25:11.493755 containerd[1488]: time="2025-03-20T21:25:11.493128454Z" level=info msg="Container c76acf8718dbfa7d0bedcc1ec5b17e06a7d9dfeeb007edef250e42b9aa597457: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:25:11.496254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1764692661.mount: Deactivated successfully. Mar 20 21:25:11.499931 containerd[1488]: time="2025-03-20T21:25:11.499892778Z" level=info msg="CreateContainer within sandbox \"8c8ec08ff2e2237cd8dc87c524f370b6b43985fab2471eff5e784268a3ddbfaa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c76acf8718dbfa7d0bedcc1ec5b17e06a7d9dfeeb007edef250e42b9aa597457\"" Mar 20 21:25:11.501222 containerd[1488]: time="2025-03-20T21:25:11.500298706Z" level=info msg="StartContainer for \"c76acf8718dbfa7d0bedcc1ec5b17e06a7d9dfeeb007edef250e42b9aa597457\"" Mar 20 21:25:11.502353 containerd[1488]: time="2025-03-20T21:25:11.502185210Z" level=info msg="connecting to shim c76acf8718dbfa7d0bedcc1ec5b17e06a7d9dfeeb007edef250e42b9aa597457" address="unix:///run/containerd/s/66b3bceef8518e825a1fffd33ed45fb552ae2f1258cd02dd06069e6ff9017296" protocol=ttrpc version=3 Mar 20 21:25:11.524822 systemd[1]: Started cri-containerd-c76acf8718dbfa7d0bedcc1ec5b17e06a7d9dfeeb007edef250e42b9aa597457.scope - libcontainer container c76acf8718dbfa7d0bedcc1ec5b17e06a7d9dfeeb007edef250e42b9aa597457. Mar 20 21:25:11.547462 containerd[1488]: time="2025-03-20T21:25:11.547432746Z" level=info msg="StartContainer for \"c76acf8718dbfa7d0bedcc1ec5b17e06a7d9dfeeb007edef250e42b9aa597457\" returns successfully" Mar 20 21:25:11.581102 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 20 21:25:11.581308 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 20 21:25:11.584320 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... 
Mar 20 21:25:11.586990 containerd[1488]: time="2025-03-20T21:25:11.586949200Z" level=info msg="received exit event container_id:\"c76acf8718dbfa7d0bedcc1ec5b17e06a7d9dfeeb007edef250e42b9aa597457\" id:\"c76acf8718dbfa7d0bedcc1ec5b17e06a7d9dfeeb007edef250e42b9aa597457\" pid:3198 exited_at:{seconds:1742505911 nanos:586404536}" Mar 20 21:25:11.587081 containerd[1488]: time="2025-03-20T21:25:11.587034130Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c76acf8718dbfa7d0bedcc1ec5b17e06a7d9dfeeb007edef250e42b9aa597457\" id:\"c76acf8718dbfa7d0bedcc1ec5b17e06a7d9dfeeb007edef250e42b9aa597457\" pid:3198 exited_at:{seconds:1742505911 nanos:586404536}" Mar 20 21:25:11.587139 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 20 21:25:11.588610 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 20 21:25:11.589007 systemd[1]: cri-containerd-c76acf8718dbfa7d0bedcc1ec5b17e06a7d9dfeeb007edef250e42b9aa597457.scope: Deactivated successfully. Mar 20 21:25:11.608903 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c76acf8718dbfa7d0bedcc1ec5b17e06a7d9dfeeb007edef250e42b9aa597457-rootfs.mount: Deactivated successfully. Mar 20 21:25:11.623501 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 20 21:25:12.121098 systemd[1]: Started sshd@7-10.0.0.95:22-10.0.0.1:43836.service - OpenSSH per-connection server daemon (10.0.0.1:43836). Mar 20 21:25:12.176989 sshd[3242]: Accepted publickey for core from 10.0.0.1 port 43836 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:25:12.178145 sshd-session[3242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:25:12.182635 systemd-logind[1468]: New session 8 of user core. Mar 20 21:25:12.193985 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 20 21:25:12.331085 sshd[3248]: Connection closed by 10.0.0.1 port 43836 Mar 20 21:25:12.331568 sshd-session[3242]: pam_unix(sshd:session): session closed for user core Mar 20 21:25:12.335070 systemd[1]: sshd@7-10.0.0.95:22-10.0.0.1:43836.service: Deactivated successfully. Mar 20 21:25:12.336782 systemd[1]: session-8.scope: Deactivated successfully. Mar 20 21:25:12.337559 systemd-logind[1468]: Session 8 logged out. Waiting for processes to exit. Mar 20 21:25:12.338895 systemd-logind[1468]: Removed session 8. 
Mar 20 21:25:12.450908 containerd[1488]: time="2025-03-20T21:25:12.450798762Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:25:12.451772 containerd[1488]: time="2025-03-20T21:25:12.451722708Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Mar 20 21:25:12.452463 containerd[1488]: time="2025-03-20T21:25:12.452195962Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 21:25:12.453513 containerd[1488]: time="2025-03-20T21:25:12.453406462Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.002329635s" Mar 20 21:25:12.453513 containerd[1488]: time="2025-03-20T21:25:12.453440226Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 20 21:25:12.455455 containerd[1488]: time="2025-03-20T21:25:12.455428614Z" level=info msg="CreateContainer within sandbox \"1d9186c53a2eaf7b22c0b3dd0c7a27a17f73eec4e1988b1208162f1388dfd5b7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 20 21:25:12.460518 containerd[1488]: time="2025-03-20T21:25:12.460474475Z" level=info msg="Container 5373a1f4bd0dfcc4b773835378494b175f964df0f91916da6012f99f318abd7c: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:25:12.465251 containerd[1488]: time="2025-03-20T21:25:12.465209140Z" level=info msg="CreateContainer within sandbox \"1d9186c53a2eaf7b22c0b3dd0c7a27a17f73eec4e1988b1208162f1388dfd5b7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5373a1f4bd0dfcc4b773835378494b175f964df0f91916da6012f99f318abd7c\"" Mar 20 21:25:12.465768 containerd[1488]: time="2025-03-20T21:25:12.465743042Z" level=info msg="StartContainer for \"5373a1f4bd0dfcc4b773835378494b175f964df0f91916da6012f99f318abd7c\"" Mar 20 21:25:12.466496 containerd[1488]: time="2025-03-20T21:25:12.466474006Z" level=info msg="connecting to shim 5373a1f4bd0dfcc4b773835378494b175f964df0f91916da6012f99f318abd7c" address="unix:///run/containerd/s/5dd66e77c91ca65eaafabd7e74613e7c0b03315ff2e53349c12294db9319635c" protocol=ttrpc version=3 Mar 20 21:25:12.488853 systemd[1]: Started cri-containerd-5373a1f4bd0dfcc4b773835378494b175f964df0f91916da6012f99f318abd7c.scope - libcontainer container 5373a1f4bd0dfcc4b773835378494b175f964df0f91916da6012f99f318abd7c. 
Mar 20 21:25:12.500175 containerd[1488]: time="2025-03-20T21:25:12.500138320Z" level=info msg="CreateContainer within sandbox \"8c8ec08ff2e2237cd8dc87c524f370b6b43985fab2471eff5e784268a3ddbfaa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 20 21:25:12.541853 containerd[1488]: time="2025-03-20T21:25:12.541789114Z" level=info msg="StartContainer for \"5373a1f4bd0dfcc4b773835378494b175f964df0f91916da6012f99f318abd7c\" returns successfully" Mar 20 21:25:12.548730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount302877986.mount: Deactivated successfully. Mar 20 21:25:12.564054 containerd[1488]: time="2025-03-20T21:25:12.562991434Z" level=info msg="Container b3cf9cbf9fb1f2c35dbc514a3a5ca243a353c6c15129924086f703da021eb773: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:25:12.572764 containerd[1488]: time="2025-03-20T21:25:12.572731835Z" level=info msg="CreateContainer within sandbox \"8c8ec08ff2e2237cd8dc87c524f370b6b43985fab2471eff5e784268a3ddbfaa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b3cf9cbf9fb1f2c35dbc514a3a5ca243a353c6c15129924086f703da021eb773\"" Mar 20 21:25:12.573468 containerd[1488]: time="2025-03-20T21:25:12.573442237Z" level=info msg="StartContainer for \"b3cf9cbf9fb1f2c35dbc514a3a5ca243a353c6c15129924086f703da021eb773\"" Mar 20 21:25:12.575008 containerd[1488]: time="2025-03-20T21:25:12.574983214Z" level=info msg="connecting to shim b3cf9cbf9fb1f2c35dbc514a3a5ca243a353c6c15129924086f703da021eb773" address="unix:///run/containerd/s/66b3bceef8518e825a1fffd33ed45fb552ae2f1258cd02dd06069e6ff9017296" protocol=ttrpc version=3 Mar 20 21:25:12.596848 systemd[1]: Started cri-containerd-b3cf9cbf9fb1f2c35dbc514a3a5ca243a353c6c15129924086f703da021eb773.scope - libcontainer container b3cf9cbf9fb1f2c35dbc514a3a5ca243a353c6c15129924086f703da021eb773. Mar 20 21:25:12.650981 systemd[1]: cri-containerd-b3cf9cbf9fb1f2c35dbc514a3a5ca243a353c6c15129924086f703da021eb773.scope: Deactivated successfully. 
Mar 20 21:25:12.653256 containerd[1488]: time="2025-03-20T21:25:12.653218658Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b3cf9cbf9fb1f2c35dbc514a3a5ca243a353c6c15129924086f703da021eb773\" id:\"b3cf9cbf9fb1f2c35dbc514a3a5ca243a353c6c15129924086f703da021eb773\" pid:3308 exited_at:{seconds:1742505912 nanos:652096209}" Mar 20 21:25:12.663069 containerd[1488]: time="2025-03-20T21:25:12.663033708Z" level=info msg="received exit event container_id:\"b3cf9cbf9fb1f2c35dbc514a3a5ca243a353c6c15129924086f703da021eb773\" id:\"b3cf9cbf9fb1f2c35dbc514a3a5ca243a353c6c15129924086f703da021eb773\" pid:3308 exited_at:{seconds:1742505912 nanos:652096209}" Mar 20 21:25:12.666074 containerd[1488]: time="2025-03-20T21:25:12.666026092Z" level=info msg="StartContainer for \"b3cf9cbf9fb1f2c35dbc514a3a5ca243a353c6c15129924086f703da021eb773\" returns successfully" Mar 20 21:25:13.503348 containerd[1488]: time="2025-03-20T21:25:13.503309773Z" level=info msg="CreateContainer within sandbox \"8c8ec08ff2e2237cd8dc87c524f370b6b43985fab2471eff5e784268a3ddbfaa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 20 21:25:13.508200 kubelet[2725]: I0320 21:25:13.508097 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-j7w2c" podStartSLOduration=2.234441321 podStartE2EDuration="12.508080705s" podCreationTimestamp="2025-03-20 21:25:01 +0000 UTC" firstStartedPulling="2025-03-20 21:25:02.180461798 +0000 UTC m=+16.836658037" lastFinishedPulling="2025-03-20 21:25:12.454101142 +0000 UTC m=+27.110297421" observedRunningTime="2025-03-20 21:25:13.507332301 +0000 UTC m=+28.163528580" watchObservedRunningTime="2025-03-20 21:25:13.508080705 +0000 UTC m=+28.164276984" Mar 20 21:25:13.515310 containerd[1488]: time="2025-03-20T21:25:13.515234742Z" level=info msg="Container bb5ce206b46c99d87990f72beda17676cd041b9e23e207cd286b0a84b468d48c: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:25:13.522120 containerd[1488]: time="2025-03-20T21:25:13.522083426Z" level=info msg="CreateContainer within sandbox \"8c8ec08ff2e2237cd8dc87c524f370b6b43985fab2471eff5e784268a3ddbfaa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bb5ce206b46c99d87990f72beda17676cd041b9e23e207cd286b0a84b468d48c\"" Mar 20 21:25:13.522589 containerd[1488]: time="2025-03-20T21:25:13.522529315Z" level=info msg="StartContainer for \"bb5ce206b46c99d87990f72beda17676cd041b9e23e207cd286b0a84b468d48c\"" Mar 20 21:25:13.524340 containerd[1488]: time="2025-03-20T21:25:13.524311194Z" level=info msg="connecting to shim bb5ce206b46c99d87990f72beda17676cd041b9e23e207cd286b0a84b468d48c" address="unix:///run/containerd/s/66b3bceef8518e825a1fffd33ed45fb552ae2f1258cd02dd06069e6ff9017296" protocol=ttrpc version=3 Mar 20 21:25:13.547800 systemd[1]: Started cri-containerd-bb5ce206b46c99d87990f72beda17676cd041b9e23e207cd286b0a84b468d48c.scope - libcontainer container bb5ce206b46c99d87990f72beda17676cd041b9e23e207cd286b0a84b468d48c. Mar 20 21:25:13.566489 systemd[1]: cri-containerd-bb5ce206b46c99d87990f72beda17676cd041b9e23e207cd286b0a84b468d48c.scope: Deactivated successfully. 
Mar 20 21:25:13.567112 containerd[1488]: time="2025-03-20T21:25:13.567070322Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb5ce206b46c99d87990f72beda17676cd041b9e23e207cd286b0a84b468d48c\" id:\"bb5ce206b46c99d87990f72beda17676cd041b9e23e207cd286b0a84b468d48c\" pid:3349 exited_at:{seconds:1742505913 nanos:566728603}" Mar 20 21:25:13.568443 containerd[1488]: time="2025-03-20T21:25:13.568399470Z" level=info msg="received exit event container_id:\"bb5ce206b46c99d87990f72beda17676cd041b9e23e207cd286b0a84b468d48c\" id:\"bb5ce206b46c99d87990f72beda17676cd041b9e23e207cd286b0a84b468d48c\" pid:3349 exited_at:{seconds:1742505913 nanos:566728603}" Mar 20 21:25:13.574335 containerd[1488]: time="2025-03-20T21:25:13.574310169Z" level=info msg="StartContainer for \"bb5ce206b46c99d87990f72beda17676cd041b9e23e207cd286b0a84b468d48c\" returns successfully" Mar 20 21:25:13.582487 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb5ce206b46c99d87990f72beda17676cd041b9e23e207cd286b0a84b468d48c-rootfs.mount: Deactivated successfully. Mar 20 21:25:14.508411 containerd[1488]: time="2025-03-20T21:25:14.508332900Z" level=info msg="CreateContainer within sandbox \"8c8ec08ff2e2237cd8dc87c524f370b6b43985fab2471eff5e784268a3ddbfaa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 20 21:25:14.518191 containerd[1488]: time="2025-03-20T21:25:14.518142080Z" level=info msg="Container 987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:25:14.528856 containerd[1488]: time="2025-03-20T21:25:14.528748185Z" level=info msg="CreateContainer within sandbox \"8c8ec08ff2e2237cd8dc87c524f370b6b43985fab2471eff5e784268a3ddbfaa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8\"" Mar 20 21:25:14.533175 containerd[1488]: time="2025-03-20T21:25:14.531923808Z" level=info msg="StartContainer for \"987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8\"" Mar 20 21:25:14.533175 containerd[1488]: time="2025-03-20T21:25:14.533009446Z" level=info msg="connecting to shim 987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8" address="unix:///run/containerd/s/66b3bceef8518e825a1fffd33ed45fb552ae2f1258cd02dd06069e6ff9017296" protocol=ttrpc version=3 Mar 20 21:25:14.568821 systemd[1]: Started cri-containerd-987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8.scope - libcontainer container 987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8. 
Mar 20 21:25:14.596152 containerd[1488]: time="2025-03-20T21:25:14.596111541Z" level=info msg="StartContainer for \"987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8\" returns successfully" Mar 20 21:25:14.689549 containerd[1488]: time="2025-03-20T21:25:14.689400218Z" level=info msg="TaskExit event in podsandbox handler container_id:\"987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8\" id:\"66495a72b3dda847c025803fbaa997e68a812c41ad4c758538aabba02e50d25e\" pid:3414 exited_at:{seconds:1742505914 nanos:689065061}" Mar 20 21:25:14.762621 kubelet[2725]: I0320 21:25:14.761889 2725 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 20 21:25:14.785189 kubelet[2725]: I0320 21:25:14.785149 2725 topology_manager.go:215] "Topology Admit Handler" podUID="d6172a06-d562-425e-a828-943a1b20b0b1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-csbqr" Mar 20 21:25:14.787014 kubelet[2725]: I0320 21:25:14.786746 2725 topology_manager.go:215] "Topology Admit Handler" podUID="3d33b4d7-011b-40dc-9257-7b5c3419a746" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xhtbn" Mar 20 21:25:14.801899 systemd[1]: Created slice kubepods-burstable-podd6172a06_d562_425e_a828_943a1b20b0b1.slice - libcontainer container kubepods-burstable-podd6172a06_d562_425e_a828_943a1b20b0b1.slice. Mar 20 21:25:14.809398 systemd[1]: Created slice kubepods-burstable-pod3d33b4d7_011b_40dc_9257_7b5c3419a746.slice - libcontainer container kubepods-burstable-pod3d33b4d7_011b_40dc_9257_7b5c3419a746.slice. Mar 20 21:25:14.905019 kubelet[2725]: I0320 21:25:14.904977 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6172a06-d562-425e-a828-943a1b20b0b1-config-volume\") pod \"coredns-7db6d8ff4d-csbqr\" (UID: \"d6172a06-d562-425e-a828-943a1b20b0b1\") " pod="kube-system/coredns-7db6d8ff4d-csbqr" Mar 20 21:25:14.905288 kubelet[2725]: I0320 21:25:14.905178 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z848\" (UniqueName: \"kubernetes.io/projected/3d33b4d7-011b-40dc-9257-7b5c3419a746-kube-api-access-4z848\") pod \"coredns-7db6d8ff4d-xhtbn\" (UID: \"3d33b4d7-011b-40dc-9257-7b5c3419a746\") " pod="kube-system/coredns-7db6d8ff4d-xhtbn" Mar 20 21:25:14.905288 kubelet[2725]: I0320 21:25:14.905207 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d33b4d7-011b-40dc-9257-7b5c3419a746-config-volume\") pod \"coredns-7db6d8ff4d-xhtbn\" (UID: \"3d33b4d7-011b-40dc-9257-7b5c3419a746\") " pod="kube-system/coredns-7db6d8ff4d-xhtbn" Mar 20 21:25:14.905434 kubelet[2725]: I0320 21:25:14.905398 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq6hz\" (UniqueName: \"kubernetes.io/projected/d6172a06-d562-425e-a828-943a1b20b0b1-kube-api-access-kq6hz\") pod \"coredns-7db6d8ff4d-csbqr\" (UID: \"d6172a06-d562-425e-a828-943a1b20b0b1\") " pod="kube-system/coredns-7db6d8ff4d-csbqr" Mar 20 21:25:15.107484 containerd[1488]: time="2025-03-20T21:25:15.107374763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-csbqr,Uid:d6172a06-d562-425e-a828-943a1b20b0b1,Namespace:kube-system,Attempt:0,}" Mar 20 21:25:15.113046 containerd[1488]: time="2025-03-20T21:25:15.113010513Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-xhtbn,Uid:3d33b4d7-011b-40dc-9257-7b5c3419a746,Namespace:kube-system,Attempt:0,}" Mar 20 21:25:15.533859 kubelet[2725]: I0320 21:25:15.533777 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-njkwv" podStartSLOduration=5.913035244 podStartE2EDuration="14.533761339s" podCreationTimestamp="2025-03-20 21:25:01 +0000 UTC" firstStartedPulling="2025-03-20 21:25:01.830171911 +0000 UTC m=+16.486368190" lastFinishedPulling="2025-03-20 21:25:10.450898006 +0000 UTC m=+25.107094285" observedRunningTime="2025-03-20 21:25:15.533326093 +0000 UTC m=+30.189522372" watchObservedRunningTime="2025-03-20 21:25:15.533761339 +0000 UTC m=+30.189957618" Mar 20 21:25:16.792662 systemd-networkd[1402]: cilium_host: Link UP Mar 20 21:25:16.792820 systemd-networkd[1402]: cilium_net: Link UP Mar 20 21:25:16.793526 systemd-networkd[1402]: cilium_net: Gained carrier Mar 20 21:25:16.793920 systemd-networkd[1402]: cilium_host: Gained carrier Mar 20 21:25:16.794207 systemd-networkd[1402]: cilium_net: Gained IPv6LL Mar 20 21:25:16.794487 systemd-networkd[1402]: cilium_host: Gained IPv6LL Mar 20 21:25:16.881930 systemd-networkd[1402]: cilium_vxlan: Link UP Mar 20 21:25:16.881938 systemd-networkd[1402]: cilium_vxlan: Gained carrier Mar 20 21:25:17.172773 kernel: NET: Registered PF_ALG protocol family Mar 20 21:25:17.344198 systemd[1]: Started sshd@8-10.0.0.95:22-10.0.0.1:55120.service - OpenSSH per-connection server daemon (10.0.0.1:55120). Mar 20 21:25:17.400468 sshd[3637]: Accepted publickey for core from 10.0.0.1 port 55120 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:25:17.401353 sshd-session[3637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:25:17.408792 systemd-logind[1468]: New session 9 of user core. Mar 20 21:25:17.419267 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 20 21:25:17.557183 sshd[3656]: Connection closed by 10.0.0.1 port 55120 Mar 20 21:25:17.557436 sshd-session[3637]: pam_unix(sshd:session): session closed for user core Mar 20 21:25:17.560733 systemd[1]: sshd@8-10.0.0.95:22-10.0.0.1:55120.service: Deactivated successfully. Mar 20 21:25:17.562803 systemd[1]: session-9.scope: Deactivated successfully. Mar 20 21:25:17.564900 systemd-logind[1468]: Session 9 logged out. Waiting for processes to exit. Mar 20 21:25:17.566109 systemd-logind[1468]: Removed session 9. 
Mar 20 21:25:17.804916 systemd-networkd[1402]: lxc_health: Link UP Mar 20 21:25:17.805151 systemd-networkd[1402]: lxc_health: Gained carrier Mar 20 21:25:18.209213 systemd-networkd[1402]: lxc0b013a357501: Link UP Mar 20 21:25:18.210689 kernel: eth0: renamed from tmpe5d4e Mar 20 21:25:18.226219 systemd-networkd[1402]: lxc231fe2b9b3ab: Link UP Mar 20 21:25:18.229300 systemd-networkd[1402]: lxc0b013a357501: Gained carrier Mar 20 21:25:18.229749 kernel: eth0: renamed from tmp65496 Mar 20 21:25:18.238542 systemd-networkd[1402]: lxc231fe2b9b3ab: Gained carrier Mar 20 21:25:18.672907 systemd-networkd[1402]: cilium_vxlan: Gained IPv6LL Mar 20 21:25:19.505139 systemd-networkd[1402]: lxc_health: Gained IPv6LL Mar 20 21:25:20.017065 systemd-networkd[1402]: lxc231fe2b9b3ab: Gained IPv6LL Mar 20 21:25:20.273100 systemd-networkd[1402]: lxc0b013a357501: Gained IPv6LL Mar 20 21:25:21.727754 containerd[1488]: time="2025-03-20T21:25:21.727701137Z" level=info msg="connecting to shim e5d4e32240521a8b06a65075c983489cd114ce64e5730e9814a9802d31dba5ee" address="unix:///run/containerd/s/f419c468af36f0cca701d4d227fbe4d2a87a1b347c1b0b7f8b7b9323345c971a" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:25:21.730603 containerd[1488]: time="2025-03-20T21:25:21.730197553Z" level=info msg="connecting to shim 6549657ff3b5cd313c941bd8decd5b5db1e5f2822d857ec2b3a4fcff6dbabee9" address="unix:///run/containerd/s/646b2e254cc1b106bddbafaf7f4cc7465368ad17aead1dcf6c5cf4c1020d835c" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:25:21.761878 systemd[1]: Started cri-containerd-e5d4e32240521a8b06a65075c983489cd114ce64e5730e9814a9802d31dba5ee.scope - libcontainer container e5d4e32240521a8b06a65075c983489cd114ce64e5730e9814a9802d31dba5ee. Mar 20 21:25:21.765226 systemd[1]: Started cri-containerd-6549657ff3b5cd313c941bd8decd5b5db1e5f2822d857ec2b3a4fcff6dbabee9.scope - libcontainer container 6549657ff3b5cd313c941bd8decd5b5db1e5f2822d857ec2b3a4fcff6dbabee9. 
Mar 20 21:25:21.773553 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 21:25:21.787605 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 20 21:25:21.798480 containerd[1488]: time="2025-03-20T21:25:21.798446816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-csbqr,Uid:d6172a06-d562-425e-a828-943a1b20b0b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5d4e32240521a8b06a65075c983489cd114ce64e5730e9814a9802d31dba5ee\"" Mar 20 21:25:21.801916 containerd[1488]: time="2025-03-20T21:25:21.801428434Z" level=info msg="CreateContainer within sandbox \"e5d4e32240521a8b06a65075c983489cd114ce64e5730e9814a9802d31dba5ee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 20 21:25:21.810208 containerd[1488]: time="2025-03-20T21:25:21.809958371Z" level=info msg="Container 8620eb35fc6a0335d396b4164e8585b3efc6ddd0b7c5264a8b47e82b28423861: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:25:21.814728 containerd[1488]: time="2025-03-20T21:25:21.814690581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xhtbn,Uid:3d33b4d7-011b-40dc-9257-7b5c3419a746,Namespace:kube-system,Attempt:0,} returns sandbox id \"6549657ff3b5cd313c941bd8decd5b5db1e5f2822d857ec2b3a4fcff6dbabee9\"" Mar 20 21:25:21.817688 containerd[1488]: time="2025-03-20T21:25:21.817640276Z" level=info msg="CreateContainer within sandbox \"e5d4e32240521a8b06a65075c983489cd114ce64e5730e9814a9802d31dba5ee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8620eb35fc6a0335d396b4164e8585b3efc6ddd0b7c5264a8b47e82b28423861\"" Mar 20 21:25:21.818604 containerd[1488]: time="2025-03-20T21:25:21.818399541Z" level=info msg="StartContainer for \"8620eb35fc6a0335d396b4164e8585b3efc6ddd0b7c5264a8b47e82b28423861\"" Mar 20 21:25:21.818604 containerd[1488]: time="2025-03-20T21:25:21.818417703Z" level=info msg="CreateContainer within sandbox \"6549657ff3b5cd313c941bd8decd5b5db1e5f2822d857ec2b3a4fcff6dbabee9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 20 21:25:21.819849 containerd[1488]: time="2025-03-20T21:25:21.819818904Z" level=info msg="connecting to shim 8620eb35fc6a0335d396b4164e8585b3efc6ddd0b7c5264a8b47e82b28423861" address="unix:///run/containerd/s/f419c468af36f0cca701d4d227fbe4d2a87a1b347c1b0b7f8b7b9323345c971a" protocol=ttrpc version=3 Mar 20 21:25:21.830690 containerd[1488]: time="2025-03-20T21:25:21.830645440Z" level=info msg="Container 248683eb3b088736e82d7ac1ccd5bb6c8458fe939cbe1d8b184248dce8455138: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:25:21.840847 systemd[1]: Started cri-containerd-8620eb35fc6a0335d396b4164e8585b3efc6ddd0b7c5264a8b47e82b28423861.scope - libcontainer container 8620eb35fc6a0335d396b4164e8585b3efc6ddd0b7c5264a8b47e82b28423861. 
Mar 20 21:25:21.842705 containerd[1488]: time="2025-03-20T21:25:21.842658399Z" level=info msg="CreateContainer within sandbox \"6549657ff3b5cd313c941bd8decd5b5db1e5f2822d857ec2b3a4fcff6dbabee9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"248683eb3b088736e82d7ac1ccd5bb6c8458fe939cbe1d8b184248dce8455138\"" Mar 20 21:25:21.843635 containerd[1488]: time="2025-03-20T21:25:21.843237170Z" level=info msg="StartContainer for \"248683eb3b088736e82d7ac1ccd5bb6c8458fe939cbe1d8b184248dce8455138\"" Mar 20 21:25:21.844097 containerd[1488]: time="2025-03-20T21:25:21.844068841Z" level=info msg="connecting to shim 248683eb3b088736e82d7ac1ccd5bb6c8458fe939cbe1d8b184248dce8455138" address="unix:///run/containerd/s/646b2e254cc1b106bddbafaf7f4cc7465368ad17aead1dcf6c5cf4c1020d835c" protocol=ttrpc version=3 Mar 20 21:25:21.865485 systemd[1]: Started cri-containerd-248683eb3b088736e82d7ac1ccd5bb6c8458fe939cbe1d8b184248dce8455138.scope - libcontainer container 248683eb3b088736e82d7ac1ccd5bb6c8458fe939cbe1d8b184248dce8455138. Mar 20 21:25:21.868860 containerd[1488]: time="2025-03-20T21:25:21.868816982Z" level=info msg="StartContainer for \"8620eb35fc6a0335d396b4164e8585b3efc6ddd0b7c5264a8b47e82b28423861\" returns successfully" Mar 20 21:25:21.902825 containerd[1488]: time="2025-03-20T21:25:21.902775239Z" level=info msg="StartContainer for \"248683eb3b088736e82d7ac1ccd5bb6c8458fe939cbe1d8b184248dce8455138\" returns successfully" Mar 20 21:25:22.543616 kubelet[2725]: I0320 21:25:22.543442 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-csbqr" podStartSLOduration=21.543423704 podStartE2EDuration="21.543423704s" podCreationTimestamp="2025-03-20 21:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:25:22.542355974 +0000 UTC m=+37.198552253" watchObservedRunningTime="2025-03-20 21:25:22.543423704 +0000 UTC m=+37.199619943" Mar 20 21:25:22.557362 kubelet[2725]: I0320 21:25:22.557300 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xhtbn" podStartSLOduration=21.557286465 podStartE2EDuration="21.557286465s" podCreationTimestamp="2025-03-20 21:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:25:22.557030084 +0000 UTC m=+37.213226363" watchObservedRunningTime="2025-03-20 21:25:22.557286465 +0000 UTC m=+37.213482744" Mar 20 21:25:22.572736 systemd[1]: Started sshd@9-10.0.0.95:22-10.0.0.1:39952.service - OpenSSH per-connection server daemon (10.0.0.1:39952). Mar 20 21:25:22.624339 sshd[4083]: Accepted publickey for core from 10.0.0.1 port 39952 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:25:22.625246 sshd-session[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:25:22.629301 systemd-logind[1468]: New session 10 of user core. Mar 20 21:25:22.636859 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 20 21:25:22.751167 sshd[4091]: Connection closed by 10.0.0.1 port 39952 Mar 20 21:25:22.751805 sshd-session[4083]: pam_unix(sshd:session): session closed for user core Mar 20 21:25:22.755283 systemd[1]: sshd@9-10.0.0.95:22-10.0.0.1:39952.service: Deactivated successfully. Mar 20 21:25:22.757110 systemd[1]: session-10.scope: Deactivated successfully. 
Mar 20 21:25:22.757942 systemd-logind[1468]: Session 10 logged out. Waiting for processes to exit. Mar 20 21:25:22.758982 systemd-logind[1468]: Removed session 10. Mar 20 21:25:27.766185 systemd[1]: Started sshd@10-10.0.0.95:22-10.0.0.1:39962.service - OpenSSH per-connection server daemon (10.0.0.1:39962). Mar 20 21:25:27.814684 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 39962 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:25:27.815240 sshd-session[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:25:27.819664 systemd-logind[1468]: New session 11 of user core. Mar 20 21:25:27.830832 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 20 21:25:27.974489 sshd[4111]: Connection closed by 10.0.0.1 port 39962 Mar 20 21:25:27.975423 sshd-session[4109]: pam_unix(sshd:session): session closed for user core Mar 20 21:25:27.987097 systemd[1]: sshd@10-10.0.0.95:22-10.0.0.1:39962.service: Deactivated successfully. Mar 20 21:25:27.988714 systemd[1]: session-11.scope: Deactivated successfully. Mar 20 21:25:27.990564 systemd-logind[1468]: Session 11 logged out. Waiting for processes to exit. Mar 20 21:25:27.992188 systemd[1]: Started sshd@11-10.0.0.95:22-10.0.0.1:39964.service - OpenSSH per-connection server daemon (10.0.0.1:39964). Mar 20 21:25:27.992891 systemd-logind[1468]: Removed session 11. Mar 20 21:25:28.047416 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 39964 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:25:28.048775 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:25:28.052899 systemd-logind[1468]: New session 12 of user core. Mar 20 21:25:28.067860 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 20 21:25:28.222994 sshd[4128]: Connection closed by 10.0.0.1 port 39964 Mar 20 21:25:28.223899 sshd-session[4125]: pam_unix(sshd:session): session closed for user core Mar 20 21:25:28.241557 systemd[1]: sshd@11-10.0.0.95:22-10.0.0.1:39964.service: Deactivated successfully. Mar 20 21:25:28.248939 systemd[1]: session-12.scope: Deactivated successfully. Mar 20 21:25:28.250895 systemd-logind[1468]: Session 12 logged out. Waiting for processes to exit. Mar 20 21:25:28.254261 systemd[1]: Started sshd@12-10.0.0.95:22-10.0.0.1:39980.service - OpenSSH per-connection server daemon (10.0.0.1:39980). Mar 20 21:25:28.254992 systemd-logind[1468]: Removed session 12. Mar 20 21:25:28.304803 sshd[4139]: Accepted publickey for core from 10.0.0.1 port 39980 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:25:28.306050 sshd-session[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:25:28.310286 systemd-logind[1468]: New session 13 of user core. Mar 20 21:25:28.324871 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 20 21:25:28.441952 sshd[4142]: Connection closed by 10.0.0.1 port 39980 Mar 20 21:25:28.442287 sshd-session[4139]: pam_unix(sshd:session): session closed for user core Mar 20 21:25:28.445487 systemd[1]: sshd@12-10.0.0.95:22-10.0.0.1:39980.service: Deactivated successfully. Mar 20 21:25:28.448245 systemd[1]: session-13.scope: Deactivated successfully. Mar 20 21:25:28.449518 systemd-logind[1468]: Session 13 logged out. Waiting for processes to exit. Mar 20 21:25:28.451146 systemd-logind[1468]: Removed session 13. 
Mar 20 21:25:33.456193 systemd[1]: Started sshd@13-10.0.0.95:22-10.0.0.1:50952.service - OpenSSH per-connection server daemon (10.0.0.1:50952). Mar 20 21:25:33.508490 sshd[4159]: Accepted publickey for core from 10.0.0.1 port 50952 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:25:33.509759 sshd-session[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:25:33.513744 systemd-logind[1468]: New session 14 of user core. Mar 20 21:25:33.523882 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 20 21:25:33.630054 sshd[4161]: Connection closed by 10.0.0.1 port 50952 Mar 20 21:25:33.630402 sshd-session[4159]: pam_unix(sshd:session): session closed for user core Mar 20 21:25:33.633024 systemd[1]: sshd@13-10.0.0.95:22-10.0.0.1:50952.service: Deactivated successfully. Mar 20 21:25:33.634656 systemd[1]: session-14.scope: Deactivated successfully. Mar 20 21:25:33.636006 systemd-logind[1468]: Session 14 logged out. Waiting for processes to exit. Mar 20 21:25:33.637034 systemd-logind[1468]: Removed session 14. Mar 20 21:25:38.642062 systemd[1]: Started sshd@14-10.0.0.95:22-10.0.0.1:50966.service - OpenSSH per-connection server daemon (10.0.0.1:50966). Mar 20 21:25:38.708788 sshd[4175]: Accepted publickey for core from 10.0.0.1 port 50966 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:25:38.709915 sshd-session[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:25:38.714128 systemd-logind[1468]: New session 15 of user core. Mar 20 21:25:38.720799 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 20 21:25:38.835824 sshd[4177]: Connection closed by 10.0.0.1 port 50966 Mar 20 21:25:38.836569 sshd-session[4175]: pam_unix(sshd:session): session closed for user core Mar 20 21:25:38.847286 systemd[1]: sshd@14-10.0.0.95:22-10.0.0.1:50966.service: Deactivated successfully. Mar 20 21:25:38.848820 systemd[1]: session-15.scope: Deactivated successfully. Mar 20 21:25:38.850284 systemd-logind[1468]: Session 15 logged out. Waiting for processes to exit. Mar 20 21:25:38.852020 systemd[1]: Started sshd@15-10.0.0.95:22-10.0.0.1:50976.service - OpenSSH per-connection server daemon (10.0.0.1:50976). Mar 20 21:25:38.853181 systemd-logind[1468]: Removed session 15. Mar 20 21:25:38.928081 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 50976 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:25:38.929709 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:25:38.934250 systemd-logind[1468]: New session 16 of user core. Mar 20 21:25:38.955878 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 20 21:25:39.248547 sshd[4192]: Connection closed by 10.0.0.1 port 50976 Mar 20 21:25:39.249252 sshd-session[4189]: pam_unix(sshd:session): session closed for user core Mar 20 21:25:39.258981 systemd[1]: sshd@15-10.0.0.95:22-10.0.0.1:50976.service: Deactivated successfully. Mar 20 21:25:39.260487 systemd[1]: session-16.scope: Deactivated successfully. Mar 20 21:25:39.261190 systemd-logind[1468]: Session 16 logged out. Waiting for processes to exit. Mar 20 21:25:39.262967 systemd[1]: Started sshd@16-10.0.0.95:22-10.0.0.1:50990.service - OpenSSH per-connection server daemon (10.0.0.1:50990). Mar 20 21:25:39.263975 systemd-logind[1468]: Removed session 16. 
Mar 20 21:25:39.328120 sshd[4202]: Accepted publickey for core from 10.0.0.1 port 50990 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:25:39.329526 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:25:39.333709 systemd-logind[1468]: New session 17 of user core. Mar 20 21:25:39.345796 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 20 21:25:40.591726 sshd[4205]: Connection closed by 10.0.0.1 port 50990 Mar 20 21:25:40.593028 sshd-session[4202]: pam_unix(sshd:session): session closed for user core Mar 20 21:25:40.607169 systemd[1]: Started sshd@17-10.0.0.95:22-10.0.0.1:50992.service - OpenSSH per-connection server daemon (10.0.0.1:50992). Mar 20 21:25:40.608119 systemd[1]: sshd@16-10.0.0.95:22-10.0.0.1:50990.service: Deactivated successfully. Mar 20 21:25:40.609811 systemd[1]: session-17.scope: Deactivated successfully. Mar 20 21:25:40.613772 systemd-logind[1468]: Session 17 logged out. Waiting for processes to exit. Mar 20 21:25:40.624955 systemd-logind[1468]: Removed session 17. Mar 20 21:25:40.658892 sshd[4221]: Accepted publickey for core from 10.0.0.1 port 50992 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:25:40.660029 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:25:40.664296 systemd-logind[1468]: New session 18 of user core. Mar 20 21:25:40.676870 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 20 21:25:40.889151 sshd[4226]: Connection closed by 10.0.0.1 port 50992 Mar 20 21:25:40.889716 sshd-session[4221]: pam_unix(sshd:session): session closed for user core Mar 20 21:25:40.900440 systemd[1]: sshd@17-10.0.0.95:22-10.0.0.1:50992.service: Deactivated successfully. Mar 20 21:25:40.902128 systemd[1]: session-18.scope: Deactivated successfully. Mar 20 21:25:40.902805 systemd-logind[1468]: Session 18 logged out. Waiting for processes to exit. Mar 20 21:25:40.905327 systemd[1]: Started sshd@18-10.0.0.95:22-10.0.0.1:50994.service - OpenSSH per-connection server daemon (10.0.0.1:50994). Mar 20 21:25:40.908733 systemd-logind[1468]: Removed session 18. Mar 20 21:25:40.958996 sshd[4237]: Accepted publickey for core from 10.0.0.1 port 50994 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:25:40.960325 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:25:40.965209 systemd-logind[1468]: New session 19 of user core. Mar 20 21:25:40.973882 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 20 21:25:41.078888 sshd[4240]: Connection closed by 10.0.0.1 port 50994 Mar 20 21:25:41.079223 sshd-session[4237]: pam_unix(sshd:session): session closed for user core Mar 20 21:25:41.081970 systemd[1]: sshd@18-10.0.0.95:22-10.0.0.1:50994.service: Deactivated successfully. Mar 20 21:25:41.083698 systemd[1]: session-19.scope: Deactivated successfully. Mar 20 21:25:41.085063 systemd-logind[1468]: Session 19 logged out. Waiting for processes to exit. Mar 20 21:25:41.086102 systemd-logind[1468]: Removed session 19. Mar 20 21:25:46.099408 systemd[1]: Started sshd@19-10.0.0.95:22-10.0.0.1:60626.service - OpenSSH per-connection server daemon (10.0.0.1:60626). 
Mar 20 21:25:46.146066 sshd[4259]: Accepted publickey for core from 10.0.0.1 port 60626 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:25:46.147288 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:25:46.151262 systemd-logind[1468]: New session 20 of user core. Mar 20 21:25:46.160914 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 20 21:25:46.266777 sshd[4261]: Connection closed by 10.0.0.1 port 60626 Mar 20 21:25:46.267131 sshd-session[4259]: pam_unix(sshd:session): session closed for user core Mar 20 21:25:46.270484 systemd[1]: sshd@19-10.0.0.95:22-10.0.0.1:60626.service: Deactivated successfully. Mar 20 21:25:46.272403 systemd[1]: session-20.scope: Deactivated successfully. Mar 20 21:25:46.273160 systemd-logind[1468]: Session 20 logged out. Waiting for processes to exit. Mar 20 21:25:46.274358 systemd-logind[1468]: Removed session 20. Mar 20 21:25:51.279039 systemd[1]: Started sshd@20-10.0.0.95:22-10.0.0.1:60636.service - OpenSSH per-connection server daemon (10.0.0.1:60636). Mar 20 21:25:51.327486 sshd[4274]: Accepted publickey for core from 10.0.0.1 port 60636 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:25:51.329080 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:25:51.333727 systemd-logind[1468]: New session 21 of user core. Mar 20 21:25:51.346830 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 20 21:25:51.454750 sshd[4276]: Connection closed by 10.0.0.1 port 60636 Mar 20 21:25:51.455055 sshd-session[4274]: pam_unix(sshd:session): session closed for user core Mar 20 21:25:51.458419 systemd[1]: sshd@20-10.0.0.95:22-10.0.0.1:60636.service: Deactivated successfully. Mar 20 21:25:51.460226 systemd[1]: session-21.scope: Deactivated successfully. Mar 20 21:25:51.461096 systemd-logind[1468]: Session 21 logged out. Waiting for processes to exit. Mar 20 21:25:51.461862 systemd-logind[1468]: Removed session 21. Mar 20 21:25:56.465834 systemd[1]: Started sshd@21-10.0.0.95:22-10.0.0.1:35116.service - OpenSSH per-connection server daemon (10.0.0.1:35116). Mar 20 21:25:56.520543 sshd[4290]: Accepted publickey for core from 10.0.0.1 port 35116 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:25:56.521799 sshd-session[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:25:56.525908 systemd-logind[1468]: New session 22 of user core. Mar 20 21:25:56.534863 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 20 21:25:56.641614 sshd[4292]: Connection closed by 10.0.0.1 port 35116 Mar 20 21:25:56.642097 sshd-session[4290]: pam_unix(sshd:session): session closed for user core Mar 20 21:25:56.645175 systemd[1]: sshd@21-10.0.0.95:22-10.0.0.1:35116.service: Deactivated successfully. Mar 20 21:25:56.646734 systemd[1]: session-22.scope: Deactivated successfully. Mar 20 21:25:56.647312 systemd-logind[1468]: Session 22 logged out. Waiting for processes to exit. Mar 20 21:25:56.648105 systemd-logind[1468]: Removed session 22. Mar 20 21:26:01.653930 systemd[1]: Started sshd@22-10.0.0.95:22-10.0.0.1:35132.service - OpenSSH per-connection server daemon (10.0.0.1:35132). 
Mar 20 21:26:01.704544 sshd[4306]: Accepted publickey for core from 10.0.0.1 port 35132 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:26:01.705658 sshd-session[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:26:01.709458 systemd-logind[1468]: New session 23 of user core. Mar 20 21:26:01.722799 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 20 21:26:01.831565 sshd[4308]: Connection closed by 10.0.0.1 port 35132 Mar 20 21:26:01.832456 sshd-session[4306]: pam_unix(sshd:session): session closed for user core Mar 20 21:26:01.841894 systemd[1]: sshd@22-10.0.0.95:22-10.0.0.1:35132.service: Deactivated successfully. Mar 20 21:26:01.843374 systemd[1]: session-23.scope: Deactivated successfully. Mar 20 21:26:01.844006 systemd-logind[1468]: Session 23 logged out. Waiting for processes to exit. Mar 20 21:26:01.845798 systemd[1]: Started sshd@23-10.0.0.95:22-10.0.0.1:35136.service - OpenSSH per-connection server daemon (10.0.0.1:35136). Mar 20 21:26:01.847234 systemd-logind[1468]: Removed session 23. Mar 20 21:26:01.892519 sshd[4320]: Accepted publickey for core from 10.0.0.1 port 35136 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:26:01.893777 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:26:01.897683 systemd-logind[1468]: New session 24 of user core. Mar 20 21:26:01.905828 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 20 21:26:04.705158 containerd[1488]: time="2025-03-20T21:26:04.705080717Z" level=info msg="StopContainer for \"5373a1f4bd0dfcc4b773835378494b175f964df0f91916da6012f99f318abd7c\" with timeout 30 (s)" Mar 20 21:26:04.711186 containerd[1488]: time="2025-03-20T21:26:04.710800804Z" level=info msg="Stop container \"5373a1f4bd0dfcc4b773835378494b175f964df0f91916da6012f99f318abd7c\" with signal terminated" Mar 20 21:26:04.719632 systemd[1]: cri-containerd-5373a1f4bd0dfcc4b773835378494b175f964df0f91916da6012f99f318abd7c.scope: Deactivated successfully. Mar 20 21:26:04.721817 containerd[1488]: time="2025-03-20T21:26:04.721715405Z" level=info msg="received exit event container_id:\"5373a1f4bd0dfcc4b773835378494b175f964df0f91916da6012f99f318abd7c\" id:\"5373a1f4bd0dfcc4b773835378494b175f964df0f91916da6012f99f318abd7c\" pid:3277 exited_at:{seconds:1742505964 nanos:721447839}" Mar 20 21:26:04.721817 containerd[1488]: time="2025-03-20T21:26:04.721776846Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5373a1f4bd0dfcc4b773835378494b175f964df0f91916da6012f99f318abd7c\" id:\"5373a1f4bd0dfcc4b773835378494b175f964df0f91916da6012f99f318abd7c\" pid:3277 exited_at:{seconds:1742505964 nanos:721447839}" Mar 20 21:26:04.745711 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5373a1f4bd0dfcc4b773835378494b175f964df0f91916da6012f99f318abd7c-rootfs.mount: Deactivated successfully. 
Mar 20 21:26:04.747417 containerd[1488]: time="2025-03-20T21:26:04.747384292Z" level=info msg="TaskExit event in podsandbox handler container_id:\"987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8\" id:\"d755787a140843c4e27ffb44be69a3b84f7ab2de576f7ef76dee1aeefb98c089\" pid:4353 exited_at:{seconds:1742505964 nanos:747112686}" Mar 20 21:26:04.750072 containerd[1488]: time="2025-03-20T21:26:04.750040670Z" level=info msg="StopContainer for \"987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8\" with timeout 2 (s)" Mar 20 21:26:04.750460 containerd[1488]: time="2025-03-20T21:26:04.750437519Z" level=info msg="Stop container \"987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8\" with signal terminated" Mar 20 21:26:04.756458 systemd-networkd[1402]: lxc_health: Link DOWN Mar 20 21:26:04.756463 systemd-networkd[1402]: lxc_health: Lost carrier Mar 20 21:26:04.762110 containerd[1488]: time="2025-03-20T21:26:04.762071976Z" level=info msg="StopContainer for \"5373a1f4bd0dfcc4b773835378494b175f964df0f91916da6012f99f318abd7c\" returns successfully" Mar 20 21:26:04.764438 containerd[1488]: time="2025-03-20T21:26:04.764271385Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 20 21:26:04.764891 containerd[1488]: time="2025-03-20T21:26:04.764868638Z" level=info msg="StopPodSandbox for \"1d9186c53a2eaf7b22c0b3dd0c7a27a17f73eec4e1988b1208162f1388dfd5b7\"" Mar 20 21:26:04.765044 containerd[1488]: time="2025-03-20T21:26:04.764995761Z" level=info msg="Container to stop \"5373a1f4bd0dfcc4b773835378494b175f964df0f91916da6012f99f318abd7c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 21:26:04.771213 systemd[1]: cri-containerd-987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8.scope: Deactivated successfully. Mar 20 21:26:04.771561 systemd[1]: cri-containerd-987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8.scope: Consumed 6.452s CPU time, 123.3M memory peak, 144K read from disk, 12.9M written to disk. Mar 20 21:26:04.772488 systemd[1]: cri-containerd-1d9186c53a2eaf7b22c0b3dd0c7a27a17f73eec4e1988b1208162f1388dfd5b7.scope: Deactivated successfully. Mar 20 21:26:04.776179 containerd[1488]: time="2025-03-20T21:26:04.776131127Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d9186c53a2eaf7b22c0b3dd0c7a27a17f73eec4e1988b1208162f1388dfd5b7\" id:\"1d9186c53a2eaf7b22c0b3dd0c7a27a17f73eec4e1988b1208162f1388dfd5b7\" pid:3003 exit_status:137 exited_at:{seconds:1742505964 nanos:775587595}" Mar 20 21:26:04.783295 containerd[1488]: time="2025-03-20T21:26:04.783254684Z" level=info msg="received exit event container_id:\"987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8\" id:\"987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8\" pid:3384 exited_at:{seconds:1742505964 nanos:782959917}" Mar 20 21:26:04.802292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8-rootfs.mount: Deactivated successfully. Mar 20 21:26:04.804946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d9186c53a2eaf7b22c0b3dd0c7a27a17f73eec4e1988b1208162f1388dfd5b7-rootfs.mount: Deactivated successfully. 
Mar 20 21:26:04.812470 containerd[1488]: time="2025-03-20T21:26:04.812271485Z" level=info msg="shim disconnected" id=1d9186c53a2eaf7b22c0b3dd0c7a27a17f73eec4e1988b1208162f1388dfd5b7 namespace=k8s.io Mar 20 21:26:04.812665 containerd[1488]: time="2025-03-20T21:26:04.812447769Z" level=warning msg="cleaning up after shim disconnected" id=1d9186c53a2eaf7b22c0b3dd0c7a27a17f73eec4e1988b1208162f1388dfd5b7 namespace=k8s.io Mar 20 21:26:04.812665 containerd[1488]: time="2025-03-20T21:26:04.812490210Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 20 21:26:04.815962 containerd[1488]: time="2025-03-20T21:26:04.815927565Z" level=info msg="StopContainer for \"987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8\" returns successfully" Mar 20 21:26:04.816748 containerd[1488]: time="2025-03-20T21:26:04.816507618Z" level=info msg="StopPodSandbox for \"8c8ec08ff2e2237cd8dc87c524f370b6b43985fab2471eff5e784268a3ddbfaa\"" Mar 20 21:26:04.816748 containerd[1488]: time="2025-03-20T21:26:04.816569900Z" level=info msg="Container to stop \"c76acf8718dbfa7d0bedcc1ec5b17e06a7d9dfeeb007edef250e42b9aa597457\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 21:26:04.816748 containerd[1488]: time="2025-03-20T21:26:04.816582540Z" level=info msg="Container to stop \"bb5ce206b46c99d87990f72beda17676cd041b9e23e207cd286b0a84b468d48c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 21:26:04.816748 containerd[1488]: time="2025-03-20T21:26:04.816591260Z" level=info msg="Container to stop \"987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 21:26:04.816748 containerd[1488]: time="2025-03-20T21:26:04.816601620Z" level=info msg="Container to stop \"c1a1216c242833762bc6c140ad838b961e93f567e1243b8746fdd47cfcc8e9b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 21:26:04.816748 containerd[1488]: time="2025-03-20T21:26:04.816610341Z" level=info msg="Container to stop \"b3cf9cbf9fb1f2c35dbc514a3a5ca243a353c6c15129924086f703da021eb773\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 20 21:26:04.824076 systemd[1]: cri-containerd-8c8ec08ff2e2237cd8dc87c524f370b6b43985fab2471eff5e784268a3ddbfaa.scope: Deactivated successfully. 
Mar 20 21:26:04.825770 containerd[1488]: time="2025-03-20T21:26:04.825621180Z" level=info msg="TaskExit event in podsandbox handler container_id:\"987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8\" id:\"987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8\" pid:3384 exited_at:{seconds:1742505964 nanos:782959917}" Mar 20 21:26:04.826222 containerd[1488]: time="2025-03-20T21:26:04.826133351Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8c8ec08ff2e2237cd8dc87c524f370b6b43985fab2471eff5e784268a3ddbfaa\" id:\"8c8ec08ff2e2237cd8dc87c524f370b6b43985fab2471eff5e784268a3ddbfaa\" pid:2889 exit_status:137 exited_at:{seconds:1742505964 nanos:825840704}" Mar 20 21:26:04.826589 containerd[1488]: time="2025-03-20T21:26:04.826569120Z" level=info msg="TearDown network for sandbox \"1d9186c53a2eaf7b22c0b3dd0c7a27a17f73eec4e1988b1208162f1388dfd5b7\" successfully" Mar 20 21:26:04.826832 containerd[1488]: time="2025-03-20T21:26:04.826756765Z" level=info msg="StopPodSandbox for \"1d9186c53a2eaf7b22c0b3dd0c7a27a17f73eec4e1988b1208162f1388dfd5b7\" returns successfully" Mar 20 21:26:04.827415 containerd[1488]: time="2025-03-20T21:26:04.827344538Z" level=info msg="received exit event sandbox_id:\"1d9186c53a2eaf7b22c0b3dd0c7a27a17f73eec4e1988b1208162f1388dfd5b7\" exit_status:137 exited_at:{seconds:1742505964 nanos:775587595}" Mar 20 21:26:04.827609 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1d9186c53a2eaf7b22c0b3dd0c7a27a17f73eec4e1988b1208162f1388dfd5b7-shm.mount: Deactivated successfully. Mar 20 21:26:04.860942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c8ec08ff2e2237cd8dc87c524f370b6b43985fab2471eff5e784268a3ddbfaa-rootfs.mount: Deactivated successfully. Mar 20 21:26:04.866030 containerd[1488]: time="2025-03-20T21:26:04.865865948Z" level=info msg="received exit event sandbox_id:\"8c8ec08ff2e2237cd8dc87c524f370b6b43985fab2471eff5e784268a3ddbfaa\" exit_status:137 exited_at:{seconds:1742505964 nanos:825840704}" Mar 20 21:26:04.866533 containerd[1488]: time="2025-03-20T21:26:04.866410560Z" level=info msg="TearDown network for sandbox \"8c8ec08ff2e2237cd8dc87c524f370b6b43985fab2471eff5e784268a3ddbfaa\" successfully" Mar 20 21:26:04.866533 containerd[1488]: time="2025-03-20T21:26:04.866460041Z" level=info msg="StopPodSandbox for \"8c8ec08ff2e2237cd8dc87c524f370b6b43985fab2471eff5e784268a3ddbfaa\" returns successfully" Mar 20 21:26:04.867012 containerd[1488]: time="2025-03-20T21:26:04.866604005Z" level=info msg="shim disconnected" id=8c8ec08ff2e2237cd8dc87c524f370b6b43985fab2471eff5e784268a3ddbfaa namespace=k8s.io Mar 20 21:26:04.867012 containerd[1488]: time="2025-03-20T21:26:04.866625965Z" level=warning msg="cleaning up after shim disconnected" id=8c8ec08ff2e2237cd8dc87c524f370b6b43985fab2471eff5e784268a3ddbfaa namespace=k8s.io Mar 20 21:26:04.867012 containerd[1488]: time="2025-03-20T21:26:04.866653286Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 20 21:26:04.997496 kubelet[2725]: I0320 21:26:04.997382 2725 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/abd4b00c-7c04-4be1-ba49-bd313c3e13de-cilium-config-path\") pod \"abd4b00c-7c04-4be1-ba49-bd313c3e13de\" (UID: \"abd4b00c-7c04-4be1-ba49-bd313c3e13de\") " Mar 20 21:26:04.998017 kubelet[2725]: I0320 21:26:04.997557 2725 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-etc-cni-netd\") pod \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\" (UID: \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\") " Mar 20 21:26:04.998017 kubelet[2725]: I0320 21:26:04.997583 2725 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-clustermesh-secrets\") pod \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\" (UID: \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\") " Mar 20 21:26:04.998017 kubelet[2725]: I0320 21:26:04.997598 2725 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-cilium-run\") pod \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\" (UID: \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\") " Mar 20 21:26:04.998017 kubelet[2725]: I0320 21:26:04.997618 2725 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-hubble-tls\") pod \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\" (UID: \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\") " Mar 20 21:26:04.998017 kubelet[2725]: I0320 21:26:04.997682 2725 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-bpf-maps\") pod \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\" (UID: \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\") " Mar 20 21:26:04.998017 kubelet[2725]: I0320 21:26:04.997717 2725 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-xtables-lock\") pod \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\" (UID: \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\") " Mar 20 21:26:04.998155 kubelet[2725]: I0320 21:26:04.997752 2725 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6f9z\" (UniqueName: \"kubernetes.io/projected/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-kube-api-access-f6f9z\") pod \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\" (UID: \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\") " Mar 20 21:26:04.998155 kubelet[2725]: I0320 21:26:04.997771 2725 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-host-proc-sys-net\") pod \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\" (UID: \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\") " Mar 20 21:26:04.998155 kubelet[2725]: I0320 21:26:04.997788 2725 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-cni-path\") pod \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\" (UID: \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\") " Mar 20 21:26:04.998155 kubelet[2725]: I0320 21:26:04.997804 2725 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-cilium-config-path\") pod \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\" (UID: \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\") " Mar 20 21:26:04.998155 kubelet[2725]: I0320 21:26:04.997818 2725 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-host-proc-sys-kernel\") pod \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\" (UID: \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\") " Mar 20 21:26:04.998155 kubelet[2725]: I0320 21:26:04.997835 2725 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-hostproc\") pod \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\" (UID: \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\") " Mar 20 21:26:04.998277 kubelet[2725]: I0320 21:26:04.997849 2725 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-cilium-cgroup\") pod \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\" (UID: \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\") " Mar 20 21:26:04.998277 kubelet[2725]: I0320 21:26:04.997863 2725 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-lib-modules\") pod \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\" (UID: \"e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf\") " Mar 20 21:26:04.998277 kubelet[2725]: I0320 21:26:04.997882 2725 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n227p\" (UniqueName: \"kubernetes.io/projected/abd4b00c-7c04-4be1-ba49-bd313c3e13de-kube-api-access-n227p\") pod \"abd4b00c-7c04-4be1-ba49-bd313c3e13de\" (UID: \"abd4b00c-7c04-4be1-ba49-bd313c3e13de\") " Mar 20 21:26:05.009817 kubelet[2725]: I0320 21:26:05.009268 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf" (UID: "e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:26:05.009817 kubelet[2725]: I0320 21:26:05.009267 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abd4b00c-7c04-4be1-ba49-bd313c3e13de-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "abd4b00c-7c04-4be1-ba49-bd313c3e13de" (UID: "abd4b00c-7c04-4be1-ba49-bd313c3e13de"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 21:26:05.009817 kubelet[2725]: I0320 21:26:05.009368 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf" (UID: "e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 21:26:05.009817 kubelet[2725]: I0320 21:26:05.009417 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf" (UID: "e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:26:05.009817 kubelet[2725]: I0320 21:26:05.009438 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abd4b00c-7c04-4be1-ba49-bd313c3e13de-kube-api-access-n227p" (OuterVolumeSpecName: "kube-api-access-n227p") pod "abd4b00c-7c04-4be1-ba49-bd313c3e13de" (UID: "abd4b00c-7c04-4be1-ba49-bd313c3e13de"). InnerVolumeSpecName "kube-api-access-n227p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 21:26:05.010943 kubelet[2725]: I0320 21:26:05.010840 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-hostproc" (OuterVolumeSpecName: "hostproc") pod "e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf" (UID: "e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:26:05.010943 kubelet[2725]: I0320 21:26:05.010887 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf" (UID: "e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:26:05.010943 kubelet[2725]: I0320 21:26:05.010908 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf" (UID: "e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:26:05.010943 kubelet[2725]: I0320 21:26:05.010924 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf" (UID: "e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:26:05.010943 kubelet[2725]: I0320 21:26:05.010939 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf" (UID: "e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:26:05.011363 kubelet[2725]: I0320 21:26:05.011302 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf" (UID: "e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 21:26:05.011363 kubelet[2725]: I0320 21:26:05.011343 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-cni-path" (OuterVolumeSpecName: "cni-path") pod "e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf" (UID: "e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:26:05.012683 kubelet[2725]: I0320 21:26:05.011918 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf" (UID: "e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 21:26:05.013142 kubelet[2725]: I0320 21:26:05.013110 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf" (UID: "e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:26:05.013934 kubelet[2725]: I0320 21:26:05.013902 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-kube-api-access-f6f9z" (OuterVolumeSpecName: "kube-api-access-f6f9z") pod "e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf" (UID: "e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf"). InnerVolumeSpecName "kube-api-access-f6f9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 21:26:05.014342 kubelet[2725]: I0320 21:26:05.014319 2725 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf" (UID: "e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 21:26:05.098113 kubelet[2725]: I0320 21:26:05.098072 2725 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 20 21:26:05.098113 kubelet[2725]: I0320 21:26:05.098107 2725 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/abd4b00c-7c04-4be1-ba49-bd313c3e13de-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 20 21:26:05.098113 kubelet[2725]: I0320 21:26:05.098118 2725 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 20 21:26:05.098113 kubelet[2725]: I0320 21:26:05.098126 2725 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 20 21:26:05.098318 kubelet[2725]: I0320 21:26:05.098134 2725 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 20 21:26:05.098318 kubelet[2725]: I0320 21:26:05.098143 2725 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-f6f9z\" (UniqueName: \"kubernetes.io/projected/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-kube-api-access-f6f9z\") on node \"localhost\" DevicePath \"\"" Mar 20 21:26:05.098318 kubelet[2725]: I0320 21:26:05.098151 2725 reconciler_common.go:289] "Volume detached for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 20 21:26:05.098318 kubelet[2725]: I0320 21:26:05.098158 2725 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 20 21:26:05.098318 kubelet[2725]: I0320 21:26:05.098166 2725 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 20 21:26:05.098318 kubelet[2725]: I0320 21:26:05.098175 2725 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 20 21:26:05.098318 kubelet[2725]: I0320 21:26:05.098183 2725 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 20 21:26:05.098318 kubelet[2725]: I0320 21:26:05.098190 2725 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 20 21:26:05.098476 kubelet[2725]: I0320 21:26:05.098197 2725 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 20 21:26:05.098476 kubelet[2725]: I0320 21:26:05.098206 2725 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 20 21:26:05.098476 kubelet[2725]: I0320 21:26:05.098215 2725 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 20 21:26:05.098476 kubelet[2725]: I0320 21:26:05.098222 2725 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-n227p\" (UniqueName: \"kubernetes.io/projected/abd4b00c-7c04-4be1-ba49-bd313c3e13de-kube-api-access-n227p\") on node \"localhost\" DevicePath \"\"" Mar 20 21:26:05.433926 systemd[1]: Removed slice kubepods-burstable-pode1919ba0_6ff3_4fb9_8926_5d58a4ed86bf.slice - libcontainer container kubepods-burstable-pode1919ba0_6ff3_4fb9_8926_5d58a4ed86bf.slice. Mar 20 21:26:05.434040 systemd[1]: kubepods-burstable-pode1919ba0_6ff3_4fb9_8926_5d58a4ed86bf.slice: Consumed 6.596s CPU time, 123.6M memory peak, 164K read from disk, 12.9M written to disk. Mar 20 21:26:05.435432 systemd[1]: Removed slice kubepods-besteffort-podabd4b00c_7c04_4be1_ba49_bd313c3e13de.slice - libcontainer container kubepods-besteffort-podabd4b00c_7c04_4be1_ba49_bd313c3e13de.slice. 
Mar 20 21:26:05.470026 kubelet[2725]: E0320 21:26:05.469998 2725 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 20 21:26:05.610838 kubelet[2725]: I0320 21:26:05.610822 2725 scope.go:117] "RemoveContainer" containerID="5373a1f4bd0dfcc4b773835378494b175f964df0f91916da6012f99f318abd7c" Mar 20 21:26:05.613504 containerd[1488]: time="2025-03-20T21:26:05.613470437Z" level=info msg="RemoveContainer for \"5373a1f4bd0dfcc4b773835378494b175f964df0f91916da6012f99f318abd7c\"" Mar 20 21:26:05.620140 containerd[1488]: time="2025-03-20T21:26:05.620103979Z" level=info msg="RemoveContainer for \"5373a1f4bd0dfcc4b773835378494b175f964df0f91916da6012f99f318abd7c\" returns successfully" Mar 20 21:26:05.620427 kubelet[2725]: I0320 21:26:05.620342 2725 scope.go:117] "RemoveContainer" containerID="5373a1f4bd0dfcc4b773835378494b175f964df0f91916da6012f99f318abd7c" Mar 20 21:26:05.620600 containerd[1488]: time="2025-03-20T21:26:05.620569989Z" level=error msg="ContainerStatus for \"5373a1f4bd0dfcc4b773835378494b175f964df0f91916da6012f99f318abd7c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5373a1f4bd0dfcc4b773835378494b175f964df0f91916da6012f99f318abd7c\": not found" Mar 20 21:26:05.621719 kubelet[2725]: E0320 21:26:05.621690 2725 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5373a1f4bd0dfcc4b773835378494b175f964df0f91916da6012f99f318abd7c\": not found" containerID="5373a1f4bd0dfcc4b773835378494b175f964df0f91916da6012f99f318abd7c" Mar 20 21:26:05.621858 kubelet[2725]: I0320 21:26:05.621783 2725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5373a1f4bd0dfcc4b773835378494b175f964df0f91916da6012f99f318abd7c"} err="failed to get container status \"5373a1f4bd0dfcc4b773835378494b175f964df0f91916da6012f99f318abd7c\": rpc error: code = NotFound desc = an error occurred when try to find container \"5373a1f4bd0dfcc4b773835378494b175f964df0f91916da6012f99f318abd7c\": not found" Mar 20 21:26:05.622005 kubelet[2725]: I0320 21:26:05.621915 2725 scope.go:117] "RemoveContainer" containerID="987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8" Mar 20 21:26:05.624256 containerd[1488]: time="2025-03-20T21:26:05.624223347Z" level=info msg="RemoveContainer for \"987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8\"" Mar 20 21:26:05.629938 containerd[1488]: time="2025-03-20T21:26:05.629858627Z" level=info msg="RemoveContainer for \"987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8\" returns successfully" Mar 20 21:26:05.630176 kubelet[2725]: I0320 21:26:05.630098 2725 scope.go:117] "RemoveContainer" containerID="bb5ce206b46c99d87990f72beda17676cd041b9e23e207cd286b0a84b468d48c" Mar 20 21:26:05.632700 containerd[1488]: time="2025-03-20T21:26:05.632637007Z" level=info msg="RemoveContainer for \"bb5ce206b46c99d87990f72beda17676cd041b9e23e207cd286b0a84b468d48c\"" Mar 20 21:26:05.640766 containerd[1488]: time="2025-03-20T21:26:05.638524933Z" level=info msg="RemoveContainer for \"bb5ce206b46c99d87990f72beda17676cd041b9e23e207cd286b0a84b468d48c\" returns successfully" Mar 20 21:26:05.640874 kubelet[2725]: I0320 21:26:05.638802 2725 scope.go:117] "RemoveContainer" containerID="b3cf9cbf9fb1f2c35dbc514a3a5ca243a353c6c15129924086f703da021eb773" Mar 20 
21:26:05.642923 containerd[1488]: time="2025-03-20T21:26:05.642891866Z" level=info msg="RemoveContainer for \"b3cf9cbf9fb1f2c35dbc514a3a5ca243a353c6c15129924086f703da021eb773\"" Mar 20 21:26:05.646172 containerd[1488]: time="2025-03-20T21:26:05.646139736Z" level=info msg="RemoveContainer for \"b3cf9cbf9fb1f2c35dbc514a3a5ca243a353c6c15129924086f703da021eb773\" returns successfully" Mar 20 21:26:05.646343 kubelet[2725]: I0320 21:26:05.646309 2725 scope.go:117] "RemoveContainer" containerID="c76acf8718dbfa7d0bedcc1ec5b17e06a7d9dfeeb007edef250e42b9aa597457" Mar 20 21:26:05.647914 containerd[1488]: time="2025-03-20T21:26:05.647824452Z" level=info msg="RemoveContainer for \"c76acf8718dbfa7d0bedcc1ec5b17e06a7d9dfeeb007edef250e42b9aa597457\"" Mar 20 21:26:05.650845 containerd[1488]: time="2025-03-20T21:26:05.650810036Z" level=info msg="RemoveContainer for \"c76acf8718dbfa7d0bedcc1ec5b17e06a7d9dfeeb007edef250e42b9aa597457\" returns successfully" Mar 20 21:26:05.650997 kubelet[2725]: I0320 21:26:05.650949 2725 scope.go:117] "RemoveContainer" containerID="c1a1216c242833762bc6c140ad838b961e93f567e1243b8746fdd47cfcc8e9b6" Mar 20 21:26:05.652396 containerd[1488]: time="2025-03-20T21:26:05.652373269Z" level=info msg="RemoveContainer for \"c1a1216c242833762bc6c140ad838b961e93f567e1243b8746fdd47cfcc8e9b6\"" Mar 20 21:26:05.655080 containerd[1488]: time="2025-03-20T21:26:05.655053446Z" level=info msg="RemoveContainer for \"c1a1216c242833762bc6c140ad838b961e93f567e1243b8746fdd47cfcc8e9b6\" returns successfully" Mar 20 21:26:05.655217 kubelet[2725]: I0320 21:26:05.655197 2725 scope.go:117] "RemoveContainer" containerID="987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8" Mar 20 21:26:05.655410 containerd[1488]: time="2025-03-20T21:26:05.655381773Z" level=error msg="ContainerStatus for \"987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8\": not found" Mar 20 21:26:05.655514 kubelet[2725]: E0320 21:26:05.655492 2725 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8\": not found" containerID="987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8" Mar 20 21:26:05.655546 kubelet[2725]: I0320 21:26:05.655520 2725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8"} err="failed to get container status \"987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8\": rpc error: code = NotFound desc = an error occurred when try to find container \"987b64d8b27befd2466904b6128ca7b33e72a673f402d80de037eb05f2215ac8\": not found" Mar 20 21:26:05.655578 kubelet[2725]: I0320 21:26:05.655545 2725 scope.go:117] "RemoveContainer" containerID="bb5ce206b46c99d87990f72beda17676cd041b9e23e207cd286b0a84b468d48c" Mar 20 21:26:05.655715 containerd[1488]: time="2025-03-20T21:26:05.655687860Z" level=error msg="ContainerStatus for \"bb5ce206b46c99d87990f72beda17676cd041b9e23e207cd286b0a84b468d48c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb5ce206b46c99d87990f72beda17676cd041b9e23e207cd286b0a84b468d48c\": not found" Mar 20 21:26:05.655821 kubelet[2725]: E0320 21:26:05.655801 2725 remote_runtime.go:432] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb5ce206b46c99d87990f72beda17676cd041b9e23e207cd286b0a84b468d48c\": not found" containerID="bb5ce206b46c99d87990f72beda17676cd041b9e23e207cd286b0a84b468d48c" Mar 20 21:26:05.655855 kubelet[2725]: I0320 21:26:05.655829 2725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bb5ce206b46c99d87990f72beda17676cd041b9e23e207cd286b0a84b468d48c"} err="failed to get container status \"bb5ce206b46c99d87990f72beda17676cd041b9e23e207cd286b0a84b468d48c\": rpc error: code = NotFound desc = an error occurred when try to find container \"bb5ce206b46c99d87990f72beda17676cd041b9e23e207cd286b0a84b468d48c\": not found" Mar 20 21:26:05.655855 kubelet[2725]: I0320 21:26:05.655849 2725 scope.go:117] "RemoveContainer" containerID="b3cf9cbf9fb1f2c35dbc514a3a5ca243a353c6c15129924086f703da021eb773" Mar 20 21:26:05.656109 containerd[1488]: time="2025-03-20T21:26:05.656082988Z" level=error msg="ContainerStatus for \"b3cf9cbf9fb1f2c35dbc514a3a5ca243a353c6c15129924086f703da021eb773\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b3cf9cbf9fb1f2c35dbc514a3a5ca243a353c6c15129924086f703da021eb773\": not found" Mar 20 21:26:05.656196 kubelet[2725]: E0320 21:26:05.656178 2725 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b3cf9cbf9fb1f2c35dbc514a3a5ca243a353c6c15129924086f703da021eb773\": not found" containerID="b3cf9cbf9fb1f2c35dbc514a3a5ca243a353c6c15129924086f703da021eb773" Mar 20 21:26:05.656231 kubelet[2725]: I0320 21:26:05.656200 2725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b3cf9cbf9fb1f2c35dbc514a3a5ca243a353c6c15129924086f703da021eb773"} err="failed to get container status \"b3cf9cbf9fb1f2c35dbc514a3a5ca243a353c6c15129924086f703da021eb773\": rpc error: code = NotFound desc = an error occurred when try to find container \"b3cf9cbf9fb1f2c35dbc514a3a5ca243a353c6c15129924086f703da021eb773\": not found" Mar 20 21:26:05.656231 kubelet[2725]: I0320 21:26:05.656213 2725 scope.go:117] "RemoveContainer" containerID="c76acf8718dbfa7d0bedcc1ec5b17e06a7d9dfeeb007edef250e42b9aa597457" Mar 20 21:26:05.656339 containerd[1488]: time="2025-03-20T21:26:05.656317633Z" level=error msg="ContainerStatus for \"c76acf8718dbfa7d0bedcc1ec5b17e06a7d9dfeeb007edef250e42b9aa597457\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c76acf8718dbfa7d0bedcc1ec5b17e06a7d9dfeeb007edef250e42b9aa597457\": not found" Mar 20 21:26:05.656420 kubelet[2725]: E0320 21:26:05.656399 2725 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c76acf8718dbfa7d0bedcc1ec5b17e06a7d9dfeeb007edef250e42b9aa597457\": not found" containerID="c76acf8718dbfa7d0bedcc1ec5b17e06a7d9dfeeb007edef250e42b9aa597457" Mar 20 21:26:05.656481 kubelet[2725]: I0320 21:26:05.656423 2725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c76acf8718dbfa7d0bedcc1ec5b17e06a7d9dfeeb007edef250e42b9aa597457"} err="failed to get container status \"c76acf8718dbfa7d0bedcc1ec5b17e06a7d9dfeeb007edef250e42b9aa597457\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"c76acf8718dbfa7d0bedcc1ec5b17e06a7d9dfeeb007edef250e42b9aa597457\": not found" Mar 20 21:26:05.656513 kubelet[2725]: I0320 21:26:05.656490 2725 scope.go:117] "RemoveContainer" containerID="c1a1216c242833762bc6c140ad838b961e93f567e1243b8746fdd47cfcc8e9b6" Mar 20 21:26:05.656663 containerd[1488]: time="2025-03-20T21:26:05.656643160Z" level=error msg="ContainerStatus for \"c1a1216c242833762bc6c140ad838b961e93f567e1243b8746fdd47cfcc8e9b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c1a1216c242833762bc6c140ad838b961e93f567e1243b8746fdd47cfcc8e9b6\": not found" Mar 20 21:26:05.656768 kubelet[2725]: E0320 21:26:05.656752 2725 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c1a1216c242833762bc6c140ad838b961e93f567e1243b8746fdd47cfcc8e9b6\": not found" containerID="c1a1216c242833762bc6c140ad838b961e93f567e1243b8746fdd47cfcc8e9b6" Mar 20 21:26:05.656801 kubelet[2725]: I0320 21:26:05.656769 2725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c1a1216c242833762bc6c140ad838b961e93f567e1243b8746fdd47cfcc8e9b6"} err="failed to get container status \"c1a1216c242833762bc6c140ad838b961e93f567e1243b8746fdd47cfcc8e9b6\": rpc error: code = NotFound desc = an error occurred when try to find container \"c1a1216c242833762bc6c140ad838b961e93f567e1243b8746fdd47cfcc8e9b6\": not found" Mar 20 21:26:05.745533 systemd[1]: var-lib-kubelet-pods-abd4b00c\x2d7c04\x2d4be1\x2dba49\x2dbd313c3e13de-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn227p.mount: Deactivated successfully. Mar 20 21:26:05.745645 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8c8ec08ff2e2237cd8dc87c524f370b6b43985fab2471eff5e784268a3ddbfaa-shm.mount: Deactivated successfully. Mar 20 21:26:05.745719 systemd[1]: var-lib-kubelet-pods-e1919ba0\x2d6ff3\x2d4fb9\x2d8926\x2d5d58a4ed86bf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df6f9z.mount: Deactivated successfully. Mar 20 21:26:05.745774 systemd[1]: var-lib-kubelet-pods-e1919ba0\x2d6ff3\x2d4fb9\x2d8926\x2d5d58a4ed86bf-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 20 21:26:05.745835 systemd[1]: var-lib-kubelet-pods-e1919ba0\x2d6ff3\x2d4fb9\x2d8926\x2d5d58a4ed86bf-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 20 21:26:06.672811 sshd[4323]: Connection closed by 10.0.0.1 port 35136 Mar 20 21:26:06.673187 sshd-session[4320]: pam_unix(sshd:session): session closed for user core Mar 20 21:26:06.683875 systemd[1]: sshd@23-10.0.0.95:22-10.0.0.1:35136.service: Deactivated successfully. Mar 20 21:26:06.685408 systemd[1]: session-24.scope: Deactivated successfully. Mar 20 21:26:06.685623 systemd[1]: session-24.scope: Consumed 2.145s CPU time, 25.8M memory peak. Mar 20 21:26:06.686655 systemd-logind[1468]: Session 24 logged out. Waiting for processes to exit. Mar 20 21:26:06.688073 systemd[1]: Started sshd@24-10.0.0.95:22-10.0.0.1:47266.service - OpenSSH per-connection server daemon (10.0.0.1:47266). Mar 20 21:26:06.689435 systemd-logind[1468]: Removed session 24. 
Mar 20 21:26:06.740572 sshd[4483]: Accepted publickey for core from 10.0.0.1 port 47266 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:26:06.741761 sshd-session[4483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:26:06.745478 systemd-logind[1468]: New session 25 of user core. Mar 20 21:26:06.755873 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 20 21:26:07.057489 kubelet[2725]: I0320 21:26:07.057300 2725 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-20T21:26:07Z","lastTransitionTime":"2025-03-20T21:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 20 21:26:07.428528 kubelet[2725]: I0320 21:26:07.428420 2725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abd4b00c-7c04-4be1-ba49-bd313c3e13de" path="/var/lib/kubelet/pods/abd4b00c-7c04-4be1-ba49-bd313c3e13de/volumes" Mar 20 21:26:07.429230 kubelet[2725]: I0320 21:26:07.429210 2725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf" path="/var/lib/kubelet/pods/e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf/volumes" Mar 20 21:26:07.820246 sshd[4486]: Connection closed by 10.0.0.1 port 47266 Mar 20 21:26:07.820888 sshd-session[4483]: pam_unix(sshd:session): session closed for user core Mar 20 21:26:07.834594 systemd[1]: sshd@24-10.0.0.95:22-10.0.0.1:47266.service: Deactivated successfully. Mar 20 21:26:07.837413 systemd[1]: session-25.scope: Deactivated successfully. Mar 20 21:26:07.838229 kubelet[2725]: I0320 21:26:07.837397 2725 topology_manager.go:215] "Topology Admit Handler" podUID="b49957f9-a8f5-4de0-bbc6-6543ad46bb4b" podNamespace="kube-system" podName="cilium-64tvw" Mar 20 21:26:07.838229 kubelet[2725]: E0320 21:26:07.837519 2725 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="abd4b00c-7c04-4be1-ba49-bd313c3e13de" containerName="cilium-operator" Mar 20 21:26:07.838229 kubelet[2725]: E0320 21:26:07.837528 2725 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf" containerName="mount-bpf-fs" Mar 20 21:26:07.838229 kubelet[2725]: E0320 21:26:07.837537 2725 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf" containerName="clean-cilium-state" Mar 20 21:26:07.838229 kubelet[2725]: E0320 21:26:07.837543 2725 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf" containerName="cilium-agent" Mar 20 21:26:07.838229 kubelet[2725]: E0320 21:26:07.837550 2725 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf" containerName="mount-cgroup" Mar 20 21:26:07.838229 kubelet[2725]: E0320 21:26:07.837555 2725 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf" containerName="apply-sysctl-overwrites" Mar 20 21:26:07.838229 kubelet[2725]: I0320 21:26:07.837579 2725 memory_manager.go:354] "RemoveStaleState removing state" podUID="abd4b00c-7c04-4be1-ba49-bd313c3e13de" containerName="cilium-operator" Mar 20 21:26:07.838229 kubelet[2725]: I0320 21:26:07.837585 2725 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1919ba0-6ff3-4fb9-8926-5d58a4ed86bf" containerName="cilium-agent" Mar 20 
21:26:07.840629 systemd-logind[1468]: Session 25 logged out. Waiting for processes to exit. Mar 20 21:26:07.847457 systemd[1]: Started sshd@25-10.0.0.95:22-10.0.0.1:47278.service - OpenSSH per-connection server daemon (10.0.0.1:47278). Mar 20 21:26:07.850210 systemd-logind[1468]: Removed session 25. Mar 20 21:26:07.856336 systemd[1]: Created slice kubepods-burstable-podb49957f9_a8f5_4de0_bbc6_6543ad46bb4b.slice - libcontainer container kubepods-burstable-podb49957f9_a8f5_4de0_bbc6_6543ad46bb4b.slice. Mar 20 21:26:07.903660 sshd[4497]: Accepted publickey for core from 10.0.0.1 port 47278 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:26:07.904878 sshd-session[4497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:26:07.909273 systemd-logind[1468]: New session 26 of user core. Mar 20 21:26:07.916845 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 20 21:26:07.966165 sshd[4500]: Connection closed by 10.0.0.1 port 47278 Mar 20 21:26:07.966639 sshd-session[4497]: pam_unix(sshd:session): session closed for user core Mar 20 21:26:07.980012 systemd[1]: sshd@25-10.0.0.95:22-10.0.0.1:47278.service: Deactivated successfully. Mar 20 21:26:07.982425 systemd[1]: session-26.scope: Deactivated successfully. Mar 20 21:26:07.984534 systemd-logind[1468]: Session 26 logged out. Waiting for processes to exit. Mar 20 21:26:07.986069 systemd[1]: Started sshd@26-10.0.0.95:22-10.0.0.1:47286.service - OpenSSH per-connection server daemon (10.0.0.1:47286). Mar 20 21:26:07.987360 systemd-logind[1468]: Removed session 26. Mar 20 21:26:08.012479 kubelet[2725]: I0320 21:26:08.012169 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6nr6\" (UniqueName: \"kubernetes.io/projected/b49957f9-a8f5-4de0-bbc6-6543ad46bb4b-kube-api-access-q6nr6\") pod \"cilium-64tvw\" (UID: \"b49957f9-a8f5-4de0-bbc6-6543ad46bb4b\") " pod="kube-system/cilium-64tvw" Mar 20 21:26:08.012479 kubelet[2725]: I0320 21:26:08.012212 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b49957f9-a8f5-4de0-bbc6-6543ad46bb4b-hostproc\") pod \"cilium-64tvw\" (UID: \"b49957f9-a8f5-4de0-bbc6-6543ad46bb4b\") " pod="kube-system/cilium-64tvw" Mar 20 21:26:08.012479 kubelet[2725]: I0320 21:26:08.012231 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b49957f9-a8f5-4de0-bbc6-6543ad46bb4b-clustermesh-secrets\") pod \"cilium-64tvw\" (UID: \"b49957f9-a8f5-4de0-bbc6-6543ad46bb4b\") " pod="kube-system/cilium-64tvw" Mar 20 21:26:08.012479 kubelet[2725]: I0320 21:26:08.012246 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b49957f9-a8f5-4de0-bbc6-6543ad46bb4b-cilium-config-path\") pod \"cilium-64tvw\" (UID: \"b49957f9-a8f5-4de0-bbc6-6543ad46bb4b\") " pod="kube-system/cilium-64tvw" Mar 20 21:26:08.012479 kubelet[2725]: I0320 21:26:08.012260 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b49957f9-a8f5-4de0-bbc6-6543ad46bb4b-hubble-tls\") pod \"cilium-64tvw\" (UID: \"b49957f9-a8f5-4de0-bbc6-6543ad46bb4b\") " pod="kube-system/cilium-64tvw" Mar 20 21:26:08.012479 kubelet[2725]: I0320 21:26:08.012276 2725 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b49957f9-a8f5-4de0-bbc6-6543ad46bb4b-cilium-cgroup\") pod \"cilium-64tvw\" (UID: \"b49957f9-a8f5-4de0-bbc6-6543ad46bb4b\") " pod="kube-system/cilium-64tvw" Mar 20 21:26:08.012721 kubelet[2725]: I0320 21:26:08.012289 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b49957f9-a8f5-4de0-bbc6-6543ad46bb4b-lib-modules\") pod \"cilium-64tvw\" (UID: \"b49957f9-a8f5-4de0-bbc6-6543ad46bb4b\") " pod="kube-system/cilium-64tvw" Mar 20 21:26:08.012721 kubelet[2725]: I0320 21:26:08.012305 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b49957f9-a8f5-4de0-bbc6-6543ad46bb4b-etc-cni-netd\") pod \"cilium-64tvw\" (UID: \"b49957f9-a8f5-4de0-bbc6-6543ad46bb4b\") " pod="kube-system/cilium-64tvw" Mar 20 21:26:08.012721 kubelet[2725]: I0320 21:26:08.012320 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b49957f9-a8f5-4de0-bbc6-6543ad46bb4b-bpf-maps\") pod \"cilium-64tvw\" (UID: \"b49957f9-a8f5-4de0-bbc6-6543ad46bb4b\") " pod="kube-system/cilium-64tvw" Mar 20 21:26:08.012721 kubelet[2725]: I0320 21:26:08.012334 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b49957f9-a8f5-4de0-bbc6-6543ad46bb4b-cni-path\") pod \"cilium-64tvw\" (UID: \"b49957f9-a8f5-4de0-bbc6-6543ad46bb4b\") " pod="kube-system/cilium-64tvw" Mar 20 21:26:08.012721 kubelet[2725]: I0320 21:26:08.012348 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b49957f9-a8f5-4de0-bbc6-6543ad46bb4b-xtables-lock\") pod \"cilium-64tvw\" (UID: \"b49957f9-a8f5-4de0-bbc6-6543ad46bb4b\") " pod="kube-system/cilium-64tvw" Mar 20 21:26:08.012721 kubelet[2725]: I0320 21:26:08.012362 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b49957f9-a8f5-4de0-bbc6-6543ad46bb4b-host-proc-sys-kernel\") pod \"cilium-64tvw\" (UID: \"b49957f9-a8f5-4de0-bbc6-6543ad46bb4b\") " pod="kube-system/cilium-64tvw" Mar 20 21:26:08.012841 kubelet[2725]: I0320 21:26:08.012378 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b49957f9-a8f5-4de0-bbc6-6543ad46bb4b-host-proc-sys-net\") pod \"cilium-64tvw\" (UID: \"b49957f9-a8f5-4de0-bbc6-6543ad46bb4b\") " pod="kube-system/cilium-64tvw" Mar 20 21:26:08.012841 kubelet[2725]: I0320 21:26:08.012393 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b49957f9-a8f5-4de0-bbc6-6543ad46bb4b-cilium-run\") pod \"cilium-64tvw\" (UID: \"b49957f9-a8f5-4de0-bbc6-6543ad46bb4b\") " pod="kube-system/cilium-64tvw" Mar 20 21:26:08.012841 kubelet[2725]: I0320 21:26:08.012409 2725 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/b49957f9-a8f5-4de0-bbc6-6543ad46bb4b-cilium-ipsec-secrets\") pod \"cilium-64tvw\" (UID: \"b49957f9-a8f5-4de0-bbc6-6543ad46bb4b\") " pod="kube-system/cilium-64tvw" Mar 20 21:26:08.033131 sshd[4506]: Accepted publickey for core from 10.0.0.1 port 47286 ssh2: RSA SHA256:X6VVi2zGwQT4vFw/VBKa9j3CAPR/1+qaKaiwBaTCF1Y Mar 20 21:26:08.034420 sshd-session[4506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 21:26:08.040476 systemd-logind[1468]: New session 27 of user core. Mar 20 21:26:08.051846 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 20 21:26:08.166748 containerd[1488]: time="2025-03-20T21:26:08.166265908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-64tvw,Uid:b49957f9-a8f5-4de0-bbc6-6543ad46bb4b,Namespace:kube-system,Attempt:0,}" Mar 20 21:26:08.181695 containerd[1488]: time="2025-03-20T21:26:08.180968074Z" level=info msg="connecting to shim 8dd81bbb456025e1fedce5a55b37f492e3658465e8d76aaa9a2426d872d937b7" address="unix:///run/containerd/s/9e80d6e8550c3d8b4a24b75bd34988d0c896a2de2ce815337ba0c08e370a5bfa" namespace=k8s.io protocol=ttrpc version=3 Mar 20 21:26:08.199813 systemd[1]: Started cri-containerd-8dd81bbb456025e1fedce5a55b37f492e3658465e8d76aaa9a2426d872d937b7.scope - libcontainer container 8dd81bbb456025e1fedce5a55b37f492e3658465e8d76aaa9a2426d872d937b7. Mar 20 21:26:08.221134 containerd[1488]: time="2025-03-20T21:26:08.221089695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-64tvw,Uid:b49957f9-a8f5-4de0-bbc6-6543ad46bb4b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8dd81bbb456025e1fedce5a55b37f492e3658465e8d76aaa9a2426d872d937b7\"" Mar 20 21:26:08.223434 containerd[1488]: time="2025-03-20T21:26:08.223408540Z" level=info msg="CreateContainer within sandbox \"8dd81bbb456025e1fedce5a55b37f492e3658465e8d76aaa9a2426d872d937b7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 20 21:26:08.228872 containerd[1488]: time="2025-03-20T21:26:08.228836205Z" level=info msg="Container 5565a1c764e11d352f544fbe13abc2435fac5a9015bdbd554dfe04aef0a8fc75: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:26:08.236693 containerd[1488]: time="2025-03-20T21:26:08.234582917Z" level=info msg="CreateContainer within sandbox \"8dd81bbb456025e1fedce5a55b37f492e3658465e8d76aaa9a2426d872d937b7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5565a1c764e11d352f544fbe13abc2435fac5a9015bdbd554dfe04aef0a8fc75\"" Mar 20 21:26:08.237013 containerd[1488]: time="2025-03-20T21:26:08.236981284Z" level=info msg="StartContainer for \"5565a1c764e11d352f544fbe13abc2435fac5a9015bdbd554dfe04aef0a8fc75\"" Mar 20 21:26:08.238124 containerd[1488]: time="2025-03-20T21:26:08.238093705Z" level=info msg="connecting to shim 5565a1c764e11d352f544fbe13abc2435fac5a9015bdbd554dfe04aef0a8fc75" address="unix:///run/containerd/s/9e80d6e8550c3d8b4a24b75bd34988d0c896a2de2ce815337ba0c08e370a5bfa" protocol=ttrpc version=3 Mar 20 21:26:08.265832 systemd[1]: Started cri-containerd-5565a1c764e11d352f544fbe13abc2435fac5a9015bdbd554dfe04aef0a8fc75.scope - libcontainer container 5565a1c764e11d352f544fbe13abc2435fac5a9015bdbd554dfe04aef0a8fc75. 
Mar 20 21:26:08.288938 containerd[1488]: time="2025-03-20T21:26:08.288904294Z" level=info msg="StartContainer for \"5565a1c764e11d352f544fbe13abc2435fac5a9015bdbd554dfe04aef0a8fc75\" returns successfully" Mar 20 21:26:08.303379 systemd[1]: cri-containerd-5565a1c764e11d352f544fbe13abc2435fac5a9015bdbd554dfe04aef0a8fc75.scope: Deactivated successfully. Mar 20 21:26:08.305875 containerd[1488]: time="2025-03-20T21:26:08.305826903Z" level=info msg="received exit event container_id:\"5565a1c764e11d352f544fbe13abc2435fac5a9015bdbd554dfe04aef0a8fc75\" id:\"5565a1c764e11d352f544fbe13abc2435fac5a9015bdbd554dfe04aef0a8fc75\" pid:4578 exited_at:{seconds:1742505968 nanos:305132209}" Mar 20 21:26:08.306486 containerd[1488]: time="2025-03-20T21:26:08.306456435Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5565a1c764e11d352f544fbe13abc2435fac5a9015bdbd554dfe04aef0a8fc75\" id:\"5565a1c764e11d352f544fbe13abc2435fac5a9015bdbd554dfe04aef0a8fc75\" pid:4578 exited_at:{seconds:1742505968 nanos:305132209}" Mar 20 21:26:08.631356 containerd[1488]: time="2025-03-20T21:26:08.631063509Z" level=info msg="CreateContainer within sandbox \"8dd81bbb456025e1fedce5a55b37f492e3658465e8d76aaa9a2426d872d937b7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 20 21:26:08.641878 containerd[1488]: time="2025-03-20T21:26:08.641788118Z" level=info msg="Container ff12c283e65eecb25ece86a34c2d8fca863a5a9e20834ed156024e546c1a2489: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:26:08.647846 containerd[1488]: time="2025-03-20T21:26:08.647749553Z" level=info msg="CreateContainer within sandbox \"8dd81bbb456025e1fedce5a55b37f492e3658465e8d76aaa9a2426d872d937b7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ff12c283e65eecb25ece86a34c2d8fca863a5a9e20834ed156024e546c1a2489\"" Mar 20 21:26:08.648387 containerd[1488]: time="2025-03-20T21:26:08.648364485Z" level=info msg="StartContainer for \"ff12c283e65eecb25ece86a34c2d8fca863a5a9e20834ed156024e546c1a2489\"" Mar 20 21:26:08.649351 containerd[1488]: time="2025-03-20T21:26:08.649328064Z" level=info msg="connecting to shim ff12c283e65eecb25ece86a34c2d8fca863a5a9e20834ed156024e546c1a2489" address="unix:///run/containerd/s/9e80d6e8550c3d8b4a24b75bd34988d0c896a2de2ce815337ba0c08e370a5bfa" protocol=ttrpc version=3 Mar 20 21:26:08.676880 systemd[1]: Started cri-containerd-ff12c283e65eecb25ece86a34c2d8fca863a5a9e20834ed156024e546c1a2489.scope - libcontainer container ff12c283e65eecb25ece86a34c2d8fca863a5a9e20834ed156024e546c1a2489. Mar 20 21:26:08.700211 containerd[1488]: time="2025-03-20T21:26:08.700176253Z" level=info msg="StartContainer for \"ff12c283e65eecb25ece86a34c2d8fca863a5a9e20834ed156024e546c1a2489\" returns successfully" Mar 20 21:26:08.711475 systemd[1]: cri-containerd-ff12c283e65eecb25ece86a34c2d8fca863a5a9e20834ed156024e546c1a2489.scope: Deactivated successfully. 
Mar 20 21:26:08.712746 containerd[1488]: time="2025-03-20T21:26:08.712711257Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff12c283e65eecb25ece86a34c2d8fca863a5a9e20834ed156024e546c1a2489\" id:\"ff12c283e65eecb25ece86a34c2d8fca863a5a9e20834ed156024e546c1a2489\" pid:4623 exited_at:{seconds:1742505968 nanos:712355010}" Mar 20 21:26:08.712833 containerd[1488]: time="2025-03-20T21:26:08.712765218Z" level=info msg="received exit event container_id:\"ff12c283e65eecb25ece86a34c2d8fca863a5a9e20834ed156024e546c1a2489\" id:\"ff12c283e65eecb25ece86a34c2d8fca863a5a9e20834ed156024e546c1a2489\" pid:4623 exited_at:{seconds:1742505968 nanos:712355010}" Mar 20 21:26:09.635258 containerd[1488]: time="2025-03-20T21:26:09.635190295Z" level=info msg="CreateContainer within sandbox \"8dd81bbb456025e1fedce5a55b37f492e3658465e8d76aaa9a2426d872d937b7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 20 21:26:09.667340 containerd[1488]: time="2025-03-20T21:26:09.666950534Z" level=info msg="Container 7593b601c95a26dd2d64020fec0ec09f12e132ab1eabff1d6f332bb2740a3528: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:26:09.674632 containerd[1488]: time="2025-03-20T21:26:09.674576037Z" level=info msg="CreateContainer within sandbox \"8dd81bbb456025e1fedce5a55b37f492e3658465e8d76aaa9a2426d872d937b7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7593b601c95a26dd2d64020fec0ec09f12e132ab1eabff1d6f332bb2740a3528\"" Mar 20 21:26:09.675165 containerd[1488]: time="2025-03-20T21:26:09.675108367Z" level=info msg="StartContainer for \"7593b601c95a26dd2d64020fec0ec09f12e132ab1eabff1d6f332bb2740a3528\"" Mar 20 21:26:09.676911 containerd[1488]: time="2025-03-20T21:26:09.676871960Z" level=info msg="connecting to shim 7593b601c95a26dd2d64020fec0ec09f12e132ab1eabff1d6f332bb2740a3528" address="unix:///run/containerd/s/9e80d6e8550c3d8b4a24b75bd34988d0c896a2de2ce815337ba0c08e370a5bfa" protocol=ttrpc version=3 Mar 20 21:26:09.698134 systemd[1]: Started cri-containerd-7593b601c95a26dd2d64020fec0ec09f12e132ab1eabff1d6f332bb2740a3528.scope - libcontainer container 7593b601c95a26dd2d64020fec0ec09f12e132ab1eabff1d6f332bb2740a3528. Mar 20 21:26:09.735436 systemd[1]: cri-containerd-7593b601c95a26dd2d64020fec0ec09f12e132ab1eabff1d6f332bb2740a3528.scope: Deactivated successfully. Mar 20 21:26:09.737770 containerd[1488]: time="2025-03-20T21:26:09.737465182Z" level=info msg="received exit event container_id:\"7593b601c95a26dd2d64020fec0ec09f12e132ab1eabff1d6f332bb2740a3528\" id:\"7593b601c95a26dd2d64020fec0ec09f12e132ab1eabff1d6f332bb2740a3528\" pid:4669 exited_at:{seconds:1742505969 nanos:737274899}" Mar 20 21:26:09.737770 containerd[1488]: time="2025-03-20T21:26:09.737653706Z" level=info msg="StartContainer for \"7593b601c95a26dd2d64020fec0ec09f12e132ab1eabff1d6f332bb2740a3528\" returns successfully" Mar 20 21:26:09.738131 containerd[1488]: time="2025-03-20T21:26:09.738101834Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7593b601c95a26dd2d64020fec0ec09f12e132ab1eabff1d6f332bb2740a3528\" id:\"7593b601c95a26dd2d64020fec0ec09f12e132ab1eabff1d6f332bb2740a3528\" pid:4669 exited_at:{seconds:1742505969 nanos:737274899}" Mar 20 21:26:09.763504 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7593b601c95a26dd2d64020fec0ec09f12e132ab1eabff1d6f332bb2740a3528-rootfs.mount: Deactivated successfully. 
Mar 20 21:26:10.470850 kubelet[2725]: E0320 21:26:10.470740 2725 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 20 21:26:10.651313 containerd[1488]: time="2025-03-20T21:26:10.650610007Z" level=info msg="CreateContainer within sandbox \"8dd81bbb456025e1fedce5a55b37f492e3658465e8d76aaa9a2426d872d937b7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 20 21:26:10.668745 containerd[1488]: time="2025-03-20T21:26:10.667612157Z" level=info msg="Container 816cca2c3054b82d1a993785d36e50168eeede1057a7331a9e49059acd0d3f9c: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:26:10.668977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3785132160.mount: Deactivated successfully. Mar 20 21:26:10.673498 containerd[1488]: time="2025-03-20T21:26:10.673460024Z" level=info msg="CreateContainer within sandbox \"8dd81bbb456025e1fedce5a55b37f492e3658465e8d76aaa9a2426d872d937b7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"816cca2c3054b82d1a993785d36e50168eeede1057a7331a9e49059acd0d3f9c\"" Mar 20 21:26:10.674018 containerd[1488]: time="2025-03-20T21:26:10.673966353Z" level=info msg="StartContainer for \"816cca2c3054b82d1a993785d36e50168eeede1057a7331a9e49059acd0d3f9c\"" Mar 20 21:26:10.675468 containerd[1488]: time="2025-03-20T21:26:10.675312338Z" level=info msg="connecting to shim 816cca2c3054b82d1a993785d36e50168eeede1057a7331a9e49059acd0d3f9c" address="unix:///run/containerd/s/9e80d6e8550c3d8b4a24b75bd34988d0c896a2de2ce815337ba0c08e370a5bfa" protocol=ttrpc version=3 Mar 20 21:26:10.698878 systemd[1]: Started cri-containerd-816cca2c3054b82d1a993785d36e50168eeede1057a7331a9e49059acd0d3f9c.scope - libcontainer container 816cca2c3054b82d1a993785d36e50168eeede1057a7331a9e49059acd0d3f9c. Mar 20 21:26:10.721197 systemd[1]: cri-containerd-816cca2c3054b82d1a993785d36e50168eeede1057a7331a9e49059acd0d3f9c.scope: Deactivated successfully. Mar 20 21:26:10.722559 containerd[1488]: time="2025-03-20T21:26:10.722523599Z" level=info msg="TaskExit event in podsandbox handler container_id:\"816cca2c3054b82d1a993785d36e50168eeede1057a7331a9e49059acd0d3f9c\" id:\"816cca2c3054b82d1a993785d36e50168eeede1057a7331a9e49059acd0d3f9c\" pid:4706 exited_at:{seconds:1742505970 nanos:722261195}" Mar 20 21:26:10.722648 containerd[1488]: time="2025-03-20T21:26:10.722623881Z" level=info msg="received exit event container_id:\"816cca2c3054b82d1a993785d36e50168eeede1057a7331a9e49059acd0d3f9c\" id:\"816cca2c3054b82d1a993785d36e50168eeede1057a7331a9e49059acd0d3f9c\" pid:4706 exited_at:{seconds:1742505970 nanos:722261195}" Mar 20 21:26:10.725466 containerd[1488]: time="2025-03-20T21:26:10.725398332Z" level=info msg="StartContainer for \"816cca2c3054b82d1a993785d36e50168eeede1057a7331a9e49059acd0d3f9c\" returns successfully" Mar 20 21:26:10.741091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-816cca2c3054b82d1a993785d36e50168eeede1057a7331a9e49059acd0d3f9c-rootfs.mount: Deactivated successfully. 
Mar 20 21:26:11.645720 containerd[1488]: time="2025-03-20T21:26:11.645584002Z" level=info msg="CreateContainer within sandbox \"8dd81bbb456025e1fedce5a55b37f492e3658465e8d76aaa9a2426d872d937b7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 20 21:26:11.656464 containerd[1488]: time="2025-03-20T21:26:11.656417353Z" level=info msg="Container 77ef7c4adfeccd843cace9876673ce31d93c21c40f374c53d51b882b3f1baf46: CDI devices from CRI Config.CDIDevices: []" Mar 20 21:26:11.664503 containerd[1488]: time="2025-03-20T21:26:11.664452975Z" level=info msg="CreateContainer within sandbox \"8dd81bbb456025e1fedce5a55b37f492e3658465e8d76aaa9a2426d872d937b7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"77ef7c4adfeccd843cace9876673ce31d93c21c40f374c53d51b882b3f1baf46\"" Mar 20 21:26:11.665266 containerd[1488]: time="2025-03-20T21:26:11.665223549Z" level=info msg="StartContainer for \"77ef7c4adfeccd843cace9876673ce31d93c21c40f374c53d51b882b3f1baf46\"" Mar 20 21:26:11.666129 containerd[1488]: time="2025-03-20T21:26:11.666104125Z" level=info msg="connecting to shim 77ef7c4adfeccd843cace9876673ce31d93c21c40f374c53d51b882b3f1baf46" address="unix:///run/containerd/s/9e80d6e8550c3d8b4a24b75bd34988d0c896a2de2ce815337ba0c08e370a5bfa" protocol=ttrpc version=3 Mar 20 21:26:11.666565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2013831946.mount: Deactivated successfully. Mar 20 21:26:11.689913 systemd[1]: Started cri-containerd-77ef7c4adfeccd843cace9876673ce31d93c21c40f374c53d51b882b3f1baf46.scope - libcontainer container 77ef7c4adfeccd843cace9876673ce31d93c21c40f374c53d51b882b3f1baf46. Mar 20 21:26:11.718090 containerd[1488]: time="2025-03-20T21:26:11.718054043Z" level=info msg="StartContainer for \"77ef7c4adfeccd843cace9876673ce31d93c21c40f374c53d51b882b3f1baf46\" returns successfully" Mar 20 21:26:11.770710 containerd[1488]: time="2025-03-20T21:26:11.770621253Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77ef7c4adfeccd843cace9876673ce31d93c21c40f374c53d51b882b3f1baf46\" id:\"56eac6e30eee623bfbccf0255747a53c14c26d3395284eefdeff752f6d6ca8bb\" pid:4774 exited_at:{seconds:1742505971 nanos:770338488}" Mar 20 21:26:11.969698 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Mar 20 21:26:14.443235 containerd[1488]: time="2025-03-20T21:26:14.443190604Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77ef7c4adfeccd843cace9876673ce31d93c21c40f374c53d51b882b3f1baf46\" id:\"1d2d1fa9f2505f81d866c58c0b6ad11a81d93a4cfee5d45da379639aaf16b239\" pid:5183 exit_status:1 exited_at:{seconds:1742505974 nanos:442824078}" Mar 20 21:26:14.783918 systemd-networkd[1402]: lxc_health: Link UP Mar 20 21:26:14.784175 systemd-networkd[1402]: lxc_health: Gained carrier Mar 20 21:26:16.184217 kubelet[2725]: I0320 21:26:16.184166 2725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-64tvw" podStartSLOduration=9.18415179 podStartE2EDuration="9.18415179s" podCreationTimestamp="2025-03-20 21:26:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 21:26:12.664083727 +0000 UTC m=+87.320280086" watchObservedRunningTime="2025-03-20 21:26:16.18415179 +0000 UTC m=+90.840348069" Mar 20 21:26:16.564828 containerd[1488]: time="2025-03-20T21:26:16.564714212Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"77ef7c4adfeccd843cace9876673ce31d93c21c40f374c53d51b882b3f1baf46\" id:\"ec7ee5665aff4c3786baf758c7e42706ec57fece1cbf0fac92099a01f31c697a\" pid:5312 exited_at:{seconds:1742505976 nanos:564424047}" Mar 20 21:26:16.592964 systemd-networkd[1402]: lxc_health: Gained IPv6LL Mar 20 21:26:18.659965 containerd[1488]: time="2025-03-20T21:26:18.659919455Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77ef7c4adfeccd843cace9876673ce31d93c21c40f374c53d51b882b3f1baf46\" id:\"37772a37f5fa5b30a59cab319d750ec9f7b83cd60116352c2717655ea6b04053\" pid:5345 exited_at:{seconds:1742505978 nanos:659489807}" Mar 20 21:26:20.761354 containerd[1488]: time="2025-03-20T21:26:20.761212103Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77ef7c4adfeccd843cace9876673ce31d93c21c40f374c53d51b882b3f1baf46\" id:\"c8f771255c14242b4b148860184dba7876e03b0fdcb43fbea1c68c93337b4fe3\" pid:5369 exited_at:{seconds:1742505980 nanos:760904250}" Mar 20 21:26:22.866462 containerd[1488]: time="2025-03-20T21:26:22.866422669Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77ef7c4adfeccd843cace9876673ce31d93c21c40f374c53d51b882b3f1baf46\" id:\"915ac0321cef5dd9e1f6420ff4c6db69943faf7810215edb83d53bae2e4f7005\" pid:5394 exited_at:{seconds:1742505982 nanos:866155258}" Mar 20 21:26:22.871122 sshd[4509]: Connection closed by 10.0.0.1 port 47286 Mar 20 21:26:22.871797 sshd-session[4506]: pam_unix(sshd:session): session closed for user core Mar 20 21:26:22.875226 systemd[1]: sshd@26-10.0.0.95:22-10.0.0.1:47286.service: Deactivated successfully. Mar 20 21:26:22.876763 systemd[1]: session-27.scope: Deactivated successfully. Mar 20 21:26:22.877471 systemd-logind[1468]: Session 27 logged out. Waiting for processes to exit. Mar 20 21:26:22.878568 systemd-logind[1468]: Removed session 27.