Mar 20 17:44:08.903366 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 20 17:44:08.903389 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Thu Mar 20 13:18:46 -00 2025
Mar 20 17:44:08.903399 kernel: KASLR enabled
Mar 20 17:44:08.903419 kernel: efi: EFI v2.7 by EDK II
Mar 20 17:44:08.903424 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Mar 20 17:44:08.903430 kernel: random: crng init done
Mar 20 17:44:08.903437 kernel: secureboot: Secure boot disabled
Mar 20 17:44:08.903443 kernel: ACPI: Early table checksum verification disabled
Mar 20 17:44:08.903449 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Mar 20 17:44:08.903456 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Mar 20 17:44:08.903462 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 17:44:08.903467 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 17:44:08.903473 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 17:44:08.903479 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 17:44:08.903486 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 17:44:08.903494 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 17:44:08.903500 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 17:44:08.903506 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 17:44:08.903511 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 20 17:44:08.903517 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Mar 20 17:44:08.903523 kernel: NUMA: Failed to initialise from firmware
Mar 20 17:44:08.903529 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Mar 20 17:44:08.903535 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Mar 20 17:44:08.903541 kernel: Zone ranges:
Mar 20 17:44:08.903547 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Mar 20 17:44:08.903554 kernel: DMA32 empty
Mar 20 17:44:08.903560 kernel: Normal empty
Mar 20 17:44:08.903566 kernel: Movable zone start for each node
Mar 20 17:44:08.903572 kernel: Early memory node ranges
Mar 20 17:44:08.903578 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Mar 20 17:44:08.903584 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Mar 20 17:44:08.903590 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Mar 20 17:44:08.903595 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Mar 20 17:44:08.903601 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Mar 20 17:44:08.903607 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Mar 20 17:44:08.903613 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Mar 20 17:44:08.903619 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Mar 20 17:44:08.903626 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Mar 20 17:44:08.903632 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Mar 20 17:44:08.903638 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Mar 20 17:44:08.903646 kernel: psci: probing for conduit method from ACPI.
Mar 20 17:44:08.903653 kernel: psci: PSCIv1.1 detected in firmware.
Mar 20 17:44:08.903659 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 20 17:44:08.903667 kernel: psci: Trusted OS migration not required
Mar 20 17:44:08.903673 kernel: psci: SMC Calling Convention v1.1
Mar 20 17:44:08.903680 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 20 17:44:08.903686 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 20 17:44:08.903692 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 20 17:44:08.903699 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Mar 20 17:44:08.903705 kernel: Detected PIPT I-cache on CPU0
Mar 20 17:44:08.903712 kernel: CPU features: detected: GIC system register CPU interface
Mar 20 17:44:08.903718 kernel: CPU features: detected: Hardware dirty bit management
Mar 20 17:44:08.903724 kernel: CPU features: detected: Spectre-v4
Mar 20 17:44:08.903732 kernel: CPU features: detected: Spectre-BHB
Mar 20 17:44:08.903738 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 20 17:44:08.903744 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 20 17:44:08.903751 kernel: CPU features: detected: ARM erratum 1418040
Mar 20 17:44:08.903757 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 20 17:44:08.903763 kernel: alternatives: applying boot alternatives
Mar 20 17:44:08.903770 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7e8d7de7ff8626488e956fa44b1348d7cdfde9b4a90f4fdae2fb2fe94dbb7bff
Mar 20 17:44:08.903777 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 20 17:44:08.903783 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 20 17:44:08.903790 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 20 17:44:08.903796 kernel: Fallback order for Node 0: 0
Mar 20 17:44:08.903804 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Mar 20 17:44:08.903818 kernel: Policy zone: DMA
Mar 20 17:44:08.903845 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 20 17:44:08.903852 kernel: software IO TLB: area num 4.
Mar 20 17:44:08.903858 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Mar 20 17:44:08.903865 kernel: Memory: 2387412K/2572288K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38464K init, 897K bss, 184876K reserved, 0K cma-reserved)
Mar 20 17:44:08.903871 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 20 17:44:08.903878 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 20 17:44:08.903884 kernel: rcu: RCU event tracing is enabled.
Mar 20 17:44:08.903891 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 20 17:44:08.903897 kernel: Trampoline variant of Tasks RCU enabled.
Mar 20 17:44:08.903904 kernel: Tracing variant of Tasks RCU enabled.
Mar 20 17:44:08.903912 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 20 17:44:08.903919 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 20 17:44:08.903925 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 20 17:44:08.903931 kernel: GICv3: 256 SPIs implemented
Mar 20 17:44:08.903937 kernel: GICv3: 0 Extended SPIs implemented
Mar 20 17:44:08.903943 kernel: Root IRQ handler: gic_handle_irq
Mar 20 17:44:08.903949 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 20 17:44:08.903956 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 20 17:44:08.903962 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 20 17:44:08.903968 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Mar 20 17:44:08.903975 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Mar 20 17:44:08.903983 kernel: GICv3: using LPI property table @0x00000000400f0000
Mar 20 17:44:08.903989 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Mar 20 17:44:08.903996 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 20 17:44:08.904002 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 20 17:44:08.904008 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 20 17:44:08.904015 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 20 17:44:08.904021 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 20 17:44:08.904028 kernel: arm-pv: using stolen time PV
Mar 20 17:44:08.904034 kernel: Console: colour dummy device 80x25
Mar 20 17:44:08.904041 kernel: ACPI: Core revision 20230628
Mar 20 17:44:08.904048 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 20 17:44:08.904056 kernel: pid_max: default: 32768 minimum: 301
Mar 20 17:44:08.904062 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 20 17:44:08.904069 kernel: landlock: Up and running.
Mar 20 17:44:08.904075 kernel: SELinux: Initializing.
Mar 20 17:44:08.904082 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 20 17:44:08.904088 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 20 17:44:08.904095 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 20 17:44:08.904102 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 20 17:44:08.904108 kernel: rcu: Hierarchical SRCU implementation.
Mar 20 17:44:08.904116 kernel: rcu: Max phase no-delay instances is 400.
Mar 20 17:44:08.904123 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 20 17:44:08.904129 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 20 17:44:08.904136 kernel: Remapping and enabling EFI services.
Mar 20 17:44:08.904142 kernel: smp: Bringing up secondary CPUs ...
Mar 20 17:44:08.904148 kernel: Detected PIPT I-cache on CPU1
Mar 20 17:44:08.904155 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 20 17:44:08.904162 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Mar 20 17:44:08.904168 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 20 17:44:08.904176 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 20 17:44:08.904183 kernel: Detected PIPT I-cache on CPU2
Mar 20 17:44:08.904195 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Mar 20 17:44:08.904209 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Mar 20 17:44:08.904216 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 20 17:44:08.904222 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Mar 20 17:44:08.904229 kernel: Detected PIPT I-cache on CPU3
Mar 20 17:44:08.904236 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Mar 20 17:44:08.904243 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Mar 20 17:44:08.904252 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 20 17:44:08.904259 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Mar 20 17:44:08.904266 kernel: smp: Brought up 1 node, 4 CPUs
Mar 20 17:44:08.904272 kernel: SMP: Total of 4 processors activated.
Mar 20 17:44:08.904280 kernel: CPU features: detected: 32-bit EL0 Support
Mar 20 17:44:08.904286 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 20 17:44:08.904293 kernel: CPU features: detected: Common not Private translations
Mar 20 17:44:08.904300 kernel: CPU features: detected: CRC32 instructions
Mar 20 17:44:08.904308 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 20 17:44:08.904315 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 20 17:44:08.904322 kernel: CPU features: detected: LSE atomic instructions
Mar 20 17:44:08.904329 kernel: CPU features: detected: Privileged Access Never
Mar 20 17:44:08.904336 kernel: CPU features: detected: RAS Extension Support
Mar 20 17:44:08.904343 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 20 17:44:08.904350 kernel: CPU: All CPU(s) started at EL1
Mar 20 17:44:08.904356 kernel: alternatives: applying system-wide alternatives
Mar 20 17:44:08.904363 kernel: devtmpfs: initialized
Mar 20 17:44:08.904370 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 20 17:44:08.904379 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 20 17:44:08.904385 kernel: pinctrl core: initialized pinctrl subsystem
Mar 20 17:44:08.904392 kernel: SMBIOS 3.0.0 present.
Mar 20 17:44:08.904399 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Mar 20 17:44:08.904406 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 20 17:44:08.904413 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 20 17:44:08.904420 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 20 17:44:08.904427 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 20 17:44:08.904435 kernel: audit: initializing netlink subsys (disabled)
Mar 20 17:44:08.904442 kernel: audit: type=2000 audit(0.026:1): state=initialized audit_enabled=0 res=1
Mar 20 17:44:08.904449 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 20 17:44:08.904456 kernel: cpuidle: using governor menu
Mar 20 17:44:08.904463 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 20 17:44:08.904470 kernel: ASID allocator initialised with 32768 entries
Mar 20 17:44:08.904477 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 20 17:44:08.904484 kernel: Serial: AMBA PL011 UART driver
Mar 20 17:44:08.904491 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 20 17:44:08.904498 kernel: Modules: 0 pages in range for non-PLT usage
Mar 20 17:44:08.904507 kernel: Modules: 509248 pages in range for PLT usage
Mar 20 17:44:08.904514 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 20 17:44:08.904520 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 20 17:44:08.904527 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 20 17:44:08.904547 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 20 17:44:08.904554 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 20 17:44:08.904561 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 20 17:44:08.904567 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 20 17:44:08.904575 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 20 17:44:08.904583 kernel: ACPI: Added _OSI(Module Device)
Mar 20 17:44:08.904590 kernel: ACPI: Added _OSI(Processor Device)
Mar 20 17:44:08.904597 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 20 17:44:08.904604 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 20 17:44:08.904611 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 20 17:44:08.904618 kernel: ACPI: Interpreter enabled
Mar 20 17:44:08.904625 kernel: ACPI: Using GIC for interrupt routing
Mar 20 17:44:08.904632 kernel: ACPI: MCFG table detected, 1 entries
Mar 20 17:44:08.904639 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 20 17:44:08.904647 kernel: printk: console [ttyAMA0] enabled
Mar 20 17:44:08.904654 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 20 17:44:08.904792 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 20 17:44:08.904903 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 20 17:44:08.904972 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 20 17:44:08.905039 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 20 17:44:08.905107 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 20 17:44:08.905119 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 20 17:44:08.905126 kernel: PCI host bridge to bus 0000:00
Mar 20 17:44:08.905199 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 20 17:44:08.905259 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 20 17:44:08.905319 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 20 17:44:08.905376 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 20 17:44:08.905460 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 20 17:44:08.905541 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Mar 20 17:44:08.905616 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Mar 20 17:44:08.905692 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Mar 20 17:44:08.905773 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 20 17:44:08.905881 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 20 17:44:08.905950 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Mar 20 17:44:08.906038 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Mar 20 17:44:08.906105 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 20 17:44:08.906164 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 20 17:44:08.906237 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 20 17:44:08.906246 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 20 17:44:08.906253 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 20 17:44:08.906260 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 20 17:44:08.906267 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 20 17:44:08.906275 kernel: iommu: Default domain type: Translated
Mar 20 17:44:08.906283 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 20 17:44:08.906290 kernel: efivars: Registered efivars operations
Mar 20 17:44:08.906297 kernel: vgaarb: loaded
Mar 20 17:44:08.906303 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 20 17:44:08.906310 kernel: VFS: Disk quotas dquot_6.6.0
Mar 20 17:44:08.906318 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 20 17:44:08.906325 kernel: pnp: PnP ACPI init
Mar 20 17:44:08.906395 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 20 17:44:08.906406 kernel: pnp: PnP ACPI: found 1 devices
Mar 20 17:44:08.906413 kernel: NET: Registered PF_INET protocol family
Mar 20 17:44:08.906420 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 20 17:44:08.906427 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 20 17:44:08.906434 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 20 17:44:08.906441 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 20 17:44:08.906448 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 20 17:44:08.906455 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 20 17:44:08.906462 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 20 17:44:08.906471 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 20 17:44:08.906478 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 20 17:44:08.906485 kernel: PCI: CLS 0 bytes, default 64
Mar 20 17:44:08.906492 kernel: kvm [1]: HYP mode not available
Mar 20 17:44:08.906499 kernel: Initialise system trusted keyrings
Mar 20 17:44:08.906505 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 20 17:44:08.906513 kernel: Key type asymmetric registered
Mar 20 17:44:08.906519 kernel: Asymmetric key parser 'x509' registered
Mar 20 17:44:08.906526 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 20 17:44:08.906534 kernel: io scheduler mq-deadline registered
Mar 20 17:44:08.906541 kernel: io scheduler kyber registered
Mar 20 17:44:08.906548 kernel: io scheduler bfq registered
Mar 20 17:44:08.906555 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 20 17:44:08.906562 kernel: ACPI: button: Power Button [PWRB]
Mar 20 17:44:08.906570 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 20 17:44:08.906634 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Mar 20 17:44:08.906643 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 20 17:44:08.906650 kernel: thunder_xcv, ver 1.0
Mar 20 17:44:08.906659 kernel: thunder_bgx, ver 1.0
Mar 20 17:44:08.906665 kernel: nicpf, ver 1.0
Mar 20 17:44:08.906672 kernel: nicvf, ver 1.0
Mar 20 17:44:08.906744 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 20 17:44:08.906806 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-20T17:44:08 UTC (1742492648)
Mar 20 17:44:08.906834 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 20 17:44:08.906842 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Mar 20 17:44:08.906849 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 20 17:44:08.906860 kernel: watchdog: Hard watchdog permanently disabled
Mar 20 17:44:08.906867 kernel: NET: Registered PF_INET6 protocol family
Mar 20 17:44:08.906874 kernel: Segment Routing with IPv6
Mar 20 17:44:08.906881 kernel: In-situ OAM (IOAM) with IPv6
Mar 20 17:44:08.906888 kernel: NET: Registered PF_PACKET protocol family
Mar 20 17:44:08.906894 kernel: Key type dns_resolver registered
Mar 20 17:44:08.906901 kernel: registered taskstats version 1
Mar 20 17:44:08.906908 kernel: Loading compiled-in X.509 certificates
Mar 20 17:44:08.906915 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 60ca5105dc3f344265f11c7b4aeda632cce92b3c'
Mar 20 17:44:08.906924 kernel: Key type .fscrypt registered
Mar 20 17:44:08.906930 kernel: Key type fscrypt-provisioning registered
Mar 20 17:44:08.906938 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 20 17:44:08.906945 kernel: ima: Allocated hash algorithm: sha1
Mar 20 17:44:08.906952 kernel: ima: No architecture policies found
Mar 20 17:44:08.906959 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 20 17:44:08.906966 kernel: clk: Disabling unused clocks
Mar 20 17:44:08.906973 kernel: Freeing unused kernel memory: 38464K
Mar 20 17:44:08.906980 kernel: Run /init as init process
Mar 20 17:44:08.906988 kernel: with arguments:
Mar 20 17:44:08.906995 kernel: /init
Mar 20 17:44:08.907002 kernel: with environment:
Mar 20 17:44:08.907009 kernel: HOME=/
Mar 20 17:44:08.907015 kernel: TERM=linux
Mar 20 17:44:08.907022 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 20 17:44:08.907030 systemd[1]: Successfully made /usr/ read-only.
Mar 20 17:44:08.907039 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 20 17:44:08.907049 systemd[1]: Detected virtualization kvm.
Mar 20 17:44:08.907056 systemd[1]: Detected architecture arm64.
Mar 20 17:44:08.907063 systemd[1]: Running in initrd.
Mar 20 17:44:08.907070 systemd[1]: No hostname configured, using default hostname.
Mar 20 17:44:08.907078 systemd[1]: Hostname set to .
Mar 20 17:44:08.907085 systemd[1]: Initializing machine ID from VM UUID.
Mar 20 17:44:08.907092 systemd[1]: Queued start job for default target initrd.target.
Mar 20 17:44:08.907100 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 20 17:44:08.907109 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 20 17:44:08.907117 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 20 17:44:08.907124 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 20 17:44:08.907132 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 20 17:44:08.907140 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 20 17:44:08.907149 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 20 17:44:08.907158 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 20 17:44:08.907166 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 20 17:44:08.907173 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 20 17:44:08.907181 systemd[1]: Reached target paths.target - Path Units.
Mar 20 17:44:08.907188 systemd[1]: Reached target slices.target - Slice Units.
Mar 20 17:44:08.907196 systemd[1]: Reached target swap.target - Swaps.
Mar 20 17:44:08.907204 systemd[1]: Reached target timers.target - Timer Units.
Mar 20 17:44:08.907211 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 20 17:44:08.907219 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 20 17:44:08.907228 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 20 17:44:08.907236 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 20 17:44:08.907243 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 20 17:44:08.907251 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 20 17:44:08.907258 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 20 17:44:08.907266 systemd[1]: Reached target sockets.target - Socket Units.
Mar 20 17:44:08.907274 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 20 17:44:08.907281 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 20 17:44:08.907291 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 20 17:44:08.907298 systemd[1]: Starting systemd-fsck-usr.service...
Mar 20 17:44:08.907306 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 20 17:44:08.907313 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 20 17:44:08.907321 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 20 17:44:08.907328 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 20 17:44:08.907336 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 20 17:44:08.907345 systemd[1]: Finished systemd-fsck-usr.service.
Mar 20 17:44:08.907353 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 20 17:44:08.907375 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 20 17:44:08.907402 systemd-journald[236]: Collecting audit messages is disabled.
Mar 20 17:44:08.907422 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 20 17:44:08.907431 systemd-journald[236]: Journal started
Mar 20 17:44:08.907449 systemd-journald[236]: Runtime Journal (/run/log/journal/ad08d6a931924723b7b95a59fbd47ad7) is 5.9M, max 47.3M, 41.4M free.
Mar 20 17:44:08.894897 systemd-modules-load[238]: Inserted module 'overlay'
Mar 20 17:44:08.909831 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 20 17:44:08.909851 kernel: Bridge firewalling registered
Mar 20 17:44:08.911251 systemd-modules-load[238]: Inserted module 'br_netfilter'
Mar 20 17:44:08.912908 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 20 17:44:08.922198 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 20 17:44:08.923420 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 20 17:44:08.927767 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 20 17:44:08.929924 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 20 17:44:08.936766 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 20 17:44:08.942639 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 20 17:44:08.944087 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 20 17:44:08.945347 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 20 17:44:08.947515 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 20 17:44:08.950599 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 20 17:44:08.953829 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 20 17:44:08.965742 dracut-cmdline[279]: dracut-dracut-053
Mar 20 17:44:08.968140 dracut-cmdline[279]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7e8d7de7ff8626488e956fa44b1348d7cdfde9b4a90f4fdae2fb2fe94dbb7bff
Mar 20 17:44:08.990065 systemd-resolved[280]: Positive Trust Anchors:
Mar 20 17:44:08.990079 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 20 17:44:08.990111 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 20 17:44:08.995075 systemd-resolved[280]: Defaulting to hostname 'linux'.
Mar 20 17:44:08.996072 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 20 17:44:08.999920 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 20 17:44:09.041850 kernel: SCSI subsystem initialized
Mar 20 17:44:09.046836 kernel: Loading iSCSI transport class v2.0-870.
Mar 20 17:44:09.055843 kernel: iscsi: registered transport (tcp)
Mar 20 17:44:09.068891 kernel: iscsi: registered transport (qla4xxx)
Mar 20 17:44:09.068941 kernel: QLogic iSCSI HBA Driver
Mar 20 17:44:09.109921 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 20 17:44:09.112232 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 20 17:44:09.148923 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 20 17:44:09.148979 kernel: device-mapper: uevent: version 1.0.3
Mar 20 17:44:09.150611 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 20 17:44:09.195864 kernel: raid6: neonx8 gen() 15786 MB/s
Mar 20 17:44:09.212844 kernel: raid6: neonx4 gen() 15758 MB/s
Mar 20 17:44:09.229854 kernel: raid6: neonx2 gen() 13434 MB/s
Mar 20 17:44:09.246854 kernel: raid6: neonx1 gen() 10494 MB/s
Mar 20 17:44:09.263850 kernel: raid6: int64x8 gen() 6788 MB/s
Mar 20 17:44:09.280844 kernel: raid6: int64x4 gen() 7344 MB/s
Mar 20 17:44:09.297844 kernel: raid6: int64x2 gen() 6101 MB/s
Mar 20 17:44:09.314972 kernel: raid6: int64x1 gen() 5050 MB/s
Mar 20 17:44:09.314984 kernel: raid6: using algorithm neonx8 gen() 15786 MB/s
Mar 20 17:44:09.332925 kernel: raid6: .... xor() 11993 MB/s, rmw enabled
Mar 20 17:44:09.332937 kernel: raid6: using neon recovery algorithm
Mar 20 17:44:09.338119 kernel: xor: measuring software checksum speed
Mar 20 17:44:09.338132 kernel: 8regs : 21636 MB/sec
Mar 20 17:44:09.338842 kernel: 32regs : 21676 MB/sec
Mar 20 17:44:09.339988 kernel: arm64_neon : 23196 MB/sec
Mar 20 17:44:09.339998 kernel: xor: using function: arm64_neon (23196 MB/sec)
Mar 20 17:44:09.388849 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 20 17:44:09.399210 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 20 17:44:09.401853 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 20 17:44:09.425287 systemd-udevd[465]: Using default interface naming scheme 'v255'.
Mar 20 17:44:09.429020 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 20 17:44:09.431891 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 20 17:44:09.454132 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation
Mar 20 17:44:09.479799 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 20 17:44:09.482046 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 20 17:44:09.540342 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 20 17:44:09.542790 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 20 17:44:09.563970 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 20 17:44:09.565531 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 20 17:44:09.566749 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 20 17:44:09.567943 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 20 17:44:09.570705 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 20 17:44:09.591798 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Mar 20 17:44:09.603172 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 20 17:44:09.603278 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 20 17:44:09.603289 kernel: GPT:9289727 != 19775487
Mar 20 17:44:09.603298 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 20 17:44:09.603314 kernel: GPT:9289727 != 19775487
Mar 20 17:44:09.603322 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 20 17:44:09.603331 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 20 17:44:09.591637 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 20 17:44:09.600860 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 20 17:44:09.600966 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 20 17:44:09.605064 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 20 17:44:09.606925 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 20 17:44:09.607132 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 20 17:44:09.611212 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 20 17:44:09.615419 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 20 17:44:09.626842 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by (udev-worker) (510)
Mar 20 17:44:09.628876 kernel: BTRFS: device fsid 7c452270-b08f-4ab0-84d1-fe3217dab188 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (513)
Mar 20 17:44:09.638068 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 20 17:44:09.645848 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 20 17:44:09.653386 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 20 17:44:09.660869 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 20 17:44:09.667030 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 20 17:44:09.668201 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 20 17:44:09.671188 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 20 17:44:09.673905 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 20 17:44:09.691176 disk-uuid[554]: Primary Header is updated.
Mar 20 17:44:09.691176 disk-uuid[554]: Secondary Entries is updated.
Mar 20 17:44:09.691176 disk-uuid[554]: Secondary Header is updated.
Mar 20 17:44:09.699852 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 20 17:44:09.701932 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 20 17:44:10.713975 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 20 17:44:10.714030 disk-uuid[559]: The operation has completed successfully.
Mar 20 17:44:10.739626 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 20 17:44:10.739740 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 20 17:44:10.764498 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 20 17:44:10.778853 sh[574]: Success
Mar 20 17:44:10.790841 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 20 17:44:10.819588 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 20 17:44:10.822312 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 20 17:44:10.837931 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 20 17:44:10.848815 kernel: BTRFS info (device dm-0): first mount of filesystem 7c452270-b08f-4ab0-84d1-fe3217dab188
Mar 20 17:44:10.848855 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 20 17:44:10.848865 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 20 17:44:10.850195 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 20 17:44:10.850219 kernel: BTRFS info (device dm-0): using free space tree
Mar 20 17:44:10.855523 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 20 17:44:10.856902 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 20 17:44:10.857671 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 20 17:44:10.860298 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 20 17:44:10.882285 kernel: BTRFS info (device vda6): first mount of filesystem 487c8301-a281-43a7-bff1-8a4858590ad6
Mar 20 17:44:10.882326 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Mar 20 17:44:10.882337 kernel: BTRFS info (device vda6): using free space tree
Mar 20 17:44:10.885834 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 20 17:44:10.889854 kernel: BTRFS info (device vda6): last unmount of filesystem 487c8301-a281-43a7-bff1-8a4858590ad6
Mar 20 17:44:10.893031 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 20 17:44:10.894879 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 20 17:44:10.958948 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 20 17:44:10.962085 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 20 17:44:11.005709 ignition[668]: Ignition 2.20.0
Mar 20 17:44:11.005719 ignition[668]: Stage: fetch-offline
Mar 20 17:44:11.005752 ignition[668]: no configs at "/usr/lib/ignition/base.d"
Mar 20 17:44:11.005760 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 20 17:44:11.005988 ignition[668]: parsed url from cmdline: ""
Mar 20 17:44:11.005992 ignition[668]: no config URL provided
Mar 20 17:44:11.005996 ignition[668]: reading system config file "/usr/lib/ignition/user.ign"
Mar 20 17:44:11.006005 ignition[668]: no config at "/usr/lib/ignition/user.ign"
Mar 20 17:44:11.006031 ignition[668]: op(1): [started] loading QEMU firmware config module
Mar 20 17:44:11.006036 ignition[668]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 20 17:44:11.011692 ignition[668]: op(1): [finished] loading QEMU firmware config module
Mar 20 17:44:11.016194 systemd-networkd[757]: lo: Link UP
Mar 20 17:44:11.016208 systemd-networkd[757]: lo: Gained carrier
Mar 20 17:44:11.017000 systemd-networkd[757]: Enumeration completed
Mar 20 17:44:11.017107 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 20 17:44:11.017559 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 20 17:44:11.017563 systemd-networkd[757]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 20 17:44:11.018391 systemd-networkd[757]: eth0: Link UP
Mar 20 17:44:11.018394 systemd-networkd[757]: eth0: Gained carrier
Mar 20 17:44:11.018400 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 20 17:44:11.019402 systemd[1]: Reached target network.target - Network.
Mar 20 17:44:11.031861 systemd-networkd[757]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 20 17:44:11.062038 ignition[668]: parsing config with SHA512: aab95a96a8d99a43e02390775cdb243f6cf65efa674e87d9f168c976fa914d939823e198846aa52d6b4530f4709a2d147ccbf6290d81642d23a656a65fbc4f52
Mar 20 17:44:11.067102 unknown[668]: fetched base config from "system"
Mar 20 17:44:11.067112 unknown[668]: fetched user config from "qemu"
Mar 20 17:44:11.067525 ignition[668]: fetch-offline: fetch-offline passed
Mar 20 17:44:11.068609 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 20 17:44:11.067592 ignition[668]: Ignition finished successfully
Mar 20 17:44:11.070363 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 20 17:44:11.071198 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 20 17:44:11.095165 ignition[770]: Ignition 2.20.0
Mar 20 17:44:11.095175 ignition[770]: Stage: kargs
Mar 20 17:44:11.095322 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Mar 20 17:44:11.095332 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 20 17:44:11.096219 ignition[770]: kargs: kargs passed
Mar 20 17:44:11.099427 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 20 17:44:11.096263 ignition[770]: Ignition finished successfully
Mar 20 17:44:11.101837 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 20 17:44:11.120267 ignition[779]: Ignition 2.20.0
Mar 20 17:44:11.120278 ignition[779]: Stage: disks
Mar 20 17:44:11.120436 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Mar 20 17:44:11.123075 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 20 17:44:11.120445 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 20 17:44:11.124216 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 20 17:44:11.121319 ignition[779]: disks: disks passed
Mar 20 17:44:11.125873 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 20 17:44:11.121363 ignition[779]: Ignition finished successfully
Mar 20 17:44:11.127874 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 20 17:44:11.129729 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 20 17:44:11.131196 systemd[1]: Reached target basic.target - Basic System.
Mar 20 17:44:11.133724 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 20 17:44:11.163240 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 20 17:44:11.212112 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 20 17:44:11.215258 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 20 17:44:11.269839 kernel: EXT4-fs (vda9): mounted filesystem b7437caf-1938-4bc6-8e3f-9394bb7ad561 r/w with ordered data mode. Quota mode: none.
Mar 20 17:44:11.270239 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 20 17:44:11.271457 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 20 17:44:11.273925 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 20 17:44:11.275675 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 20 17:44:11.276689 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 20 17:44:11.276729 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 20 17:44:11.276753 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 20 17:44:11.289886 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 20 17:44:11.294694 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 20 17:44:11.300143 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (798)
Mar 20 17:44:11.300167 kernel: BTRFS info (device vda6): first mount of filesystem 487c8301-a281-43a7-bff1-8a4858590ad6
Mar 20 17:44:11.300186 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Mar 20 17:44:11.300196 kernel: BTRFS info (device vda6): using free space tree
Mar 20 17:44:11.300205 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 20 17:44:11.302939 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 20 17:44:11.342428 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Mar 20 17:44:11.346868 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Mar 20 17:44:11.349708 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Mar 20 17:44:11.353475 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 20 17:44:11.419692 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 20 17:44:11.422017 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 20 17:44:11.423550 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 20 17:44:11.438844 kernel: BTRFS info (device vda6): last unmount of filesystem 487c8301-a281-43a7-bff1-8a4858590ad6
Mar 20 17:44:11.457027 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 20 17:44:11.478584 ignition[914]: INFO : Ignition 2.20.0
Mar 20 17:44:11.478584 ignition[914]: INFO : Stage: mount
Mar 20 17:44:11.481030 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 20 17:44:11.481030 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 20 17:44:11.481030 ignition[914]: INFO : mount: mount passed
Mar 20 17:44:11.481030 ignition[914]: INFO : Ignition finished successfully
Mar 20 17:44:11.481415 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 20 17:44:11.483712 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 20 17:44:11.985138 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 20 17:44:11.986574 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 20 17:44:12.002792 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/vda6 scanned by mount (925)
Mar 20 17:44:12.002838 kernel: BTRFS info (device vda6): first mount of filesystem 487c8301-a281-43a7-bff1-8a4858590ad6
Mar 20 17:44:12.002849 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Mar 20 17:44:12.004430 kernel: BTRFS info (device vda6): using free space tree
Mar 20 17:44:12.006840 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 20 17:44:12.007839 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 20 17:44:12.032379 ignition[942]: INFO : Ignition 2.20.0
Mar 20 17:44:12.032379 ignition[942]: INFO : Stage: files
Mar 20 17:44:12.033976 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 20 17:44:12.033976 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 20 17:44:12.033976 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Mar 20 17:44:12.037214 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 20 17:44:12.037214 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 20 17:44:12.040701 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 20 17:44:12.042051 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 20 17:44:12.042051 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 20 17:44:12.041205 unknown[942]: wrote ssh authorized keys file for user: core
Mar 20 17:44:12.045860 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 20 17:44:12.045860 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Mar 20 17:44:12.105556 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 20 17:44:12.289231 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 20 17:44:12.289231 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 20 17:44:12.293026 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 20 17:44:12.569997 systemd-networkd[757]: eth0: Gained IPv6LL
Mar 20 17:44:12.652184 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 20 17:44:12.778806 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 20 17:44:12.780743 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 20 17:44:12.780743 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 20 17:44:12.780743 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 20 17:44:12.780743 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 20 17:44:12.780743 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 20 17:44:12.780743 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 20 17:44:12.780743 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 20 17:44:12.780743 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 20 17:44:12.780743 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 20 17:44:12.780743 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 20 17:44:12.780743 ignition[942]:
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 20 17:44:12.780743 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 20 17:44:12.780743 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 20 17:44:12.780743 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Mar 20 17:44:13.063462 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 20 17:44:13.605224 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Mar 20 17:44:13.605224 ignition[942]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 20 17:44:13.608839 ignition[942]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 20 17:44:13.608839 ignition[942]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 20 17:44:13.608839 ignition[942]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 20 17:44:13.608839 ignition[942]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 20 17:44:13.608839 ignition[942]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 20 17:44:13.608839 ignition[942]: INFO : files: op(e): op(f):
[finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 20 17:44:13.608839 ignition[942]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 20 17:44:13.608839 ignition[942]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 20 17:44:13.626694 ignition[942]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 20 17:44:13.630014 ignition[942]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 20 17:44:13.632540 ignition[942]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 20 17:44:13.632540 ignition[942]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 20 17:44:13.632540 ignition[942]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 20 17:44:13.632540 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 20 17:44:13.632540 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 20 17:44:13.632540 ignition[942]: INFO : files: files passed
Mar 20 17:44:13.632540 ignition[942]: INFO : Ignition finished successfully
Mar 20 17:44:13.632899 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 20 17:44:13.637109 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 20 17:44:13.640045 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 20 17:44:13.653895 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 20 17:44:13.653978 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 20 17:44:13.657542 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 20 17:44:13.658955 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 20 17:44:13.658955 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 20 17:44:13.663012 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 20 17:44:13.660084 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 20 17:44:13.661725 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 20 17:44:13.664815 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 20 17:44:13.708726 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 20 17:44:13.708869 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 20 17:44:13.711056 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 20 17:44:13.712861 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 20 17:44:13.714664 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 20 17:44:13.715469 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 20 17:44:13.730194 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 20 17:44:13.732572 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 20 17:44:13.757180 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 20 17:44:13.758439 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 20 17:44:13.760482 systemd[1]: Stopped target timers.target - Timer Units.
Mar 20 17:44:13.762300 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 20 17:44:13.762425 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 20 17:44:13.764931 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 20 17:44:13.766935 systemd[1]: Stopped target basic.target - Basic System.
Mar 20 17:44:13.768631 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 20 17:44:13.770337 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 20 17:44:13.772245 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 20 17:44:13.774176 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 20 17:44:13.775976 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 20 17:44:13.777896 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 20 17:44:13.779926 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 20 17:44:13.781692 systemd[1]: Stopped target swap.target - Swaps.
Mar 20 17:44:13.783224 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 20 17:44:13.783347 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 20 17:44:13.785614 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 20 17:44:13.786783 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 20 17:44:13.788755 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 20 17:44:13.792869 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 20 17:44:13.794098 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 20 17:44:13.794213 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 20 17:44:13.797009 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 20 17:44:13.797131 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 20 17:44:13.799088 systemd[1]: Stopped target paths.target - Path Units.
Mar 20 17:44:13.800634 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 20 17:44:13.805890 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 20 17:44:13.807160 systemd[1]: Stopped target slices.target - Slice Units.
Mar 20 17:44:13.809203 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 20 17:44:13.810734 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 20 17:44:13.810840 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 20 17:44:13.812368 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 20 17:44:13.812444 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 20 17:44:13.813957 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 20 17:44:13.814066 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 20 17:44:13.815861 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 20 17:44:13.815963 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 20 17:44:13.818194 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 20 17:44:13.820039 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 20 17:44:13.820169 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 20 17:44:13.833417 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 20 17:44:13.834306 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 20 17:44:13.834454 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 20 17:44:13.836297 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 20 17:44:13.836407 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 20 17:44:13.842876 ignition[1000]: INFO : Ignition 2.20.0
Mar 20 17:44:13.842876 ignition[1000]: INFO : Stage: umount
Mar 20 17:44:13.842876 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 20 17:44:13.842876 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 20 17:44:13.843295 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 20 17:44:13.850309 ignition[1000]: INFO : umount: umount passed
Mar 20 17:44:13.850309 ignition[1000]: INFO : Ignition finished successfully
Mar 20 17:44:13.843380 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 20 17:44:13.846085 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 20 17:44:13.846201 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 20 17:44:13.848057 systemd[1]: Stopped target network.target - Network.
Mar 20 17:44:13.849512 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 20 17:44:13.849577 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 20 17:44:13.852102 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 20 17:44:13.852152 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 20 17:44:13.853687 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 20 17:44:13.853735 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 20 17:44:13.855430 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 20 17:44:13.855471 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 20 17:44:13.858493 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 20 17:44:13.860206 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 20 17:44:13.864006 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 20 17:44:13.867613 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 20 17:44:13.868499 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 20 17:44:13.871808 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 20 17:44:13.872083 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 20 17:44:13.873878 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 20 17:44:13.876734 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 20 17:44:13.877413 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 20 17:44:13.877468 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 20 17:44:13.879785 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 20 17:44:13.880848 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 20 17:44:13.880915 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 20 17:44:13.882855 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 20 17:44:13.882905 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 20 17:44:13.885653 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 20 17:44:13.885697 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 20 17:44:13.887617 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 20 17:44:13.887661 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 20 17:44:13.890489 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 20 17:44:13.895599 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 20 17:44:13.895662 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 20 17:44:13.911753 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 20 17:44:13.911880 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 20 17:44:13.914241 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 20 17:44:13.914363 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 20 17:44:13.916851 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 20 17:44:13.916921 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 20 17:44:13.919029 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 20 17:44:13.919067 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 20 17:44:13.920763 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 20 17:44:13.920833 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 20 17:44:13.923456 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 20 17:44:13.923505 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 20 17:44:13.925980 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 20 17:44:13.926028 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 20 17:44:13.929415 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 20 17:44:13.930513 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 20 17:44:13.930574 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 20 17:44:13.933525 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 20 17:44:13.933570 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 20 17:44:13.937392 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 20 17:44:13.937446 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 20 17:44:13.940015 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 20 17:44:13.940101 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 20 17:44:13.942335 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 20 17:44:13.942415 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 20 17:44:13.944465 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 20 17:44:13.944551 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 20 17:44:13.946306 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 20 17:44:13.948390 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 20 17:44:13.967817 systemd[1]: Switching root.
Mar 20 17:44:13.999903 systemd-journald[236]: Journal stopped
Mar 20 17:44:14.766897 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Mar 20 17:44:14.766957 kernel: SELinux: policy capability network_peer_controls=1
Mar 20 17:44:14.766970 kernel: SELinux: policy capability open_perms=1
Mar 20 17:44:14.766983 kernel: SELinux: policy capability extended_socket_class=1
Mar 20 17:44:14.766992 kernel: SELinux: policy capability always_check_network=0
Mar 20 17:44:14.767003 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 20 17:44:14.767012 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 20 17:44:14.767022 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 20 17:44:14.767034 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 20 17:44:14.767044 systemd[1]: Successfully loaded SELinux policy in 32.059ms.
Mar 20 17:44:14.767061 kernel: audit: type=1403 audit(1742492654.160:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 20 17:44:14.767072 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.139ms.
Mar 20 17:44:14.767083 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 20 17:44:14.767095 systemd[1]: Detected virtualization kvm.
Mar 20 17:44:14.767105 systemd[1]: Detected architecture arm64.
Mar 20 17:44:14.767116 systemd[1]: Detected first boot.
Mar 20 17:44:14.767126 systemd[1]: Initializing machine ID from VM UUID.
Mar 20 17:44:14.767136 kernel: NET: Registered PF_VSOCK protocol family
Mar 20 17:44:14.767146 zram_generator::config[1047]: No configuration found.
Mar 20 17:44:14.767157 systemd[1]: Populated /etc with preset unit settings.
Mar 20 17:44:14.767167 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 20 17:44:14.767177 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 20 17:44:14.767189 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 20 17:44:14.767199 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 20 17:44:14.767209 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 20 17:44:14.767220 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 20 17:44:14.767229 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 20 17:44:14.767239 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 20 17:44:14.767249 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 20 17:44:14.767260 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 20 17:44:14.767271 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 20 17:44:14.767283 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 20 17:44:14.767293 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 20 17:44:14.767303 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 20 17:44:14.767313 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 20 17:44:14.767323 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 20 17:44:14.767334 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 20 17:44:14.767344 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 20 17:44:14.767354 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 20 17:44:14.767366 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 20 17:44:14.767377 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 20 17:44:14.767390 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 20 17:44:14.767400 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 20 17:44:14.767410 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 20 17:44:14.767420 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 20 17:44:14.767430 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 20 17:44:14.767440 systemd[1]: Reached target slices.target - Slice Units.
Mar 20 17:44:14.767452 systemd[1]: Reached target swap.target - Swaps.
Mar 20 17:44:14.767462 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 20 17:44:14.767472 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 20 17:44:14.767483 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 20 17:44:14.767493 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 20 17:44:14.767504 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 20 17:44:14.767514 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 20 17:44:14.767524 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 20 17:44:14.767537 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 20 17:44:14.767549 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 20 17:44:14.767559 systemd[1]: Mounting media.mount - External Media Directory...
Mar 20 17:44:14.767569 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 20 17:44:14.767579 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 20 17:44:14.767589 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 20 17:44:14.767600 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 20 17:44:14.767610 systemd[1]: Reached target machines.target - Containers.
Mar 20 17:44:14.767620 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 20 17:44:14.767632 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 20 17:44:14.767643 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 20 17:44:14.767653 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 20 17:44:14.767663 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 20 17:44:14.767673 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 20 17:44:14.767684 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 20 17:44:14.767693 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 20 17:44:14.767703 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 20 17:44:14.767714 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 20 17:44:14.767727 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 20 17:44:14.767737 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 20 17:44:14.767747 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 20 17:44:14.767757 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 20 17:44:14.767766 kernel: fuse: init (API version 7.39)
Mar 20 17:44:14.767776 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 20 17:44:14.767793 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 20 17:44:14.767805 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 20 17:44:14.767816 kernel: ACPI: bus type drm_connector registered
Mar 20 17:44:14.767833 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 20 17:44:14.767843 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 20 17:44:14.767853 kernel: loop: module loaded
Mar 20 17:44:14.767862 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 20 17:44:14.767872 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 20 17:44:14.767882 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 20 17:44:14.767892 systemd[1]: Stopped verity-setup.service.
Mar 20 17:44:14.767904 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 20 17:44:14.767915 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 20 17:44:14.767925 systemd[1]: Mounted media.mount - External Media Directory.
Mar 20 17:44:14.767955 systemd-journald[1126]: Collecting audit messages is disabled.
Mar 20 17:44:14.767980 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 20 17:44:14.767995 systemd-journald[1126]: Journal started
Mar 20 17:44:14.768015 systemd-journald[1126]: Runtime Journal (/run/log/journal/ad08d6a931924723b7b95a59fbd47ad7) is 5.9M, max 47.3M, 41.4M free.
Mar 20 17:44:14.547137 systemd[1]: Queued start job for default target multi-user.target.
Mar 20 17:44:14.557742 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 20 17:44:14.558151 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 20 17:44:14.770012 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 20 17:44:14.770629 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 20 17:44:14.772176 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 20 17:44:14.773479 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 20 17:44:14.775002 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 20 17:44:14.776682 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 20 17:44:14.776883 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 20 17:44:14.778318 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 20 17:44:14.778513 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 20 17:44:14.779933 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 20 17:44:14.780091 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 20 17:44:14.781505 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 20 17:44:14.781679 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 20 17:44:14.783153 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 20 17:44:14.783310 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 20 17:44:14.784666 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 20 17:44:14.784884 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 20 17:44:14.786496 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 20 17:44:14.787957 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 20 17:44:14.789502 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 20 17:44:14.791238 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 20 17:44:14.804392 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 20 17:44:14.806995 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 20 17:44:14.809154 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 20 17:44:14.810387 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 20 17:44:14.810425 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 20 17:44:14.812348 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 20 17:44:14.817980 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 20 17:44:14.820285 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 20 17:44:14.821490 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 20 17:44:14.823121 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 20 17:44:14.825178 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 20 17:44:14.826464 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 20 17:44:14.829662 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 20 17:44:14.830925 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 20 17:44:14.835296 systemd-journald[1126]: Time spent on flushing to /var/log/journal/ad08d6a931924723b7b95a59fbd47ad7 is 26.128ms for 869 entries.
Mar 20 17:44:14.835296 systemd-journald[1126]: System Journal (/var/log/journal/ad08d6a931924723b7b95a59fbd47ad7) is 8M, max 195.6M, 187.6M free.
Mar 20 17:44:14.883111 systemd-journald[1126]: Received client request to flush runtime journal.
Mar 20 17:44:14.883166 kernel: loop0: detected capacity change from 0 to 103832
Mar 20 17:44:14.835406 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 20 17:44:14.841077 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 20 17:44:14.847092 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 20 17:44:14.851861 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 20 17:44:14.868111 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 20 17:44:14.872118 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 20 17:44:14.874503 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 20 17:44:14.876206 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 20 17:44:14.878920 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 20 17:44:14.886393 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 20 17:44:14.889945 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 20 17:44:14.894011 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 20 17:44:14.897864 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 20 17:44:14.899540 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 20 17:44:14.910886 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 20 17:44:14.914575 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 20 17:44:14.919060 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 20 17:44:14.929599 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 20 17:44:14.931906 kernel: loop1: detected capacity change from 0 to 126448
Mar 20 17:44:14.947845 systemd-tmpfiles[1182]: ACLs are not supported, ignoring.
Mar 20 17:44:14.947867 systemd-tmpfiles[1182]: ACLs are not supported, ignoring.
Mar 20 17:44:14.953330 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 20 17:44:14.978874 kernel: loop2: detected capacity change from 0 to 189592
Mar 20 17:44:15.009887 kernel: loop3: detected capacity change from 0 to 103832
Mar 20 17:44:15.014858 kernel: loop4: detected capacity change from 0 to 126448
Mar 20 17:44:15.021714 kernel: loop5: detected capacity change from 0 to 189592
Mar 20 17:44:15.025213 (sd-merge)[1189]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 20 17:44:15.025609 (sd-merge)[1189]: Merged extensions into '/usr'.
Mar 20 17:44:15.028727 systemd[1]: Reload requested from client PID 1164 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 20 17:44:15.028747 systemd[1]: Reloading...
Mar 20 17:44:15.081846 zram_generator::config[1214]: No configuration found.
Mar 20 17:44:15.120814 ldconfig[1159]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 20 17:44:15.172571 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 20 17:44:15.221243 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 20 17:44:15.221664 systemd[1]: Reloading finished in 192 ms.
Mar 20 17:44:15.238873 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 20 17:44:15.241854 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 20 17:44:15.255251 systemd[1]: Starting ensure-sysext.service...
Mar 20 17:44:15.257098 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 20 17:44:15.268979 systemd[1]: Reload requested from client PID 1251 ('systemctl') (unit ensure-sysext.service)...
Mar 20 17:44:15.269003 systemd[1]: Reloading...
Mar 20 17:44:15.311281 zram_generator::config[1278]: No configuration found.
Mar 20 17:44:15.345979 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 20 17:44:15.346177 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 20 17:44:15.346865 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 20 17:44:15.347084 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
Mar 20 17:44:15.347133 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
Mar 20 17:44:15.350026 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot.
Mar 20 17:44:15.350035 systemd-tmpfiles[1252]: Skipping /boot
Mar 20 17:44:15.358351 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot.
Mar 20 17:44:15.358368 systemd-tmpfiles[1252]: Skipping /boot
Mar 20 17:44:15.396607 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 20 17:44:15.446078 systemd[1]: Reloading finished in 176 ms.
Mar 20 17:44:15.454436 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 20 17:44:15.455985 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 20 17:44:15.478032 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 20 17:44:15.480259 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 20 17:44:15.487697 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 20 17:44:15.493035 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 20 17:44:15.495546 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 20 17:44:15.499124 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 20 17:44:15.516853 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 20 17:44:15.521054 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 20 17:44:15.522482 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 20 17:44:15.528564 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 20 17:44:15.533035 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 20 17:44:15.534217 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 20 17:44:15.534368 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 20 17:44:15.535923 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 20 17:44:15.542051 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 20 17:44:15.542393 systemd-udevd[1322]: Using default interface naming scheme 'v255'.
Mar 20 17:44:15.544709 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 20 17:44:15.544941 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 20 17:44:15.546574 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 20 17:44:15.546724 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 20 17:44:15.548666 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 20 17:44:15.548853 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 20 17:44:15.550564 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 20 17:44:15.552409 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 20 17:44:15.562063 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 20 17:44:15.564641 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 20 17:44:15.570028 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 20 17:44:15.576579 augenrules[1368]: No rules
Mar 20 17:44:15.579637 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 20 17:44:15.580725 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 20 17:44:15.580924 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 20 17:44:15.581057 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 20 17:44:15.582315 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 20 17:44:15.584366 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 20 17:44:15.585865 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 20 17:44:15.587481 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 20 17:44:15.587626 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 20 17:44:15.589432 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 20 17:44:15.589574 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 20 17:44:15.591267 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 20 17:44:15.591408 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 20 17:44:15.596485 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 20 17:44:15.603392 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 20 17:44:15.607299 systemd[1]: Finished ensure-sysext.service.
Mar 20 17:44:15.620905 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 20 17:44:15.622246 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 20 17:44:15.624845 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 20 17:44:15.637589 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 20 17:44:15.639567 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 20 17:44:15.643644 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 20 17:44:15.644907 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 20 17:44:15.644951 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 20 17:44:15.647148 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 20 17:44:15.650120 augenrules[1390]: /sbin/augenrules: No change
Mar 20 17:44:15.654131 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 20 17:44:15.660278 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 20 17:44:15.664984 augenrules[1419]: No rules
Mar 20 17:44:15.666846 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1357)
Mar 20 17:44:15.671848 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 20 17:44:15.673930 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 20 17:44:15.675655 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 20 17:44:15.675859 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 20 17:44:15.677363 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 20 17:44:15.677522 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 20 17:44:15.679094 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 20 17:44:15.679248 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 20 17:44:15.680732 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 20 17:44:15.680905 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 20 17:44:15.690995 systemd-resolved[1321]: Positive Trust Anchors:
Mar 20 17:44:15.691154 systemd-resolved[1321]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 20 17:44:15.691186 systemd-resolved[1321]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 20 17:44:15.696418 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 20 17:44:15.697485 systemd-resolved[1321]: Defaulting to hostname 'linux'.
Mar 20 17:44:15.699328 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 20 17:44:15.710411 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 20 17:44:15.711774 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 20 17:44:15.714181 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 20 17:44:15.715348 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 20 17:44:15.715416 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 20 17:44:15.741330 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 20 17:44:15.763502 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 20 17:44:15.765450 systemd[1]: Reached target time-set.target - System Time Set.
Mar 20 17:44:15.778610 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 20 17:44:15.794162 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 20 17:44:15.794509 systemd-networkd[1410]: lo: Link UP Mar 20 17:44:15.794730 systemd-networkd[1410]: lo: Gained carrier Mar 20 17:44:15.795679 systemd-networkd[1410]: Enumeration completed Mar 20 17:44:15.796268 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 17:44:15.796352 systemd-networkd[1410]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 20 17:44:15.797124 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 20 17:44:15.799998 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 20 17:44:15.801047 systemd-networkd[1410]: eth0: Link UP Mar 20 17:44:15.801112 systemd-networkd[1410]: eth0: Gained carrier Mar 20 17:44:15.801177 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 20 17:44:15.802740 systemd[1]: Reached target network.target - Network. Mar 20 17:44:15.805322 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 20 17:44:15.808534 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 20 17:44:15.818903 systemd-networkd[1410]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 20 17:44:15.822130 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Mar 20 17:44:15.828148 systemd-timesyncd[1414]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 20 17:44:15.828238 systemd-timesyncd[1414]: Initial clock synchronization to Thu 2025-03-20 17:44:15.912393 UTC. 
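The DHCPv4 lease logged by systemd-networkd above (address 10.0.0.10/16, gateway 10.0.0.1) can be sanity-checked with the standard-library `ipaddress` module; this is an annotation, not something the boot performs:

```python
import ipaddress

# Values taken verbatim from the systemd-networkd log line above.
iface = ipaddress.ip_interface("10.0.0.10/16")
gateway = ipaddress.ip_address("10.0.0.1")

network = iface.network          # the on-link prefix, 10.0.0.0/16
assert gateway in network        # the gateway must be directly reachable
print(network.with_netmask, network.num_addresses)
```

A /16 prefix puts 65,536 addresses on-link, which is consistent with the gateway, the NTP server, and the SSH client at 10.0.0.1 seen elsewhere in this log all being reachable without routing.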
Mar 20 17:44:15.837415 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 20 17:44:15.839586 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 20 17:44:15.847896 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 20 17:44:15.872383 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 20 17:44:15.873914 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 20 17:44:15.875017 systemd[1]: Reached target sysinit.target - System Initialization. Mar 20 17:44:15.876138 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 20 17:44:15.877353 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 20 17:44:15.878764 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 20 17:44:15.880096 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 20 17:44:15.881312 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 20 17:44:15.882526 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 20 17:44:15.882561 systemd[1]: Reached target paths.target - Path Units. Mar 20 17:44:15.883472 systemd[1]: Reached target timers.target - Timer Units. Mar 20 17:44:15.885251 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 20 17:44:15.887592 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 20 17:44:15.890702 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 20 17:44:15.892179 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
Mar 20 17:44:15.893455 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 20 17:44:15.898693 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 20 17:44:15.900131 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 20 17:44:15.902381 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 20 17:44:15.904004 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 20 17:44:15.905187 systemd[1]: Reached target sockets.target - Socket Units. Mar 20 17:44:15.906164 systemd[1]: Reached target basic.target - Basic System. Mar 20 17:44:15.907162 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 20 17:44:15.907194 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 20 17:44:15.908056 systemd[1]: Starting containerd.service - containerd container runtime... Mar 20 17:44:15.909631 lvm[1452]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 20 17:44:15.910961 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 20 17:44:15.912969 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 20 17:44:15.917952 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 20 17:44:15.919056 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 20 17:44:15.919994 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 20 17:44:15.921903 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 20 17:44:15.921995 jq[1455]: false Mar 20 17:44:15.924956 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Mar 20 17:44:15.926991 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 20 17:44:15.930473 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 20 17:44:15.932476 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 20 17:44:15.932932 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 20 17:44:15.933497 systemd[1]: Starting update-engine.service - Update Engine... Mar 20 17:44:15.935815 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 20 17:44:15.939208 extend-filesystems[1456]: Found loop3 Mar 20 17:44:15.939208 extend-filesystems[1456]: Found loop4 Mar 20 17:44:15.939208 extend-filesystems[1456]: Found loop5 Mar 20 17:44:15.939208 extend-filesystems[1456]: Found vda Mar 20 17:44:15.939208 extend-filesystems[1456]: Found vda1 Mar 20 17:44:15.939208 extend-filesystems[1456]: Found vda2 Mar 20 17:44:15.939208 extend-filesystems[1456]: Found vda3 Mar 20 17:44:15.939208 extend-filesystems[1456]: Found usr Mar 20 17:44:15.939208 extend-filesystems[1456]: Found vda4 Mar 20 17:44:15.939208 extend-filesystems[1456]: Found vda6 Mar 20 17:44:15.939208 extend-filesystems[1456]: Found vda7 Mar 20 17:44:15.939208 extend-filesystems[1456]: Found vda9 Mar 20 17:44:15.939208 extend-filesystems[1456]: Checking size of /dev/vda9 Mar 20 17:44:15.954754 dbus-daemon[1454]: [system] SELinux support is enabled Mar 20 17:44:15.961018 extend-filesystems[1456]: Resized partition /dev/vda9 Mar 20 17:44:15.939687 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 20 17:44:15.963192 jq[1468]: true Mar 20 17:44:15.942596 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Mar 20 17:44:15.942768 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 20 17:44:15.944595 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 20 17:44:15.963550 jq[1478]: true Mar 20 17:44:15.944751 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 20 17:44:15.960190 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 20 17:44:15.964682 systemd[1]: motdgen.service: Deactivated successfully. Mar 20 17:44:15.964923 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 20 17:44:15.968460 extend-filesystems[1479]: resize2fs 1.47.2 (1-Jan-2025) Mar 20 17:44:15.986269 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 20 17:44:15.986299 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1369) Mar 20 17:44:15.986311 update_engine[1465]: I20250320 17:44:15.972229 1465 main.cc:92] Flatcar Update Engine starting Mar 20 17:44:15.986311 update_engine[1465]: I20250320 17:44:15.974332 1465 update_check_scheduler.cc:74] Next update check in 8m9s Mar 20 17:44:15.974741 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 20 17:44:15.974766 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 20 17:44:15.978933 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 20 17:44:15.978950 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 20 17:44:15.986778 systemd[1]: Started update-engine.service - Update Engine. 
Mar 20 17:44:15.988517 (ntainerd)[1491]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 20 17:44:15.994911 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 20 17:44:16.008793 tar[1475]: linux-arm64/helm Mar 20 17:44:16.013131 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 20 17:44:16.029565 extend-filesystems[1479]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 20 17:44:16.029565 extend-filesystems[1479]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 20 17:44:16.029565 extend-filesystems[1479]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 20 17:44:16.033174 extend-filesystems[1456]: Resized filesystem in /dev/vda9 Mar 20 17:44:16.035440 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 20 17:44:16.035648 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 20 17:44:16.050324 bash[1509]: Updated "/home/core/.ssh/authorized_keys" Mar 20 17:44:16.054398 systemd-logind[1463]: Watching system buttons on /dev/input/event0 (Power Button) Mar 20 17:44:16.054467 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 20 17:44:16.055512 systemd-logind[1463]: New seat seat0. Mar 20 17:44:16.058514 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 20 17:44:16.059046 systemd[1]: Started systemd-logind.service - User Login Management. 
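The ext4 online resize logged above grows /dev/vda9 from 553472 to 1864699 blocks of 4k each. A quick back-of-envelope conversion of those figures (annotation only):

```python
BLOCK_SIZE = 4096  # "(4k) blocks" per the EXT4-fs and resize2fs messages above

old_blocks, new_blocks = 553_472, 1_864_699   # from the log
old_bytes = old_blocks * BLOCK_SIZE
new_bytes = new_blocks * BLOCK_SIZE

GiB = 1024 ** 3
print(f"{old_bytes / GiB:.2f} GiB -> {new_bytes / GiB:.2f} GiB")
```

So the root filesystem grows from roughly 2.11 GiB to roughly 7.11 GiB, a typical first-boot expansion to fill the provisioned disk.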
Mar 20 17:44:16.067216 locksmithd[1495]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 20 17:44:16.227727 containerd[1491]: time="2025-03-20T17:44:16Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 20 17:44:16.230801 containerd[1491]: time="2025-03-20T17:44:16.230761417Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 Mar 20 17:44:16.243524 containerd[1491]: time="2025-03-20T17:44:16.243327055Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.635µs" Mar 20 17:44:16.243524 containerd[1491]: time="2025-03-20T17:44:16.243370925Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 20 17:44:16.243524 containerd[1491]: time="2025-03-20T17:44:16.243394047Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 20 17:44:16.243524 containerd[1491]: time="2025-03-20T17:44:16.243526984Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 20 17:44:16.243663 containerd[1491]: time="2025-03-20T17:44:16.243545038Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 20 17:44:16.243663 containerd[1491]: time="2025-03-20T17:44:16.243570532Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 20 17:44:16.243663 containerd[1491]: time="2025-03-20T17:44:16.243618986Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 20 17:44:16.243663 containerd[1491]: time="2025-03-20T17:44:16.243630084Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 20 17:44:16.243989 containerd[1491]: time="2025-03-20T17:44:16.243966327Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 20 17:44:16.243989 containerd[1491]: time="2025-03-20T17:44:16.243986674Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 20 17:44:16.244040 containerd[1491]: time="2025-03-20T17:44:16.243998415Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 20 17:44:16.244040 containerd[1491]: time="2025-03-20T17:44:16.244006699Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 20 17:44:16.244105 containerd[1491]: time="2025-03-20T17:44:16.244090257Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 20 17:44:16.244298 containerd[1491]: time="2025-03-20T17:44:16.244275347Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 20 17:44:16.244329 containerd[1491]: time="2025-03-20T17:44:16.244311054Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 20 17:44:16.244329 containerd[1491]: time="2025-03-20T17:44:16.244324002Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 20 17:44:16.244381 containerd[1491]: time="2025-03-20T17:44:16.244359468Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 20 17:44:16.244619 containerd[1491]: time="2025-03-20T17:44:16.244590640Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 20 17:44:16.244659 containerd[1491]: time="2025-03-20T17:44:16.244652122Z" level=info msg="metadata content store policy set" policy=shared Mar 20 17:44:16.251848 containerd[1491]: time="2025-03-20T17:44:16.251779489Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 20 17:44:16.251848 containerd[1491]: time="2025-03-20T17:44:16.251845837Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 20 17:44:16.251947 containerd[1491]: time="2025-03-20T17:44:16.251862726Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 20 17:44:16.251947 containerd[1491]: time="2025-03-20T17:44:16.251875593Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 20 17:44:16.251947 containerd[1491]: time="2025-03-20T17:44:16.251887455Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 20 17:44:16.251947 containerd[1491]: time="2025-03-20T17:44:16.251897910Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 20 17:44:16.251947 containerd[1491]: time="2025-03-20T17:44:16.251909933Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 20 17:44:16.251947 containerd[1491]: time="2025-03-20T17:44:16.251921795Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 20 17:44:16.251947 containerd[1491]: time="2025-03-20T17:44:16.251932652Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 20 17:44:16.251947 containerd[1491]: time="2025-03-20T17:44:16.251943388Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 20 17:44:16.252083 containerd[1491]: time="2025-03-20T17:44:16.251952918Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 20 17:44:16.252083 containerd[1491]: time="2025-03-20T17:44:16.251975436Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 20 17:44:16.252126 containerd[1491]: time="2025-03-20T17:44:16.252085694Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 20 17:44:16.252126 containerd[1491]: time="2025-03-20T17:44:16.252119552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 20 17:44:16.252159 containerd[1491]: time="2025-03-20T17:44:16.252134913Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 20 17:44:16.252159 containerd[1491]: time="2025-03-20T17:44:16.252146091Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 20 17:44:16.252159 containerd[1491]: time="2025-03-20T17:44:16.252156747Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 20 17:44:16.252209 containerd[1491]: time="2025-03-20T17:44:16.252167202Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 20 17:44:16.252209 containerd[1491]: time="2025-03-20T17:44:16.252178984Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 20 17:44:16.252209 containerd[1491]: time="2025-03-20T17:44:16.252195229Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 20 
17:44:16.252209 containerd[1491]: time="2025-03-20T17:44:16.252208378Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 20 17:44:16.252278 containerd[1491]: time="2025-03-20T17:44:16.252219918Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 20 17:44:16.252278 containerd[1491]: time="2025-03-20T17:44:16.252230333Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 20 17:44:16.252506 containerd[1491]: time="2025-03-20T17:44:16.252480323Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 20 17:44:16.252506 containerd[1491]: time="2025-03-20T17:44:16.252503766Z" level=info msg="Start snapshots syncer" Mar 20 17:44:16.252778 containerd[1491]: time="2025-03-20T17:44:16.252533241Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 20 17:44:16.252778 containerd[1491]: time="2025-03-20T17:44:16.252750982Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 20 17:44:16.253000 containerd[1491]: time="2025-03-20T17:44:16.252795214Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 20 17:44:16.253000 containerd[1491]: time="2025-03-20T17:44:16.252886091Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 20 17:44:16.253000 containerd[1491]: time="2025-03-20T17:44:16.252986095Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 20 17:44:16.253000 containerd[1491]: time="2025-03-20T17:44:16.253007165Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 20 17:44:16.253000 containerd[1491]: time="2025-03-20T17:44:16.253019148Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 20 17:44:16.253000 containerd[1491]: time="2025-03-20T17:44:16.253029161Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 20 17:44:16.253000 containerd[1491]: time="2025-03-20T17:44:16.253041666Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 20 17:44:16.253000 containerd[1491]: time="2025-03-20T17:44:16.253051840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 20 17:44:16.253000 containerd[1491]: time="2025-03-20T17:44:16.253062214Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 20 17:44:16.253313 containerd[1491]: time="2025-03-20T17:44:16.253094302Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 20 17:44:16.253313 containerd[1491]: time="2025-03-20T17:44:16.253126994Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 20 17:44:16.253313 containerd[1491]: time="2025-03-20T17:44:16.253139178Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 20 17:44:16.253677 containerd[1491]: time="2025-03-20T17:44:16.253651865Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 20 17:44:16.253777 containerd[1491]: time="2025-03-20T17:44:16.253681863Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 20 17:44:16.253777 containerd[1491]: time="2025-03-20T17:44:16.253691755Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 20 17:44:16.253777 containerd[1491]: time="2025-03-20T17:44:16.253701486Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 20 17:44:16.253777 containerd[1491]: time="2025-03-20T17:44:16.253709045Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 20 17:44:16.253777 containerd[1491]: time="2025-03-20T17:44:16.253718977Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 20 17:44:16.253777 containerd[1491]: time="2025-03-20T17:44:16.253731000Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 20 17:44:16.254339 containerd[1491]: time="2025-03-20T17:44:16.253807883Z" level=info msg="runtime interface created" Mar 20 17:44:16.254339 containerd[1491]: time="2025-03-20T17:44:16.253813513Z" level=info msg="created NRI interface" Mar 20 17:44:16.254339 containerd[1491]: time="2025-03-20T17:44:16.253821595Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 20 17:44:16.254339 containerd[1491]: time="2025-03-20T17:44:16.253863133Z" level=info msg="Connect containerd service" Mar 20 17:44:16.254339 containerd[1491]: time="2025-03-20T17:44:16.253894859Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 20 17:44:16.254850 
containerd[1491]: time="2025-03-20T17:44:16.254791480Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 20 17:44:16.370165 tar[1475]: linux-arm64/LICENSE Mar 20 17:44:16.370165 tar[1475]: linux-arm64/README.md Mar 20 17:44:16.377329 containerd[1491]: time="2025-03-20T17:44:16.376926761Z" level=info msg="Start subscribing containerd event" Mar 20 17:44:16.377329 containerd[1491]: time="2025-03-20T17:44:16.376962267Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 20 17:44:16.377329 containerd[1491]: time="2025-03-20T17:44:16.376986715Z" level=info msg="Start recovering state" Mar 20 17:44:16.377329 containerd[1491]: time="2025-03-20T17:44:16.377007987Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 20 17:44:16.377329 containerd[1491]: time="2025-03-20T17:44:16.377076305Z" level=info msg="Start event monitor" Mar 20 17:44:16.377329 containerd[1491]: time="2025-03-20T17:44:16.377091786Z" level=info msg="Start cni network conf syncer for default" Mar 20 17:44:16.377329 containerd[1491]: time="2025-03-20T17:44:16.377099426Z" level=info msg="Start streaming server" Mar 20 17:44:16.377329 containerd[1491]: time="2025-03-20T17:44:16.377108111Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 20 17:44:16.377329 containerd[1491]: time="2025-03-20T17:44:16.377115269Z" level=info msg="runtime interface starting up..." Mar 20 17:44:16.377329 containerd[1491]: time="2025-03-20T17:44:16.377120738Z" level=info msg="starting plugins..." 
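The long `config="{...}"` payload on the "starting cri plugin" line above is one flat JSON object with the quotes escaped for logging. As an illustration, here is a hand-abbreviated excerpt of it (only these few keys, with values as they appear in the log; the full object is much larger) decoded with `json.loads`:

```python
import json

# Hand-abbreviated excerpt of the escaped config="..." payload logged above;
# the key names and values shown are taken from that line.
excerpt = (
    '{"containerd":{"defaultRuntimeName":"runc","runtimes":{"runc":'
    '{"runtimeType":"io.containerd.runc.v2",'
    '"options":{"SystemdCgroup":true}}}},"enableCDI":true}'
)

cfg = json.loads(excerpt)
runc = cfg["containerd"]["runtimes"]["runc"]
print(runc["runtimeType"], runc["options"]["SystemdCgroup"])
```

Decoding it this way makes the operative settings easy to read off: the default runtime is runc via the `io.containerd.runc.v2` shim with the systemd cgroup driver enabled, matching the cgroup-related messages elsewhere in this boot.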
Mar 20 17:44:16.377329 containerd[1491]: time="2025-03-20T17:44:16.377134289Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 20 17:44:16.377329 containerd[1491]: time="2025-03-20T17:44:16.377253232Z" level=info msg="containerd successfully booted in 0.151231s" Mar 20 17:44:16.385935 systemd[1]: Started containerd.service - containerd container runtime. Mar 20 17:44:16.390858 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 20 17:44:16.739682 sshd_keygen[1490]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 20 17:44:16.758644 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 20 17:44:16.761374 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 20 17:44:16.781143 systemd[1]: issuegen.service: Deactivated successfully. Mar 20 17:44:16.781347 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 20 17:44:16.783821 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 20 17:44:16.803337 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 20 17:44:16.806093 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 20 17:44:16.808121 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 20 17:44:16.809471 systemd[1]: Reached target getty.target - Login Prompts. Mar 20 17:44:17.306147 systemd-networkd[1410]: eth0: Gained IPv6LL Mar 20 17:44:17.308866 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 20 17:44:17.311165 systemd[1]: Reached target network-online.target - Network is Online. Mar 20 17:44:17.313769 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 20 17:44:17.329230 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 20 17:44:17.331509 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Mar 20 17:44:17.347024 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 20 17:44:17.347934 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 20 17:44:17.349604 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 20 17:44:17.350968 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 20 17:44:17.854782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 20 17:44:17.856290 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 20 17:44:17.859414 (kubelet)[1582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 20 17:44:17.861221 systemd[1]: Startup finished in 602ms (kernel) + 5.463s (initrd) + 3.736s (userspace) = 9.802s. Mar 20 17:44:18.290712 kubelet[1582]: E0320 17:44:18.290594 1582 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 20 17:44:18.293406 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 20 17:44:18.293550 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 20 17:44:18.293870 systemd[1]: kubelet.service: Consumed 784ms CPU time, 235.5M memory peak. Mar 20 17:44:21.601269 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 20 17:44:21.602430 systemd[1]: Started sshd@0-10.0.0.10:22-10.0.0.1:49590.service - OpenSSH per-connection server daemon (10.0.0.1:49590). 
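The "Startup finished" line above reports 602ms (kernel) + 5.463s (initrd) + 3.736s (userspace) = 9.802s. Summing the displayed components (annotation only):

```python
# Phase timings from the "Startup finished" log line above, in seconds.
kernel, initrd, userspace = 0.602, 5.463, 3.736

total = kernel + initrd + userspace
print(f"{total:.3f}s")
```

This yields 9.801s against the logged 9.802s; the one-millisecond gap is presumably because systemd sums the unrounded microsecond timings before rounding each figure for display.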
Mar 20 17:44:21.678858 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 49590 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o Mar 20 17:44:21.680766 sshd-session[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:44:21.690310 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 20 17:44:21.691510 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 20 17:44:21.696951 systemd-logind[1463]: New session 1 of user core. Mar 20 17:44:21.715897 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 20 17:44:21.718819 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 20 17:44:21.737148 (systemd)[1599]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 20 17:44:21.739412 systemd-logind[1463]: New session c1 of user core. Mar 20 17:44:21.845022 systemd[1599]: Queued start job for default target default.target. Mar 20 17:44:21.853847 systemd[1599]: Created slice app.slice - User Application Slice. Mar 20 17:44:21.853878 systemd[1599]: Reached target paths.target - Paths. Mar 20 17:44:21.853918 systemd[1599]: Reached target timers.target - Timers. Mar 20 17:44:21.855309 systemd[1599]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 20 17:44:21.867752 systemd[1599]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 20 17:44:21.867898 systemd[1599]: Reached target sockets.target - Sockets. Mar 20 17:44:21.867939 systemd[1599]: Reached target basic.target - Basic System. Mar 20 17:44:21.867972 systemd[1599]: Reached target default.target - Main User Target. Mar 20 17:44:21.867999 systemd[1599]: Startup finished in 122ms. Mar 20 17:44:21.868308 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 20 17:44:21.869686 systemd[1]: Started session-1.scope - Session 1 of User core. 
Mar 20 17:44:21.929481 systemd[1]: Started sshd@1-10.0.0.10:22-10.0.0.1:49594.service - OpenSSH per-connection server daemon (10.0.0.1:49594).
Mar 20 17:44:21.997280 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 49594 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 17:44:21.998560 sshd-session[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 17:44:22.003902 systemd-logind[1463]: New session 2 of user core.
Mar 20 17:44:22.011029 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 20 17:44:22.061867 sshd[1612]: Connection closed by 10.0.0.1 port 49594
Mar 20 17:44:22.062526 sshd-session[1610]: pam_unix(sshd:session): session closed for user core
Mar 20 17:44:22.076255 systemd[1]: sshd@1-10.0.0.10:22-10.0.0.1:49594.service: Deactivated successfully.
Mar 20 17:44:22.077851 systemd[1]: session-2.scope: Deactivated successfully.
Mar 20 17:44:22.078512 systemd-logind[1463]: Session 2 logged out. Waiting for processes to exit.
Mar 20 17:44:22.080362 systemd[1]: Started sshd@2-10.0.0.10:22-10.0.0.1:49604.service - OpenSSH per-connection server daemon (10.0.0.1:49604).
Mar 20 17:44:22.082062 systemd-logind[1463]: Removed session 2.
Mar 20 17:44:22.132949 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 49604 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 17:44:22.134128 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 17:44:22.137882 systemd-logind[1463]: New session 3 of user core.
Mar 20 17:44:22.150022 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 20 17:44:22.197029 sshd[1620]: Connection closed by 10.0.0.1 port 49604
Mar 20 17:44:22.197494 sshd-session[1617]: pam_unix(sshd:session): session closed for user core
Mar 20 17:44:22.211844 systemd[1]: sshd@2-10.0.0.10:22-10.0.0.1:49604.service: Deactivated successfully.
Mar 20 17:44:22.217017 systemd[1]: session-3.scope: Deactivated successfully.
Mar 20 17:44:22.217605 systemd-logind[1463]: Session 3 logged out. Waiting for processes to exit.
Mar 20 17:44:22.219317 systemd[1]: Started sshd@3-10.0.0.10:22-10.0.0.1:49614.service - OpenSSH per-connection server daemon (10.0.0.1:49614).
Mar 20 17:44:22.220162 systemd-logind[1463]: Removed session 3.
Mar 20 17:44:22.280414 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 49614 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 17:44:22.281517 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 17:44:22.285741 systemd-logind[1463]: New session 4 of user core.
Mar 20 17:44:22.294984 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 20 17:44:22.346903 sshd[1628]: Connection closed by 10.0.0.1 port 49614
Mar 20 17:44:22.347371 sshd-session[1625]: pam_unix(sshd:session): session closed for user core
Mar 20 17:44:22.361378 systemd[1]: sshd@3-10.0.0.10:22-10.0.0.1:49614.service: Deactivated successfully.
Mar 20 17:44:22.364218 systemd[1]: session-4.scope: Deactivated successfully.
Mar 20 17:44:22.364799 systemd-logind[1463]: Session 4 logged out. Waiting for processes to exit.
Mar 20 17:44:22.366500 systemd[1]: Started sshd@4-10.0.0.10:22-10.0.0.1:49616.service - OpenSSH per-connection server daemon (10.0.0.1:49616).
Mar 20 17:44:22.367342 systemd-logind[1463]: Removed session 4.
Mar 20 17:44:22.422317 sshd[1633]: Accepted publickey for core from 10.0.0.1 port 49616 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 17:44:22.423577 sshd-session[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 17:44:22.430969 systemd-logind[1463]: New session 5 of user core.
Mar 20 17:44:22.446978 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 20 17:44:22.507569 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 20 17:44:22.507868 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 20 17:44:22.521847 sudo[1637]: pam_unix(sudo:session): session closed for user root
Mar 20 17:44:22.524508 sshd[1636]: Connection closed by 10.0.0.1 port 49616
Mar 20 17:44:22.523839 sshd-session[1633]: pam_unix(sshd:session): session closed for user core
Mar 20 17:44:22.533134 systemd[1]: sshd@4-10.0.0.10:22-10.0.0.1:49616.service: Deactivated successfully.
Mar 20 17:44:22.535175 systemd[1]: session-5.scope: Deactivated successfully.
Mar 20 17:44:22.535837 systemd-logind[1463]: Session 5 logged out. Waiting for processes to exit.
Mar 20 17:44:22.537708 systemd[1]: Started sshd@5-10.0.0.10:22-10.0.0.1:49440.service - OpenSSH per-connection server daemon (10.0.0.1:49440).
Mar 20 17:44:22.538439 systemd-logind[1463]: Removed session 5.
Mar 20 17:44:22.603491 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 49440 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 17:44:22.604692 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 17:44:22.608380 systemd-logind[1463]: New session 6 of user core.
Mar 20 17:44:22.615981 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 20 17:44:22.667036 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 20 17:44:22.667317 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 20 17:44:22.670356 sudo[1647]: pam_unix(sudo:session): session closed for user root
Mar 20 17:44:22.674725 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 20 17:44:22.675921 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 20 17:44:22.683886 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 20 17:44:22.717040 augenrules[1669]: No rules
Mar 20 17:44:22.718357 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 20 17:44:22.718558 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 20 17:44:22.719436 sudo[1646]: pam_unix(sudo:session): session closed for user root
Mar 20 17:44:22.720994 sshd[1645]: Connection closed by 10.0.0.1 port 49440
Mar 20 17:44:22.721375 sshd-session[1642]: pam_unix(sshd:session): session closed for user core
Mar 20 17:44:22.727863 systemd[1]: sshd@5-10.0.0.10:22-10.0.0.1:49440.service: Deactivated successfully.
Mar 20 17:44:22.729189 systemd[1]: session-6.scope: Deactivated successfully.
Mar 20 17:44:22.730076 systemd-logind[1463]: Session 6 logged out. Waiting for processes to exit.
Mar 20 17:44:22.731626 systemd[1]: Started sshd@6-10.0.0.10:22-10.0.0.1:49444.service - OpenSSH per-connection server daemon (10.0.0.1:49444).
Mar 20 17:44:22.732493 systemd-logind[1463]: Removed session 6.
Mar 20 17:44:22.783385 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 49444 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 17:44:22.784659 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 17:44:22.788880 systemd-logind[1463]: New session 7 of user core.
Mar 20 17:44:22.794982 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 20 17:44:22.846429 sudo[1681]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 20 17:44:22.846707 sudo[1681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 20 17:44:23.185010 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 20 17:44:23.198151 (dockerd)[1702]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 20 17:44:23.460866 dockerd[1702]: time="2025-03-20T17:44:23.460719345Z" level=info msg="Starting up"
Mar 20 17:44:23.463505 dockerd[1702]: time="2025-03-20T17:44:23.463477807Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 20 17:44:23.599641 dockerd[1702]: time="2025-03-20T17:44:23.599599833Z" level=info msg="Loading containers: start."
Mar 20 17:44:23.753886 kernel: Initializing XFRM netlink socket
Mar 20 17:44:23.814537 systemd-networkd[1410]: docker0: Link UP
Mar 20 17:44:23.876191 dockerd[1702]: time="2025-03-20T17:44:23.876087071Z" level=info msg="Loading containers: done."
Mar 20 17:44:23.892861 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2657642631-merged.mount: Deactivated successfully.
Mar 20 17:44:23.895376 dockerd[1702]: time="2025-03-20T17:44:23.894811674Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 20 17:44:23.895376 dockerd[1702]: time="2025-03-20T17:44:23.894930952Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1
Mar 20 17:44:23.895376 dockerd[1702]: time="2025-03-20T17:44:23.895223450Z" level=info msg="Daemon has completed initialization"
Mar 20 17:44:23.929088 dockerd[1702]: time="2025-03-20T17:44:23.929037945Z" level=info msg="API listen on /run/docker.sock"
Mar 20 17:44:23.929194 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 20 17:44:24.800902 containerd[1491]: time="2025-03-20T17:44:24.800815153Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\""
Mar 20 17:44:25.382191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3743249392.mount: Deactivated successfully.
Mar 20 17:44:26.517866 containerd[1491]: time="2025-03-20T17:44:26.517804333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 17:44:26.518905 containerd[1491]: time="2025-03-20T17:44:26.518855631Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.7: active requests=0, bytes read=25552768"
Mar 20 17:44:26.519944 containerd[1491]: time="2025-03-20T17:44:26.519595430Z" level=info msg="ImageCreate event name:\"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 17:44:26.522446 containerd[1491]: time="2025-03-20T17:44:26.522412917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 17:44:26.523897 containerd[1491]: time="2025-03-20T17:44:26.523867366Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.7\" with image id \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\", size \"25549566\" in 1.722983079s"
Mar 20 17:44:26.523966 containerd[1491]: time="2025-03-20T17:44:26.523901861Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\""
Mar 20 17:44:26.524559 containerd[1491]: time="2025-03-20T17:44:26.524532840Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\""
Mar 20 17:44:27.990892 containerd[1491]: time="2025-03-20T17:44:27.990847612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 17:44:27.991785 containerd[1491]: time="2025-03-20T17:44:27.991259116Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.7: active requests=0, bytes read=22458980"
Mar 20 17:44:27.992407 containerd[1491]: time="2025-03-20T17:44:27.992374198Z" level=info msg="ImageCreate event name:\"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 17:44:27.995400 containerd[1491]: time="2025-03-20T17:44:27.995353338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 17:44:27.996344 containerd[1491]: time="2025-03-20T17:44:27.996208147Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.7\" with image id \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\", size \"23899774\" in 1.471639811s"
Mar 20 17:44:27.996344 containerd[1491]: time="2025-03-20T17:44:27.996240230Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\""
Mar 20 17:44:27.997279 containerd[1491]: time="2025-03-20T17:44:27.996778541Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\""
Mar 20 17:44:28.319889 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 20 17:44:28.321295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 20 17:44:28.429076 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 20 17:44:28.432254 (kubelet)[1975]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 20 17:44:28.464712 kubelet[1975]: E0320 17:44:28.464660 1975 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 20 17:44:28.467939 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 20 17:44:28.468196 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 20 17:44:28.468666 systemd[1]: kubelet.service: Consumed 128ms CPU time, 96.9M memory peak.
Mar 20 17:44:29.434880 containerd[1491]: time="2025-03-20T17:44:29.434128859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 17:44:29.435211 containerd[1491]: time="2025-03-20T17:44:29.434894599Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.7: active requests=0, bytes read=17125831"
Mar 20 17:44:29.435407 containerd[1491]: time="2025-03-20T17:44:29.435362543Z" level=info msg="ImageCreate event name:\"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 17:44:29.438082 containerd[1491]: time="2025-03-20T17:44:29.438037341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 17:44:29.439880 containerd[1491]: time="2025-03-20T17:44:29.439842002Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.7\" with image id \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\", size \"18566643\" in 1.442702771s"
Mar 20 17:44:29.439927 containerd[1491]: time="2025-03-20T17:44:29.439880970Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\""
Mar 20 17:44:29.440313 containerd[1491]: time="2025-03-20T17:44:29.440279877Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\""
Mar 20 17:44:30.521069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1474330412.mount: Deactivated successfully.
Mar 20 17:44:30.740950 containerd[1491]: time="2025-03-20T17:44:30.740709138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 17:44:30.743843 containerd[1491]: time="2025-03-20T17:44:30.741375999Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=26871917"
Mar 20 17:44:30.743843 containerd[1491]: time="2025-03-20T17:44:30.742631113Z" level=info msg="ImageCreate event name:\"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 17:44:30.745659 containerd[1491]: time="2025-03-20T17:44:30.745590659Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"26870934\" in 1.305275424s"
Mar 20 17:44:30.745659 containerd[1491]: time="2025-03-20T17:44:30.745639283Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\""
Mar 20 17:44:30.745954 containerd[1491]: time="2025-03-20T17:44:30.745926615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 17:44:30.746298 containerd[1491]: time="2025-03-20T17:44:30.746272832Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 20 17:44:31.439178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1246408357.mount: Deactivated successfully.
Mar 20 17:44:32.233236 containerd[1491]: time="2025-03-20T17:44:32.233036400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 17:44:32.234156 containerd[1491]: time="2025-03-20T17:44:32.233978645Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Mar 20 17:44:32.234988 containerd[1491]: time="2025-03-20T17:44:32.234911031Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 17:44:32.237930 containerd[1491]: time="2025-03-20T17:44:32.237888007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 17:44:32.239016 containerd[1491]: time="2025-03-20T17:44:32.238989831Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.492680965s"
Mar 20 17:44:32.239016 containerd[1491]: time="2025-03-20T17:44:32.239020008Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Mar 20 17:44:32.239487 containerd[1491]: time="2025-03-20T17:44:32.239450534Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 20 17:44:32.702515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1877021597.mount: Deactivated successfully.
Mar 20 17:44:32.706284 containerd[1491]: time="2025-03-20T17:44:32.706232432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 20 17:44:32.706768 containerd[1491]: time="2025-03-20T17:44:32.706715978Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Mar 20 17:44:32.707663 containerd[1491]: time="2025-03-20T17:44:32.707628487Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 20 17:44:32.709542 containerd[1491]: time="2025-03-20T17:44:32.709512496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 20 17:44:32.710290 containerd[1491]: time="2025-03-20T17:44:32.710228117Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 470.554084ms"
Mar 20 17:44:32.710290 containerd[1491]: time="2025-03-20T17:44:32.710264545Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Mar 20 17:44:32.710750 containerd[1491]: time="2025-03-20T17:44:32.710650949Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Mar 20 17:44:33.239934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1356095657.mount: Deactivated successfully.
Mar 20 17:44:35.359310 containerd[1491]: time="2025-03-20T17:44:35.359234799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 17:44:35.359878 containerd[1491]: time="2025-03-20T17:44:35.359815896Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427"
Mar 20 17:44:35.360539 containerd[1491]: time="2025-03-20T17:44:35.360512491Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 17:44:35.363368 containerd[1491]: time="2025-03-20T17:44:35.363321869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 17:44:35.364541 containerd[1491]: time="2025-03-20T17:44:35.364492837Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.653805061s"
Mar 20 17:44:35.364541 containerd[1491]: time="2025-03-20T17:44:35.364531937Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Mar 20 17:44:38.570452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 20 17:44:38.571961 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 20 17:44:38.676349 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 20 17:44:38.679975 (kubelet)[2127]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 20 17:44:38.714666 kubelet[2127]: E0320 17:44:38.714619 2127 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 20 17:44:38.717244 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 20 17:44:38.717394 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 20 17:44:38.719032 systemd[1]: kubelet.service: Consumed 125ms CPU time, 94.8M memory peak.
Mar 20 17:44:40.379015 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 20 17:44:40.379172 systemd[1]: kubelet.service: Consumed 125ms CPU time, 94.8M memory peak.
Mar 20 17:44:40.381225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 20 17:44:40.404273 systemd[1]: Reload requested from client PID 2142 ('systemctl') (unit session-7.scope)...
Mar 20 17:44:40.404290 systemd[1]: Reloading...
Mar 20 17:44:40.469584 zram_generator::config[2184]: No configuration found.
Mar 20 17:44:40.588072 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 20 17:44:40.659030 systemd[1]: Reloading finished in 254 ms.
Mar 20 17:44:40.704101 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 20 17:44:40.704168 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 20 17:44:40.704371 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 20 17:44:40.704412 systemd[1]: kubelet.service: Consumed 85ms CPU time, 82.4M memory peak.
Mar 20 17:44:40.706374 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 20 17:44:40.804115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 20 17:44:40.808033 (kubelet)[2232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 20 17:44:40.842824 kubelet[2232]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 20 17:44:40.842824 kubelet[2232]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 20 17:44:40.842824 kubelet[2232]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 20 17:44:40.843167 kubelet[2232]: I0320 17:44:40.843021 2232 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 20 17:44:41.868857 kubelet[2232]: I0320 17:44:41.868279 2232 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Mar 20 17:44:41.868857 kubelet[2232]: I0320 17:44:41.868317 2232 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 20 17:44:41.868857 kubelet[2232]: I0320 17:44:41.868566 2232 server.go:929] "Client rotation is on, will bootstrap in background"
Mar 20 17:44:41.925281 kubelet[2232]: E0320 17:44:41.925195 2232 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError"
Mar 20 17:44:41.926431 kubelet[2232]: I0320 17:44:41.926359 2232 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 20 17:44:41.935371 kubelet[2232]: I0320 17:44:41.935342 2232 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 20 17:44:41.938990 kubelet[2232]: I0320 17:44:41.938962 2232 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 20 17:44:41.939285 kubelet[2232]: I0320 17:44:41.939272 2232 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 20 17:44:41.939413 kubelet[2232]: I0320 17:44:41.939386 2232 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 20 17:44:41.939591 kubelet[2232]: I0320 17:44:41.939414 2232 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 20 17:44:41.939774 kubelet[2232]: I0320 17:44:41.939763 2232 topology_manager.go:138] "Creating topology manager with none policy"
Mar 20 17:44:41.939774 kubelet[2232]: I0320 17:44:41.939774 2232 container_manager_linux.go:300] "Creating device plugin manager"
Mar 20 17:44:41.940104 kubelet[2232]: I0320 17:44:41.940089 2232 state_mem.go:36] "Initialized new in-memory state store"
Mar 20 17:44:41.941586 kubelet[2232]: I0320 17:44:41.941554 2232 kubelet.go:408] "Attempting to sync node with API server"
Mar 20 17:44:41.941586 kubelet[2232]: I0320 17:44:41.941582 2232 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 20 17:44:41.941652 kubelet[2232]: I0320 17:44:41.941610 2232 kubelet.go:314] "Adding apiserver pod source"
Mar 20 17:44:41.941652 kubelet[2232]: I0320 17:44:41.941622 2232 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 20 17:44:41.950984 kubelet[2232]: W0320 17:44:41.948160 2232 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused
Mar 20 17:44:41.950984 kubelet[2232]: E0320 17:44:41.948236 2232 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError"
Mar 20 17:44:41.950984 kubelet[2232]: I0320 17:44:41.949485 2232 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
Mar 20 17:44:41.951653 kubelet[2232]: I0320 17:44:41.951624 2232 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 20 17:44:41.951800 kubelet[2232]: W0320 17:44:41.951725 2232 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused
Mar 20 17:44:41.951848 kubelet[2232]: E0320 17:44:41.951814 2232 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError"
Mar 20 17:44:41.952573 kubelet[2232]: W0320 17:44:41.952537 2232 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 20 17:44:41.953558 kubelet[2232]: I0320 17:44:41.953425 2232 server.go:1269] "Started kubelet"
Mar 20 17:44:41.954460 kubelet[2232]: I0320 17:44:41.953882 2232 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 20 17:44:41.954601 kubelet[2232]: I0320 17:44:41.954550 2232 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 20 17:44:41.954932 kubelet[2232]: I0320 17:44:41.954911 2232 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 20 17:44:41.955466 kubelet[2232]: I0320 17:44:41.955441 2232 server.go:460] "Adding debug handlers to kubelet server"
Mar 20 17:44:41.958476 kubelet[2232]: I0320 17:44:41.958435 2232 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 20 17:44:41.958617 kubelet[2232]: I0320 17:44:41.958594 2232 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 20 17:44:41.959190 kubelet[2232]: E0320 17:44:41.958143 2232 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.10:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.10:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182e93dcb38cbdfc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-20 17:44:41.953394172 +0000 UTC m=+1.142267664,LastTimestamp:2025-03-20 17:44:41.953394172 +0000 UTC m=+1.142267664,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 20 17:44:41.959470 kubelet[2232]: I0320 17:44:41.959438 2232 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 20 17:44:41.959625 kubelet[2232]: I0320 17:44:41.959607 2232 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 20 17:44:41.959709 kubelet[2232]: I0320 17:44:41.959693 2232 reconciler.go:26] "Reconciler: start to sync state"
Mar 20 17:44:41.960186 kubelet[2232]: W0320 17:44:41.960133 2232 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused
Mar 20 17:44:41.960243 kubelet[2232]: E0320 17:44:41.960195 2232 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError"
Mar 20 17:44:41.960585 kubelet[2232]: E0320 17:44:41.960560 2232 kubelet_node_status.go:453]
"Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 17:44:41.960585 kubelet[2232]: E0320 17:44:41.960558 2232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="200ms" Mar 20 17:44:41.962447 kubelet[2232]: I0320 17:44:41.962416 2232 factory.go:221] Registration of the systemd container factory successfully Mar 20 17:44:41.962726 kubelet[2232]: I0320 17:44:41.962692 2232 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 20 17:44:41.963612 kubelet[2232]: E0320 17:44:41.963546 2232 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 20 17:44:41.966647 kubelet[2232]: I0320 17:44:41.965478 2232 factory.go:221] Registration of the containerd container factory successfully Mar 20 17:44:41.975545 kubelet[2232]: I0320 17:44:41.975422 2232 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 20 17:44:41.976676 kubelet[2232]: I0320 17:44:41.976638 2232 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 20 17:44:41.976676 kubelet[2232]: I0320 17:44:41.976666 2232 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 20 17:44:41.976770 kubelet[2232]: I0320 17:44:41.976687 2232 kubelet.go:2321] "Starting kubelet main sync loop" Mar 20 17:44:41.976770 kubelet[2232]: E0320 17:44:41.976729 2232 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 20 17:44:41.980188 kubelet[2232]: I0320 17:44:41.980153 2232 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 20 17:44:41.980188 kubelet[2232]: I0320 17:44:41.980185 2232 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 20 17:44:41.980316 kubelet[2232]: I0320 17:44:41.980205 2232 state_mem.go:36] "Initialized new in-memory state store" Mar 20 17:44:41.980388 kubelet[2232]: W0320 17:44:41.980340 2232 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Mar 20 17:44:41.980414 kubelet[2232]: E0320 17:44:41.980400 2232 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Mar 20 17:44:42.048444 kubelet[2232]: I0320 17:44:42.048403 2232 policy_none.go:49] "None policy: Start" Mar 20 17:44:42.049178 kubelet[2232]: I0320 17:44:42.049157 2232 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 20 17:44:42.049271 kubelet[2232]: I0320 17:44:42.049189 2232 state_mem.go:35] "Initializing new in-memory state store" Mar 20 17:44:42.057428 systemd[1]: Created slice kubepods.slice - libcontainer container 
kubepods.slice. Mar 20 17:44:42.061474 kubelet[2232]: E0320 17:44:42.061441 2232 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 20 17:44:42.068297 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 20 17:44:42.071516 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 20 17:44:42.077317 kubelet[2232]: E0320 17:44:42.077289 2232 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 20 17:44:42.088004 kubelet[2232]: I0320 17:44:42.087843 2232 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 20 17:44:42.088106 kubelet[2232]: I0320 17:44:42.088059 2232 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 20 17:44:42.088106 kubelet[2232]: I0320 17:44:42.088071 2232 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 20 17:44:42.088531 kubelet[2232]: I0320 17:44:42.088314 2232 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 20 17:44:42.090515 kubelet[2232]: E0320 17:44:42.090338 2232 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 20 17:44:42.161129 kubelet[2232]: E0320 17:44:42.161007 2232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="400ms" Mar 20 17:44:42.190288 kubelet[2232]: I0320 17:44:42.190250 2232 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 20 17:44:42.190774 kubelet[2232]: E0320 17:44:42.190737 2232 kubelet_node_status.go:95] "Unable to register 
node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Mar 20 17:44:42.286922 systemd[1]: Created slice kubepods-burstable-pode43381377e5926128534bc3845d1c841.slice - libcontainer container kubepods-burstable-pode43381377e5926128534bc3845d1c841.slice. Mar 20 17:44:42.302643 systemd[1]: Created slice kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice - libcontainer container kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice. Mar 20 17:44:42.316528 systemd[1]: Created slice kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice - libcontainer container kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice. Mar 20 17:44:42.361259 kubelet[2232]: I0320 17:44:42.361219 2232 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e43381377e5926128534bc3845d1c841-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e43381377e5926128534bc3845d1c841\") " pod="kube-system/kube-apiserver-localhost" Mar 20 17:44:42.361259 kubelet[2232]: I0320 17:44:42.361257 2232 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e43381377e5926128534bc3845d1c841-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e43381377e5926128534bc3845d1c841\") " pod="kube-system/kube-apiserver-localhost" Mar 20 17:44:42.361259 kubelet[2232]: I0320 17:44:42.361278 2232 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 17:44:42.361436 kubelet[2232]: 
I0320 17:44:42.361293 2232 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost" Mar 20 17:44:42.361436 kubelet[2232]: I0320 17:44:42.361309 2232 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e43381377e5926128534bc3845d1c841-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e43381377e5926128534bc3845d1c841\") " pod="kube-system/kube-apiserver-localhost" Mar 20 17:44:42.361436 kubelet[2232]: I0320 17:44:42.361323 2232 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 17:44:42.361436 kubelet[2232]: I0320 17:44:42.361337 2232 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 17:44:42.361436 kubelet[2232]: I0320 17:44:42.361351 2232 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 17:44:42.361545 kubelet[2232]: I0320 17:44:42.361368 2232 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 17:44:42.392368 kubelet[2232]: I0320 17:44:42.392327 2232 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 20 17:44:42.392668 kubelet[2232]: E0320 17:44:42.392643 2232 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Mar 20 17:44:42.562324 kubelet[2232]: E0320 17:44:42.562202 2232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="800ms" Mar 20 17:44:42.601220 containerd[1491]: time="2025-03-20T17:44:42.601170766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e43381377e5926128534bc3845d1c841,Namespace:kube-system,Attempt:0,}" Mar 20 17:44:42.605781 containerd[1491]: time="2025-03-20T17:44:42.605736095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,}" Mar 20 17:44:42.620363 containerd[1491]: time="2025-03-20T17:44:42.620319834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,}" Mar 20 17:44:42.624020 containerd[1491]: time="2025-03-20T17:44:42.623981675Z" level=info msg="connecting to shim 90a476e0f5cebeb0a6d05254877af9ba017f17455a3b5fef55ad0ac1705ec980" 
address="unix:///run/containerd/s/03e1442c017ae8d8875af664f30407baa7fc1901b1bdcb47d75d1f3c29cdd7af" namespace=k8s.io protocol=ttrpc version=3 Mar 20 17:44:42.632647 containerd[1491]: time="2025-03-20T17:44:42.632598988Z" level=info msg="connecting to shim f7b2bae3df4ed0385a0c1b632a4470a283346993ff40308480541bab8942a4d8" address="unix:///run/containerd/s/74b18ee01912f273c286c202cda4304070431941573dbaf698974e8b74d5c512" namespace=k8s.io protocol=ttrpc version=3 Mar 20 17:44:42.650629 containerd[1491]: time="2025-03-20T17:44:42.650151566Z" level=info msg="connecting to shim d1e8df3dcf1985fce1c1d3188361744e3372f4c02a12dab57db8cf1695833bb5" address="unix:///run/containerd/s/aacd22d16536e5499a007d1a149d5540b056f17682d009847ae1b746a86f2e07" namespace=k8s.io protocol=ttrpc version=3 Mar 20 17:44:42.653300 systemd[1]: Started cri-containerd-90a476e0f5cebeb0a6d05254877af9ba017f17455a3b5fef55ad0ac1705ec980.scope - libcontainer container 90a476e0f5cebeb0a6d05254877af9ba017f17455a3b5fef55ad0ac1705ec980. Mar 20 17:44:42.660802 systemd[1]: Started cri-containerd-f7b2bae3df4ed0385a0c1b632a4470a283346993ff40308480541bab8942a4d8.scope - libcontainer container f7b2bae3df4ed0385a0c1b632a4470a283346993ff40308480541bab8942a4d8. Mar 20 17:44:42.682066 systemd[1]: Started cri-containerd-d1e8df3dcf1985fce1c1d3188361744e3372f4c02a12dab57db8cf1695833bb5.scope - libcontainer container d1e8df3dcf1985fce1c1d3188361744e3372f4c02a12dab57db8cf1695833bb5. 
Mar 20 17:44:42.698317 containerd[1491]: time="2025-03-20T17:44:42.698245735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e43381377e5926128534bc3845d1c841,Namespace:kube-system,Attempt:0,} returns sandbox id \"90a476e0f5cebeb0a6d05254877af9ba017f17455a3b5fef55ad0ac1705ec980\""
Mar 20 17:44:42.703403 containerd[1491]: time="2025-03-20T17:44:42.703333578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7b2bae3df4ed0385a0c1b632a4470a283346993ff40308480541bab8942a4d8\""
Mar 20 17:44:42.704556 containerd[1491]: time="2025-03-20T17:44:42.704522547Z" level=info msg="CreateContainer within sandbox \"90a476e0f5cebeb0a6d05254877af9ba017f17455a3b5fef55ad0ac1705ec980\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 20 17:44:42.706904 containerd[1491]: time="2025-03-20T17:44:42.706438431Z" level=info msg="CreateContainer within sandbox \"f7b2bae3df4ed0385a0c1b632a4470a283346993ff40308480541bab8942a4d8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 20 17:44:42.715883 containerd[1491]: time="2025-03-20T17:44:42.715839154Z" level=info msg="Container 8aeff1fc40f24659e455c82413a06cd897be2bf2d8d5f77c3e063acf17137e86: CDI devices from CRI Config.CDIDevices: []"
Mar 20 17:44:42.716390 containerd[1491]: time="2025-03-20T17:44:42.716359466Z" level=info msg="Container 4f0e5e90d35419095a286210aeb9f2350b4cc161d433ba0912f0eae15c613f42: CDI devices from CRI Config.CDIDevices: []"
Mar 20 17:44:42.724600 containerd[1491]: time="2025-03-20T17:44:42.724544754Z" level=info msg="CreateContainer within sandbox \"f7b2bae3df4ed0385a0c1b632a4470a283346993ff40308480541bab8942a4d8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4f0e5e90d35419095a286210aeb9f2350b4cc161d433ba0912f0eae15c613f42\""
Mar 20 17:44:42.725386 containerd[1491]: time="2025-03-20T17:44:42.725354230Z" level=info msg="StartContainer for \"4f0e5e90d35419095a286210aeb9f2350b4cc161d433ba0912f0eae15c613f42\""
Mar 20 17:44:42.726036 containerd[1491]: time="2025-03-20T17:44:42.726009274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1e8df3dcf1985fce1c1d3188361744e3372f4c02a12dab57db8cf1695833bb5\""
Mar 20 17:44:42.726810 containerd[1491]: time="2025-03-20T17:44:42.726775908Z" level=info msg="connecting to shim 4f0e5e90d35419095a286210aeb9f2350b4cc161d433ba0912f0eae15c613f42" address="unix:///run/containerd/s/74b18ee01912f273c286c202cda4304070431941573dbaf698974e8b74d5c512" protocol=ttrpc version=3
Mar 20 17:44:42.729022 containerd[1491]: time="2025-03-20T17:44:42.728981476Z" level=info msg="CreateContainer within sandbox \"90a476e0f5cebeb0a6d05254877af9ba017f17455a3b5fef55ad0ac1705ec980\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8aeff1fc40f24659e455c82413a06cd897be2bf2d8d5f77c3e063acf17137e86\""
Mar 20 17:44:42.729357 containerd[1491]: time="2025-03-20T17:44:42.729334984Z" level=info msg="StartContainer for \"8aeff1fc40f24659e455c82413a06cd897be2bf2d8d5f77c3e063acf17137e86\""
Mar 20 17:44:42.729408 containerd[1491]: time="2025-03-20T17:44:42.729373822Z" level=info msg="CreateContainer within sandbox \"d1e8df3dcf1985fce1c1d3188361744e3372f4c02a12dab57db8cf1695833bb5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 20 17:44:42.730441 containerd[1491]: time="2025-03-20T17:44:42.730410481Z" level=info msg="connecting to shim 8aeff1fc40f24659e455c82413a06cd897be2bf2d8d5f77c3e063acf17137e86" address="unix:///run/containerd/s/03e1442c017ae8d8875af664f30407baa7fc1901b1bdcb47d75d1f3c29cdd7af" protocol=ttrpc version=3
Mar 20 17:44:42.736521 containerd[1491]: time="2025-03-20T17:44:42.736408339Z" level=info msg="Container 22e62da222b2f769bd2534cfc77963d0e87ac31acadecbe4e6e09f1f01da13b8: CDI devices from CRI Config.CDIDevices: []"
Mar 20 17:44:42.744560 containerd[1491]: time="2025-03-20T17:44:42.744509184Z" level=info msg="CreateContainer within sandbox \"d1e8df3dcf1985fce1c1d3188361744e3372f4c02a12dab57db8cf1695833bb5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"22e62da222b2f769bd2534cfc77963d0e87ac31acadecbe4e6e09f1f01da13b8\""
Mar 20 17:44:42.745096 containerd[1491]: time="2025-03-20T17:44:42.745060686Z" level=info msg="StartContainer for \"22e62da222b2f769bd2534cfc77963d0e87ac31acadecbe4e6e09f1f01da13b8\""
Mar 20 17:44:42.745991 systemd[1]: Started cri-containerd-4f0e5e90d35419095a286210aeb9f2350b4cc161d433ba0912f0eae15c613f42.scope - libcontainer container 4f0e5e90d35419095a286210aeb9f2350b4cc161d433ba0912f0eae15c613f42.
Mar 20 17:44:42.747369 containerd[1491]: time="2025-03-20T17:44:42.747341649Z" level=info msg="connecting to shim 22e62da222b2f769bd2534cfc77963d0e87ac31acadecbe4e6e09f1f01da13b8" address="unix:///run/containerd/s/aacd22d16536e5499a007d1a149d5540b056f17682d009847ae1b746a86f2e07" protocol=ttrpc version=3
Mar 20 17:44:42.748923 systemd[1]: Started cri-containerd-8aeff1fc40f24659e455c82413a06cd897be2bf2d8d5f77c3e063acf17137e86.scope - libcontainer container 8aeff1fc40f24659e455c82413a06cd897be2bf2d8d5f77c3e063acf17137e86.
Mar 20 17:44:42.772011 systemd[1]: Started cri-containerd-22e62da222b2f769bd2534cfc77963d0e87ac31acadecbe4e6e09f1f01da13b8.scope - libcontainer container 22e62da222b2f769bd2534cfc77963d0e87ac31acadecbe4e6e09f1f01da13b8.
Mar 20 17:44:42.791008 containerd[1491]: time="2025-03-20T17:44:42.790857956Z" level=info msg="StartContainer for \"4f0e5e90d35419095a286210aeb9f2350b4cc161d433ba0912f0eae15c613f42\" returns successfully"
Mar 20 17:44:42.794826 kubelet[2232]: I0320 17:44:42.794793 2232 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Mar 20 17:44:42.795212 kubelet[2232]: E0320 17:44:42.795179 2232 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost"
Mar 20 17:44:42.806571 containerd[1491]: time="2025-03-20T17:44:42.806530046Z" level=info msg="StartContainer for \"8aeff1fc40f24659e455c82413a06cd897be2bf2d8d5f77c3e063acf17137e86\" returns successfully"
Mar 20 17:44:42.834441 containerd[1491]: time="2025-03-20T17:44:42.834360570Z" level=info msg="StartContainer for \"22e62da222b2f769bd2534cfc77963d0e87ac31acadecbe4e6e09f1f01da13b8\" returns successfully"
Mar 20 17:44:43.597255 kubelet[2232]: I0320 17:44:43.597175 2232 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Mar 20 17:44:44.809031 kubelet[2232]: E0320 17:44:44.808995 2232 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 20 17:44:44.887683 kubelet[2232]: I0320 17:44:44.887651 2232 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Mar 20 17:44:44.949493 kubelet[2232]: I0320 17:44:44.949448 2232 apiserver.go:52] "Watching apiserver"
Mar 20 17:44:44.962874 kubelet[2232]: I0320 17:44:44.960124 2232 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 20 17:44:46.758550 systemd[1]: Reload requested from client PID 2507 ('systemctl') (unit session-7.scope)...
Mar 20 17:44:46.758566 systemd[1]: Reloading...
Mar 20 17:44:46.832908 zram_generator::config[2555]: No configuration found.
Mar 20 17:44:46.909870 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 20 17:44:46.999309 systemd[1]: Reloading finished in 240 ms.
Mar 20 17:44:47.021214 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 20 17:44:47.037215 systemd[1]: kubelet.service: Deactivated successfully.
Mar 20 17:44:47.037423 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 20 17:44:47.037478 systemd[1]: kubelet.service: Consumed 1.559s CPU time, 117.8M memory peak.
Mar 20 17:44:47.039801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 20 17:44:47.168554 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 20 17:44:47.172035 (kubelet)[2593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 20 17:44:47.207366 kubelet[2593]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 20 17:44:47.207366 kubelet[2593]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 20 17:44:47.207366 kubelet[2593]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 20 17:44:47.207721 kubelet[2593]: I0320 17:44:47.207408 2593 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 20 17:44:47.219898 kubelet[2593]: I0320 17:44:47.219860 2593 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Mar 20 17:44:47.219898 kubelet[2593]: I0320 17:44:47.219892 2593 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 20 17:44:47.220156 kubelet[2593]: I0320 17:44:47.220137 2593 server.go:929] "Client rotation is on, will bootstrap in background"
Mar 20 17:44:47.221562 kubelet[2593]: I0320 17:44:47.221533 2593 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 20 17:44:47.223726 kubelet[2593]: I0320 17:44:47.223603 2593 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 20 17:44:47.227574 kubelet[2593]: I0320 17:44:47.227543 2593 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 20 17:44:47.230153 kubelet[2593]: I0320 17:44:47.230122 2593 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 20 17:44:47.230294 kubelet[2593]: I0320 17:44:47.230267 2593 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 20 17:44:47.230398 kubelet[2593]: I0320 17:44:47.230371 2593 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 20 17:44:47.230581 kubelet[2593]: I0320 17:44:47.230401 2593 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 20 17:44:47.230669 kubelet[2593]: I0320 17:44:47.230589 2593 topology_manager.go:138] "Creating topology manager with none policy"
Mar 20 17:44:47.230669 kubelet[2593]: I0320 17:44:47.230603 2593 container_manager_linux.go:300] "Creating device plugin manager"
Mar 20 17:44:47.230669 kubelet[2593]: I0320 17:44:47.230637 2593 state_mem.go:36] "Initialized new in-memory state store"
Mar 20 17:44:47.230752 kubelet[2593]: I0320 17:44:47.230739 2593 kubelet.go:408] "Attempting to sync node with API server"
Mar 20 17:44:47.230776 kubelet[2593]: I0320 17:44:47.230755 2593 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 20 17:44:47.230797 kubelet[2593]: I0320 17:44:47.230776 2593 kubelet.go:314] "Adding apiserver pod source"
Mar 20 17:44:47.230797 kubelet[2593]: I0320 17:44:47.230785 2593 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 20 17:44:47.231939 kubelet[2593]: I0320 17:44:47.231911 2593 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
Mar 20 17:44:47.232477 kubelet[2593]: I0320 17:44:47.232438 2593 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 20 17:44:47.234774 kubelet[2593]: I0320 17:44:47.232945 2593 server.go:1269] "Started kubelet"
Mar 20 17:44:47.234774 kubelet[2593]: I0320 17:44:47.233211 2593 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 20 17:44:47.234774 kubelet[2593]: I0320 17:44:47.234384 2593 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 20 17:44:47.234774 kubelet[2593]: I0320 17:44:47.234397 2593 server.go:460] "Adding debug handlers to kubelet server"
Mar 20 17:44:47.236783 kubelet[2593]: I0320 17:44:47.236682 2593 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 20 17:44:47.238280 kubelet[2593]: I0320 17:44:47.237389 2593 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 20 17:44:47.238792 kubelet[2593]: E0320 17:44:47.238482 2593 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 20 17:44:47.239514 kubelet[2593]: I0320 17:44:47.239480 2593 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 20 17:44:47.239700 kubelet[2593]: I0320 17:44:47.239664 2593 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 20 17:44:47.239977 kubelet[2593]: I0320 17:44:47.239953 2593 reconciler.go:26] "Reconciler: start to sync state"
Mar 20 17:44:47.240704 kubelet[2593]: I0320 17:44:47.240649 2593 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 20 17:44:47.241665 kubelet[2593]: I0320 17:44:47.241629 2593 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 20 17:44:47.242251 kubelet[2593]: E0320 17:44:47.242229 2593 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 20 17:44:47.242470 kubelet[2593]: I0320 17:44:47.242428 2593 factory.go:221] Registration of the containerd container factory successfully
Mar 20 17:44:47.242470 kubelet[2593]: I0320 17:44:47.242448 2593 factory.go:221] Registration of the systemd container factory successfully
Mar 20 17:44:47.250774 kubelet[2593]: I0320 17:44:47.250727 2593 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 20 17:44:47.251656 kubelet[2593]: I0320 17:44:47.251557 2593 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 20 17:44:47.251656 kubelet[2593]: I0320 17:44:47.251578 2593 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 20 17:44:47.251656 kubelet[2593]: I0320 17:44:47.251599 2593 kubelet.go:2321] "Starting kubelet main sync loop"
Mar 20 17:44:47.251656 kubelet[2593]: E0320 17:44:47.251639 2593 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 20 17:44:47.297858 kubelet[2593]: I0320 17:44:47.296931 2593 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 20 17:44:47.297858 kubelet[2593]: I0320 17:44:47.296952 2593 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 20 17:44:47.297858 kubelet[2593]: I0320 17:44:47.296974 2593 state_mem.go:36] "Initialized new in-memory state store"
Mar 20 17:44:47.298056 kubelet[2593]: I0320 17:44:47.298028 2593 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 20 17:44:47.298090 kubelet[2593]: I0320 17:44:47.298050 2593 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 20 17:44:47.298090 kubelet[2593]: I0320 17:44:47.298070 2593 policy_none.go:49] "None policy: Start"
Mar 20 17:44:47.298782 kubelet[2593]: I0320 17:44:47.298762 2593 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 20 17:44:47.298782 kubelet[2593]: I0320 17:44:47.298788 2593 state_mem.go:35] "Initializing new in-memory state store"
Mar 20 17:44:47.299008 kubelet[2593]: I0320 17:44:47.298992 2593 state_mem.go:75] "Updated machine memory state"
Mar 20 17:44:47.302734 kubelet[2593]: I0320 17:44:47.302681 2593 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 20 17:44:47.302987 kubelet[2593]: I0320 17:44:47.302895 2593 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 20 17:44:47.302987 kubelet[2593]: I0320 17:44:47.302911 2593 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 20 17:44:47.303103 kubelet[2593]: I0320 17:44:47.303081 2593 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 20 17:44:47.405517 kubelet[2593]: I0320 17:44:47.405187 2593 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Mar 20 17:44:47.410796 kubelet[2593]: I0320 17:44:47.410765 2593 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Mar 20 17:44:47.410907 kubelet[2593]: I0320 17:44:47.410865 2593 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Mar 20 17:44:47.541533 kubelet[2593]: I0320 17:44:47.541489 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 20 17:44:47.541533 kubelet[2593]: I0320 17:44:47.541533 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 20 17:44:47.541688 kubelet[2593]: I0320 17:44:47.541554 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost"
Mar 20 17:44:47.541688 kubelet[2593]: I0320 17:44:47.541572 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e43381377e5926128534bc3845d1c841-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e43381377e5926128534bc3845d1c841\") " pod="kube-system/kube-apiserver-localhost" Mar 20 17:44:47.541688 kubelet[2593]: I0320 17:44:47.541588 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 17:44:47.541688 kubelet[2593]: I0320 17:44:47.541604 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 17:44:47.541688 kubelet[2593]: I0320 17:44:47.541617 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 20 17:44:47.541803 kubelet[2593]: I0320 17:44:47.541634 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e43381377e5926128534bc3845d1c841-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e43381377e5926128534bc3845d1c841\") " pod="kube-system/kube-apiserver-localhost" Mar 20 17:44:47.541803 kubelet[2593]: I0320 17:44:47.541680 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/e43381377e5926128534bc3845d1c841-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e43381377e5926128534bc3845d1c841\") " pod="kube-system/kube-apiserver-localhost" Mar 20 17:44:47.766294 sudo[2625]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 20 17:44:47.766586 sudo[2625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 20 17:44:48.211452 sudo[2625]: pam_unix(sudo:session): session closed for user root Mar 20 17:44:48.232263 kubelet[2593]: I0320 17:44:48.231940 2593 apiserver.go:52] "Watching apiserver" Mar 20 17:44:48.239840 kubelet[2593]: I0320 17:44:48.239800 2593 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 20 17:44:48.304941 kubelet[2593]: I0320 17:44:48.304711 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.304692124 podStartE2EDuration="1.304692124s" podCreationTimestamp="2025-03-20 17:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 17:44:48.304030001 +0000 UTC m=+1.129288620" watchObservedRunningTime="2025-03-20 17:44:48.304692124 +0000 UTC m=+1.129950743" Mar 20 17:44:48.316841 kubelet[2593]: I0320 17:44:48.316704 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.316688935 podStartE2EDuration="1.316688935s" podCreationTimestamp="2025-03-20 17:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 17:44:48.310795919 +0000 UTC m=+1.136054539" watchObservedRunningTime="2025-03-20 17:44:48.316688935 +0000 UTC m=+1.141947514" Mar 20 17:44:48.326898 kubelet[2593]: I0320 
17:44:48.325078 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.325064007 podStartE2EDuration="1.325064007s" podCreationTimestamp="2025-03-20 17:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 17:44:48.317142838 +0000 UTC m=+1.142401457" watchObservedRunningTime="2025-03-20 17:44:48.325064007 +0000 UTC m=+1.150322586" Mar 20 17:44:50.385662 sudo[1681]: pam_unix(sudo:session): session closed for user root Mar 20 17:44:50.386884 sshd[1680]: Connection closed by 10.0.0.1 port 49444 Mar 20 17:44:50.387300 sshd-session[1677]: pam_unix(sshd:session): session closed for user core Mar 20 17:44:50.390809 systemd[1]: sshd@6-10.0.0.10:22-10.0.0.1:49444.service: Deactivated successfully. Mar 20 17:44:50.392656 systemd[1]: session-7.scope: Deactivated successfully. Mar 20 17:44:50.392901 systemd[1]: session-7.scope: Consumed 7.267s CPU time, 262.7M memory peak. Mar 20 17:44:50.394336 systemd-logind[1463]: Session 7 logged out. Waiting for processes to exit. Mar 20 17:44:50.395158 systemd-logind[1463]: Removed session 7. Mar 20 17:44:51.612914 kubelet[2593]: I0320 17:44:51.612852 2593 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 20 17:44:51.613253 containerd[1491]: time="2025-03-20T17:44:51.613148204Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 20 17:44:51.613447 kubelet[2593]: I0320 17:44:51.613335 2593 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 20 17:44:52.316979 systemd[1]: Created slice kubepods-besteffort-pod0aa798c0_422f_4db9_91d3_83a896d302f8.slice - libcontainer container kubepods-besteffort-pod0aa798c0_422f_4db9_91d3_83a896d302f8.slice. 
Mar 20 17:44:52.327080 systemd[1]: Created slice kubepods-burstable-podd1fa6893_72e2_4c5c_be2d_ab1ceb6877ea.slice - libcontainer container kubepods-burstable-podd1fa6893_72e2_4c5c_be2d_ab1ceb6877ea.slice. Mar 20 17:44:52.374411 kubelet[2593]: I0320 17:44:52.374342 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-hubble-tls\") pod \"cilium-6qk8v\" (UID: \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\") " pod="kube-system/cilium-6qk8v" Mar 20 17:44:52.374411 kubelet[2593]: I0320 17:44:52.374391 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-cilium-cgroup\") pod \"cilium-6qk8v\" (UID: \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\") " pod="kube-system/cilium-6qk8v" Mar 20 17:44:52.374411 kubelet[2593]: I0320 17:44:52.374409 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-etc-cni-netd\") pod \"cilium-6qk8v\" (UID: \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\") " pod="kube-system/cilium-6qk8v" Mar 20 17:44:52.374411 kubelet[2593]: I0320 17:44:52.374423 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-lib-modules\") pod \"cilium-6qk8v\" (UID: \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\") " pod="kube-system/cilium-6qk8v" Mar 20 17:44:52.375021 kubelet[2593]: I0320 17:44:52.374438 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-xtables-lock\") pod \"cilium-6qk8v\" (UID: 
\"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\") " pod="kube-system/cilium-6qk8v" Mar 20 17:44:52.375021 kubelet[2593]: I0320 17:44:52.374452 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-host-proc-sys-net\") pod \"cilium-6qk8v\" (UID: \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\") " pod="kube-system/cilium-6qk8v" Mar 20 17:44:52.375021 kubelet[2593]: I0320 17:44:52.374468 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-bpf-maps\") pod \"cilium-6qk8v\" (UID: \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\") " pod="kube-system/cilium-6qk8v" Mar 20 17:44:52.375021 kubelet[2593]: I0320 17:44:52.374482 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0aa798c0-422f-4db9-91d3-83a896d302f8-kube-proxy\") pod \"kube-proxy-xchhj\" (UID: \"0aa798c0-422f-4db9-91d3-83a896d302f8\") " pod="kube-system/kube-proxy-xchhj" Mar 20 17:44:52.375021 kubelet[2593]: I0320 17:44:52.374499 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-clustermesh-secrets\") pod \"cilium-6qk8v\" (UID: \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\") " pod="kube-system/cilium-6qk8v" Mar 20 17:44:52.375021 kubelet[2593]: I0320 17:44:52.374563 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0aa798c0-422f-4db9-91d3-83a896d302f8-lib-modules\") pod \"kube-proxy-xchhj\" (UID: \"0aa798c0-422f-4db9-91d3-83a896d302f8\") " pod="kube-system/kube-proxy-xchhj" Mar 20 17:44:52.375145 kubelet[2593]: I0320 
17:44:52.374610 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-cilium-config-path\") pod \"cilium-6qk8v\" (UID: \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\") " pod="kube-system/cilium-6qk8v" Mar 20 17:44:52.375145 kubelet[2593]: I0320 17:44:52.374630 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-host-proc-sys-kernel\") pod \"cilium-6qk8v\" (UID: \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\") " pod="kube-system/cilium-6qk8v" Mar 20 17:44:52.375145 kubelet[2593]: I0320 17:44:52.374645 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0aa798c0-422f-4db9-91d3-83a896d302f8-xtables-lock\") pod \"kube-proxy-xchhj\" (UID: \"0aa798c0-422f-4db9-91d3-83a896d302f8\") " pod="kube-system/kube-proxy-xchhj" Mar 20 17:44:52.375145 kubelet[2593]: I0320 17:44:52.374659 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-hostproc\") pod \"cilium-6qk8v\" (UID: \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\") " pod="kube-system/cilium-6qk8v" Mar 20 17:44:52.375145 kubelet[2593]: I0320 17:44:52.374681 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldsjx\" (UniqueName: \"kubernetes.io/projected/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-kube-api-access-ldsjx\") pod \"cilium-6qk8v\" (UID: \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\") " pod="kube-system/cilium-6qk8v" Mar 20 17:44:52.375242 kubelet[2593]: I0320 17:44:52.374706 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-hlfql\" (UniqueName: \"kubernetes.io/projected/0aa798c0-422f-4db9-91d3-83a896d302f8-kube-api-access-hlfql\") pod \"kube-proxy-xchhj\" (UID: \"0aa798c0-422f-4db9-91d3-83a896d302f8\") " pod="kube-system/kube-proxy-xchhj" Mar 20 17:44:52.375242 kubelet[2593]: I0320 17:44:52.374733 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-cilium-run\") pod \"cilium-6qk8v\" (UID: \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\") " pod="kube-system/cilium-6qk8v" Mar 20 17:44:52.375242 kubelet[2593]: I0320 17:44:52.374772 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-cni-path\") pod \"cilium-6qk8v\" (UID: \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\") " pod="kube-system/cilium-6qk8v" Mar 20 17:44:52.627080 containerd[1491]: time="2025-03-20T17:44:52.626967926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xchhj,Uid:0aa798c0-422f-4db9-91d3-83a896d302f8,Namespace:kube-system,Attempt:0,}" Mar 20 17:44:52.634877 containerd[1491]: time="2025-03-20T17:44:52.634696513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6qk8v,Uid:d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea,Namespace:kube-system,Attempt:0,}" Mar 20 17:44:52.686874 systemd[1]: Created slice kubepods-besteffort-poddb01354d_8723_443e_ba90_acfdb0e66bd9.slice - libcontainer container kubepods-besteffort-poddb01354d_8723_443e_ba90_acfdb0e66bd9.slice. 
Mar 20 17:44:52.706502 containerd[1491]: time="2025-03-20T17:44:52.705521053Z" level=info msg="connecting to shim 1b499067418b393c1595093e6343aaa426482be2adb0861c4c2a8c6b157f12ac" address="unix:///run/containerd/s/5a1e0db68a7cfa828b2df52c37ec2ac10fd641014385b90d59787437c3aaba53" namespace=k8s.io protocol=ttrpc version=3 Mar 20 17:44:52.708517 containerd[1491]: time="2025-03-20T17:44:52.708477818Z" level=info msg="connecting to shim d6721752a481cdc4767be94fcd12789af278ad713e9efe695d1653c881e8a3da" address="unix:///run/containerd/s/42844817d096f9c49fc3492ab895b411b6411e5368f47919511226d8db701893" namespace=k8s.io protocol=ttrpc version=3 Mar 20 17:44:52.727012 systemd[1]: Started cri-containerd-1b499067418b393c1595093e6343aaa426482be2adb0861c4c2a8c6b157f12ac.scope - libcontainer container 1b499067418b393c1595093e6343aaa426482be2adb0861c4c2a8c6b157f12ac. Mar 20 17:44:52.729599 systemd[1]: Started cri-containerd-d6721752a481cdc4767be94fcd12789af278ad713e9efe695d1653c881e8a3da.scope - libcontainer container d6721752a481cdc4767be94fcd12789af278ad713e9efe695d1653c881e8a3da. 
Mar 20 17:44:52.752086 containerd[1491]: time="2025-03-20T17:44:52.751942122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6qk8v,Uid:d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6721752a481cdc4767be94fcd12789af278ad713e9efe695d1653c881e8a3da\"" Mar 20 17:44:52.753813 containerd[1491]: time="2025-03-20T17:44:52.753479435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xchhj,Uid:0aa798c0-422f-4db9-91d3-83a896d302f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b499067418b393c1595093e6343aaa426482be2adb0861c4c2a8c6b157f12ac\"" Mar 20 17:44:52.757318 containerd[1491]: time="2025-03-20T17:44:52.757280516Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 20 17:44:52.758804 containerd[1491]: time="2025-03-20T17:44:52.757282477Z" level=info msg="CreateContainer within sandbox \"1b499067418b393c1595093e6343aaa426482be2adb0861c4c2a8c6b157f12ac\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 20 17:44:52.764618 containerd[1491]: time="2025-03-20T17:44:52.764578601Z" level=info msg="Container 9ef557688b598f42f42422c8ae3709f287dc383cca64c4e2148c772886bcbdb7: CDI devices from CRI Config.CDIDevices: []" Mar 20 17:44:52.772667 containerd[1491]: time="2025-03-20T17:44:52.772627194Z" level=info msg="CreateContainer within sandbox \"1b499067418b393c1595093e6343aaa426482be2adb0861c4c2a8c6b157f12ac\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9ef557688b598f42f42422c8ae3709f287dc383cca64c4e2148c772886bcbdb7\"" Mar 20 17:44:52.773317 containerd[1491]: time="2025-03-20T17:44:52.773201210Z" level=info msg="StartContainer for \"9ef557688b598f42f42422c8ae3709f287dc383cca64c4e2148c772886bcbdb7\"" Mar 20 17:44:52.774865 containerd[1491]: time="2025-03-20T17:44:52.774836414Z" level=info msg="connecting to shim 
9ef557688b598f42f42422c8ae3709f287dc383cca64c4e2148c772886bcbdb7" address="unix:///run/containerd/s/5a1e0db68a7cfa828b2df52c37ec2ac10fd641014385b90d59787437c3aaba53" protocol=ttrpc version=3 Mar 20 17:44:52.776685 kubelet[2593]: I0320 17:44:52.776657 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6jlj\" (UniqueName: \"kubernetes.io/projected/db01354d-8723-443e-ba90-acfdb0e66bd9-kube-api-access-l6jlj\") pod \"cilium-operator-5d85765b45-dqfwc\" (UID: \"db01354d-8723-443e-ba90-acfdb0e66bd9\") " pod="kube-system/cilium-operator-5d85765b45-dqfwc" Mar 20 17:44:52.776965 kubelet[2593]: I0320 17:44:52.776697 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/db01354d-8723-443e-ba90-acfdb0e66bd9-cilium-config-path\") pod \"cilium-operator-5d85765b45-dqfwc\" (UID: \"db01354d-8723-443e-ba90-acfdb0e66bd9\") " pod="kube-system/cilium-operator-5d85765b45-dqfwc" Mar 20 17:44:52.793960 systemd[1]: Started cri-containerd-9ef557688b598f42f42422c8ae3709f287dc383cca64c4e2148c772886bcbdb7.scope - libcontainer container 9ef557688b598f42f42422c8ae3709f287dc383cca64c4e2148c772886bcbdb7. 
Mar 20 17:44:52.824360 containerd[1491]: time="2025-03-20T17:44:52.824325666Z" level=info msg="StartContainer for \"9ef557688b598f42f42422c8ae3709f287dc383cca64c4e2148c772886bcbdb7\" returns successfully" Mar 20 17:44:52.994015 containerd[1491]: time="2025-03-20T17:44:52.993888267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-dqfwc,Uid:db01354d-8723-443e-ba90-acfdb0e66bd9,Namespace:kube-system,Attempt:0,}" Mar 20 17:44:53.009818 containerd[1491]: time="2025-03-20T17:44:53.009771533Z" level=info msg="connecting to shim 205fc89c8557b483117c2f90710e1df412bd7dd9a4d1c1c844f5be546be1bc52" address="unix:///run/containerd/s/d78df5e956a2e6944b174dd9bd5375c845f1e47847d2495b276aed7a6d33bced" namespace=k8s.io protocol=ttrpc version=3 Mar 20 17:44:53.036073 systemd[1]: Started cri-containerd-205fc89c8557b483117c2f90710e1df412bd7dd9a4d1c1c844f5be546be1bc52.scope - libcontainer container 205fc89c8557b483117c2f90710e1df412bd7dd9a4d1c1c844f5be546be1bc52. Mar 20 17:44:53.063198 containerd[1491]: time="2025-03-20T17:44:53.063162237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-dqfwc,Uid:db01354d-8723-443e-ba90-acfdb0e66bd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"205fc89c8557b483117c2f90710e1df412bd7dd9a4d1c1c844f5be546be1bc52\"" Mar 20 17:44:54.698691 kubelet[2593]: I0320 17:44:54.698554 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xchhj" podStartSLOduration=2.698540439 podStartE2EDuration="2.698540439s" podCreationTimestamp="2025-03-20 17:44:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 17:44:53.298232978 +0000 UTC m=+6.123491597" watchObservedRunningTime="2025-03-20 17:44:54.698540439 +0000 UTC m=+7.523799058" Mar 20 17:44:57.302404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3484718793.mount: Deactivated 
successfully. Mar 20 17:44:58.582979 containerd[1491]: time="2025-03-20T17:44:58.582930610Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 17:44:58.583941 containerd[1491]: time="2025-03-20T17:44:58.583770825Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Mar 20 17:44:58.587006 containerd[1491]: time="2025-03-20T17:44:58.586972186Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 20 17:44:58.588505 containerd[1491]: time="2025-03-20T17:44:58.588476913Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.831153697s" Mar 20 17:44:58.588569 containerd[1491]: time="2025-03-20T17:44:58.588509325Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 20 17:44:58.591371 containerd[1491]: time="2025-03-20T17:44:58.591282976Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 20 17:44:58.592046 containerd[1491]: time="2025-03-20T17:44:58.592015753Z" level=info msg="CreateContainer within sandbox \"d6721752a481cdc4767be94fcd12789af278ad713e9efe695d1653c881e8a3da\" for 
container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 20 17:44:58.646093 containerd[1491]: time="2025-03-20T17:44:58.646042921Z" level=info msg="Container 4bc1f61d3dadeb33194e6b4c9e773cdf169796c1b08a916547aa46fbdfb9eb0a: CDI devices from CRI Config.CDIDevices: []" Mar 20 17:44:58.649559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1505344928.mount: Deactivated successfully. Mar 20 17:44:58.654378 containerd[1491]: time="2025-03-20T17:44:58.654328984Z" level=info msg="CreateContainer within sandbox \"d6721752a481cdc4767be94fcd12789af278ad713e9efe695d1653c881e8a3da\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4bc1f61d3dadeb33194e6b4c9e773cdf169796c1b08a916547aa46fbdfb9eb0a\"" Mar 20 17:44:58.655265 containerd[1491]: time="2025-03-20T17:44:58.655233220Z" level=info msg="StartContainer for \"4bc1f61d3dadeb33194e6b4c9e773cdf169796c1b08a916547aa46fbdfb9eb0a\"" Mar 20 17:44:58.656125 containerd[1491]: time="2025-03-20T17:44:58.656099844Z" level=info msg="connecting to shim 4bc1f61d3dadeb33194e6b4c9e773cdf169796c1b08a916547aa46fbdfb9eb0a" address="unix:///run/containerd/s/42844817d096f9c49fc3492ab895b411b6411e5368f47919511226d8db701893" protocol=ttrpc version=3 Mar 20 17:44:58.701091 systemd[1]: Started cri-containerd-4bc1f61d3dadeb33194e6b4c9e773cdf169796c1b08a916547aa46fbdfb9eb0a.scope - libcontainer container 4bc1f61d3dadeb33194e6b4c9e773cdf169796c1b08a916547aa46fbdfb9eb0a. Mar 20 17:44:58.728732 containerd[1491]: time="2025-03-20T17:44:58.728119795Z" level=info msg="StartContainer for \"4bc1f61d3dadeb33194e6b4c9e773cdf169796c1b08a916547aa46fbdfb9eb0a\" returns successfully" Mar 20 17:44:58.779255 systemd[1]: cri-containerd-4bc1f61d3dadeb33194e6b4c9e773cdf169796c1b08a916547aa46fbdfb9eb0a.scope: Deactivated successfully. Mar 20 17:44:58.779585 systemd[1]: cri-containerd-4bc1f61d3dadeb33194e6b4c9e773cdf169796c1b08a916547aa46fbdfb9eb0a.scope: Consumed 65ms CPU time, 8.7M memory peak, 3.1M written to disk. 
Mar 20 17:44:58.808216 containerd[1491]: time="2025-03-20T17:44:58.808162436Z" level=info msg="received exit event container_id:\"4bc1f61d3dadeb33194e6b4c9e773cdf169796c1b08a916547aa46fbdfb9eb0a\" id:\"4bc1f61d3dadeb33194e6b4c9e773cdf169796c1b08a916547aa46fbdfb9eb0a\" pid:3008 exited_at:{seconds:1742492698 nanos:799419333}" Mar 20 17:44:58.808355 containerd[1491]: time="2025-03-20T17:44:58.808259550Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4bc1f61d3dadeb33194e6b4c9e773cdf169796c1b08a916547aa46fbdfb9eb0a\" id:\"4bc1f61d3dadeb33194e6b4c9e773cdf169796c1b08a916547aa46fbdfb9eb0a\" pid:3008 exited_at:{seconds:1742492698 nanos:799419333}" Mar 20 17:44:58.845872 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4bc1f61d3dadeb33194e6b4c9e773cdf169796c1b08a916547aa46fbdfb9eb0a-rootfs.mount: Deactivated successfully. Mar 20 17:44:59.318083 containerd[1491]: time="2025-03-20T17:44:59.317983290Z" level=info msg="CreateContainer within sandbox \"d6721752a481cdc4767be94fcd12789af278ad713e9efe695d1653c881e8a3da\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 20 17:44:59.327845 containerd[1491]: time="2025-03-20T17:44:59.327588324Z" level=info msg="Container 6254c4e29362cbdd71ced3dcb112f3f7f174b247f5ae31e9d6f902e298096f76: CDI devices from CRI Config.CDIDevices: []" Mar 20 17:44:59.333977 containerd[1491]: time="2025-03-20T17:44:59.333927767Z" level=info msg="CreateContainer within sandbox \"d6721752a481cdc4767be94fcd12789af278ad713e9efe695d1653c881e8a3da\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6254c4e29362cbdd71ced3dcb112f3f7f174b247f5ae31e9d6f902e298096f76\"" Mar 20 17:44:59.334364 containerd[1491]: time="2025-03-20T17:44:59.334331579Z" level=info msg="StartContainer for \"6254c4e29362cbdd71ced3dcb112f3f7f174b247f5ae31e9d6f902e298096f76\"" Mar 20 17:44:59.335364 containerd[1491]: time="2025-03-20T17:44:59.335283452Z" level=info msg="connecting to shim 
6254c4e29362cbdd71ced3dcb112f3f7f174b247f5ae31e9d6f902e298096f76" address="unix:///run/containerd/s/42844817d096f9c49fc3492ab895b411b6411e5368f47919511226d8db701893" protocol=ttrpc version=3 Mar 20 17:44:59.352979 systemd[1]: Started cri-containerd-6254c4e29362cbdd71ced3dcb112f3f7f174b247f5ae31e9d6f902e298096f76.scope - libcontainer container 6254c4e29362cbdd71ced3dcb112f3f7f174b247f5ae31e9d6f902e298096f76. Mar 20 17:44:59.376584 containerd[1491]: time="2025-03-20T17:44:59.376531840Z" level=info msg="StartContainer for \"6254c4e29362cbdd71ced3dcb112f3f7f174b247f5ae31e9d6f902e298096f76\" returns successfully" Mar 20 17:44:59.387710 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 20 17:44:59.388112 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 20 17:44:59.388395 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 20 17:44:59.389624 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 20 17:44:59.389958 systemd[1]: cri-containerd-6254c4e29362cbdd71ced3dcb112f3f7f174b247f5ae31e9d6f902e298096f76.scope: Deactivated successfully. Mar 20 17:44:59.392091 containerd[1491]: time="2025-03-20T17:44:59.392047336Z" level=info msg="received exit event container_id:\"6254c4e29362cbdd71ced3dcb112f3f7f174b247f5ae31e9d6f902e298096f76\" id:\"6254c4e29362cbdd71ced3dcb112f3f7f174b247f5ae31e9d6f902e298096f76\" pid:3051 exited_at:{seconds:1742492699 nanos:390695732}" Mar 20 17:44:59.392167 containerd[1491]: time="2025-03-20T17:44:59.392147408Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6254c4e29362cbdd71ced3dcb112f3f7f174b247f5ae31e9d6f902e298096f76\" id:\"6254c4e29362cbdd71ced3dcb112f3f7f174b247f5ae31e9d6f902e298096f76\" pid:3051 exited_at:{seconds:1742492699 nanos:390695732}" Mar 20 17:44:59.423663 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 20 17:45:00.048539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount929929853.mount: Deactivated successfully. Mar 20 17:45:00.323790 containerd[1491]: time="2025-03-20T17:45:00.322992208Z" level=info msg="CreateContainer within sandbox \"d6721752a481cdc4767be94fcd12789af278ad713e9efe695d1653c881e8a3da\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 20 17:45:00.336381 containerd[1491]: time="2025-03-20T17:45:00.336340758Z" level=info msg="Container 34acf0d96eb152433a941272d9288657d5fee9bf3ea33e50acc05f85e5a1190c: CDI devices from CRI Config.CDIDevices: []" Mar 20 17:45:00.344044 containerd[1491]: time="2025-03-20T17:45:00.343929335Z" level=info msg="CreateContainer within sandbox \"d6721752a481cdc4767be94fcd12789af278ad713e9efe695d1653c881e8a3da\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"34acf0d96eb152433a941272d9288657d5fee9bf3ea33e50acc05f85e5a1190c\"" Mar 20 17:45:00.344572 containerd[1491]: time="2025-03-20T17:45:00.344449775Z" level=info msg="StartContainer for \"34acf0d96eb152433a941272d9288657d5fee9bf3ea33e50acc05f85e5a1190c\"" Mar 20 17:45:00.348233 containerd[1491]: time="2025-03-20T17:45:00.347717901Z" level=info msg="connecting to shim 34acf0d96eb152433a941272d9288657d5fee9bf3ea33e50acc05f85e5a1190c" address="unix:///run/containerd/s/42844817d096f9c49fc3492ab895b411b6411e5368f47919511226d8db701893" protocol=ttrpc version=3 Mar 20 17:45:00.373985 systemd[1]: Started cri-containerd-34acf0d96eb152433a941272d9288657d5fee9bf3ea33e50acc05f85e5a1190c.scope - libcontainer container 34acf0d96eb152433a941272d9288657d5fee9bf3ea33e50acc05f85e5a1190c. 
Mar 20 17:45:00.410463 containerd[1491]: time="2025-03-20T17:45:00.410406404Z" level=info msg="StartContainer for \"34acf0d96eb152433a941272d9288657d5fee9bf3ea33e50acc05f85e5a1190c\" returns successfully"
Mar 20 17:45:00.418290 systemd[1]: cri-containerd-34acf0d96eb152433a941272d9288657d5fee9bf3ea33e50acc05f85e5a1190c.scope: Deactivated successfully.
Mar 20 17:45:00.420544 containerd[1491]: time="2025-03-20T17:45:00.420442815Z" level=info msg="received exit event container_id:\"34acf0d96eb152433a941272d9288657d5fee9bf3ea33e50acc05f85e5a1190c\" id:\"34acf0d96eb152433a941272d9288657d5fee9bf3ea33e50acc05f85e5a1190c\" pid:3112 exited_at:{seconds:1742492700 nanos:420209183}"
Mar 20 17:45:00.420770 containerd[1491]: time="2025-03-20T17:45:00.420691371Z" level=info msg="TaskExit event in podsandbox handler container_id:\"34acf0d96eb152433a941272d9288657d5fee9bf3ea33e50acc05f85e5a1190c\" id:\"34acf0d96eb152433a941272d9288657d5fee9bf3ea33e50acc05f85e5a1190c\" pid:3112 exited_at:{seconds:1742492700 nanos:420209183}"
Mar 20 17:45:00.571578 containerd[1491]: time="2025-03-20T17:45:00.571380092Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 17:45:00.572118 containerd[1491]: time="2025-03-20T17:45:00.572072465Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Mar 20 17:45:00.572838 containerd[1491]: time="2025-03-20T17:45:00.572768439Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 20 17:45:00.574114 containerd[1491]: time="2025-03-20T17:45:00.573907150Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.9825838s"
Mar 20 17:45:00.574114 containerd[1491]: time="2025-03-20T17:45:00.573941641Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Mar 20 17:45:00.575952 containerd[1491]: time="2025-03-20T17:45:00.575862792Z" level=info msg="CreateContainer within sandbox \"205fc89c8557b483117c2f90710e1df412bd7dd9a4d1c1c844f5be546be1bc52\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 20 17:45:00.582959 containerd[1491]: time="2025-03-20T17:45:00.582919685Z" level=info msg="Container 6b319137ea75e56db644b049a1c6e3ee00e1d5f7970afe8596bf0b47e0408379: CDI devices from CRI Config.CDIDevices: []"
Mar 20 17:45:00.588179 containerd[1491]: time="2025-03-20T17:45:00.588084195Z" level=info msg="CreateContainer within sandbox \"205fc89c8557b483117c2f90710e1df412bd7dd9a4d1c1c844f5be546be1bc52\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6b319137ea75e56db644b049a1c6e3ee00e1d5f7970afe8596bf0b47e0408379\""
Mar 20 17:45:00.588561 containerd[1491]: time="2025-03-20T17:45:00.588477476Z" level=info msg="StartContainer for \"6b319137ea75e56db644b049a1c6e3ee00e1d5f7970afe8596bf0b47e0408379\""
Mar 20 17:45:00.589471 containerd[1491]: time="2025-03-20T17:45:00.589441933Z" level=info msg="connecting to shim 6b319137ea75e56db644b049a1c6e3ee00e1d5f7970afe8596bf0b47e0408379" address="unix:///run/containerd/s/d78df5e956a2e6944b174dd9bd5375c845f1e47847d2495b276aed7a6d33bced" protocol=ttrpc version=3
Mar 20 17:45:00.609021 systemd[1]: Started cri-containerd-6b319137ea75e56db644b049a1c6e3ee00e1d5f7970afe8596bf0b47e0408379.scope - libcontainer container 6b319137ea75e56db644b049a1c6e3ee00e1d5f7970afe8596bf0b47e0408379.
Mar 20 17:45:00.637943 containerd[1491]: time="2025-03-20T17:45:00.637418426Z" level=info msg="StartContainer for \"6b319137ea75e56db644b049a1c6e3ee00e1d5f7970afe8596bf0b47e0408379\" returns successfully"
Mar 20 17:45:00.648328 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34acf0d96eb152433a941272d9288657d5fee9bf3ea33e50acc05f85e5a1190c-rootfs.mount: Deactivated successfully.
Mar 20 17:45:01.059917 update_engine[1465]: I20250320 17:45:01.059286  1465 update_attempter.cc:509] Updating boot flags...
Mar 20 17:45:01.087503 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3186)
Mar 20 17:45:01.138852 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3185)
Mar 20 17:45:01.333153 containerd[1491]: time="2025-03-20T17:45:01.332458402Z" level=info msg="CreateContainer within sandbox \"d6721752a481cdc4767be94fcd12789af278ad713e9efe695d1653c881e8a3da\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 20 17:45:01.349193 containerd[1491]: time="2025-03-20T17:45:01.349148981Z" level=info msg="Container 62895be16a177935d37237c866af7928ba5dbc5aafeb71b4feb6de27e2bb840e: CDI devices from CRI Config.CDIDevices: []"
Mar 20 17:45:01.351775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2717238433.mount: Deactivated successfully.
Mar 20 17:45:01.359468 containerd[1491]: time="2025-03-20T17:45:01.359423547Z" level=info msg="CreateContainer within sandbox \"d6721752a481cdc4767be94fcd12789af278ad713e9efe695d1653c881e8a3da\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"62895be16a177935d37237c866af7928ba5dbc5aafeb71b4feb6de27e2bb840e\""
Mar 20 17:45:01.362741 containerd[1491]: time="2025-03-20T17:45:01.362707375Z" level=info msg="StartContainer for \"62895be16a177935d37237c866af7928ba5dbc5aafeb71b4feb6de27e2bb840e\""
Mar 20 17:45:01.364501 containerd[1491]: time="2025-03-20T17:45:01.364433593Z" level=info msg="connecting to shim 62895be16a177935d37237c866af7928ba5dbc5aafeb71b4feb6de27e2bb840e" address="unix:///run/containerd/s/42844817d096f9c49fc3492ab895b411b6411e5368f47919511226d8db701893" protocol=ttrpc version=3
Mar 20 17:45:01.365377 kubelet[2593]: I0320 17:45:01.365324    2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-dqfwc" podStartSLOduration=1.8547819749999999 podStartE2EDuration="9.365307885s" podCreationTimestamp="2025-03-20 17:44:52 +0000 UTC" firstStartedPulling="2025-03-20 17:44:53.064162601 +0000 UTC m=+5.889421220" lastFinishedPulling="2025-03-20 17:45:00.574688511 +0000 UTC m=+13.399947130" observedRunningTime="2025-03-20 17:45:01.337319245 +0000 UTC m=+14.162577864" watchObservedRunningTime="2025-03-20 17:45:01.365307885 +0000 UTC m=+14.190566504"
Mar 20 17:45:01.386035 systemd[1]: Started cri-containerd-62895be16a177935d37237c866af7928ba5dbc5aafeb71b4feb6de27e2bb840e.scope - libcontainer container 62895be16a177935d37237c866af7928ba5dbc5aafeb71b4feb6de27e2bb840e.
Mar 20 17:45:01.421637 containerd[1491]: time="2025-03-20T17:45:01.419412104Z" level=info msg="TaskExit event in podsandbox handler container_id:\"62895be16a177935d37237c866af7928ba5dbc5aafeb71b4feb6de27e2bb840e\" id:\"62895be16a177935d37237c866af7928ba5dbc5aafeb71b4feb6de27e2bb840e\" pid:3205 exited_at:{seconds:1742492701 nanos:419020671}"
Mar 20 17:45:01.421034 systemd[1]: cri-containerd-62895be16a177935d37237c866af7928ba5dbc5aafeb71b4feb6de27e2bb840e.scope: Deactivated successfully.
Mar 20 17:45:01.477128 containerd[1491]: time="2025-03-20T17:45:01.477081552Z" level=info msg="StartContainer for \"62895be16a177935d37237c866af7928ba5dbc5aafeb71b4feb6de27e2bb840e\" returns successfully"
Mar 20 17:45:01.488141 containerd[1491]: time="2025-03-20T17:45:01.488081128Z" level=info msg="received exit event container_id:\"62895be16a177935d37237c866af7928ba5dbc5aafeb71b4feb6de27e2bb840e\" id:\"62895be16a177935d37237c866af7928ba5dbc5aafeb71b4feb6de27e2bb840e\" pid:3205 exited_at:{seconds:1742492701 nanos:419020671}"
Mar 20 17:45:01.506162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62895be16a177935d37237c866af7928ba5dbc5aafeb71b4feb6de27e2bb840e-rootfs.mount: Deactivated successfully.
Mar 20 17:45:02.342977 containerd[1491]: time="2025-03-20T17:45:02.342919568Z" level=info msg="CreateContainer within sandbox \"d6721752a481cdc4767be94fcd12789af278ad713e9efe695d1653c881e8a3da\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 20 17:45:02.355965 containerd[1491]: time="2025-03-20T17:45:02.355675340Z" level=info msg="Container 928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666: CDI devices from CRI Config.CDIDevices: []"
Mar 20 17:45:02.365059 containerd[1491]: time="2025-03-20T17:45:02.365019589Z" level=info msg="CreateContainer within sandbox \"d6721752a481cdc4767be94fcd12789af278ad713e9efe695d1653c881e8a3da\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666\""
Mar 20 17:45:02.365669 containerd[1491]: time="2025-03-20T17:45:02.365557655Z" level=info msg="StartContainer for \"928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666\""
Mar 20 17:45:02.366633 containerd[1491]: time="2025-03-20T17:45:02.366602698Z" level=info msg="connecting to shim 928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666" address="unix:///run/containerd/s/42844817d096f9c49fc3492ab895b411b6411e5368f47919511226d8db701893" protocol=ttrpc version=3
Mar 20 17:45:02.393980 systemd[1]: Started cri-containerd-928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666.scope - libcontainer container 928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666.
Mar 20 17:45:02.436940 containerd[1491]: time="2025-03-20T17:45:02.436906525Z" level=info msg="StartContainer for \"928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666\" returns successfully"
Mar 20 17:45:02.556969 containerd[1491]: time="2025-03-20T17:45:02.556929449Z" level=info msg="TaskExit event in podsandbox handler container_id:\"928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666\" id:\"1a441f911369bccdd50a7408ecf2200f485f5f7c7e30477202d141a1c1e88349\" pid:3272 exited_at:{seconds:1742492702 nanos:556675540}"
Mar 20 17:45:02.579947 kubelet[2593]: I0320 17:45:02.579899    2593 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Mar 20 17:45:02.622690 systemd[1]: Created slice kubepods-burstable-pod0905a8cb_f347_41b1_9229_1943eb8b3033.slice - libcontainer container kubepods-burstable-pod0905a8cb_f347_41b1_9229_1943eb8b3033.slice.
Mar 20 17:45:02.627630 systemd[1]: Created slice kubepods-burstable-pod49945839_45b0_442a_b53e_b1bebfc63af6.slice - libcontainer container kubepods-burstable-pod49945839_45b0_442a_b53e_b1bebfc63af6.slice.
Mar 20 17:45:02.751569 kubelet[2593]: I0320 17:45:02.751416    2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mmn6\" (UniqueName: \"kubernetes.io/projected/0905a8cb-f347-41b1-9229-1943eb8b3033-kube-api-access-2mmn6\") pod \"coredns-6f6b679f8f-blpnb\" (UID: \"0905a8cb-f347-41b1-9229-1943eb8b3033\") " pod="kube-system/coredns-6f6b679f8f-blpnb"
Mar 20 17:45:02.751569 kubelet[2593]: I0320 17:45:02.751458    2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh7v6\" (UniqueName: \"kubernetes.io/projected/49945839-45b0-442a-b53e-b1bebfc63af6-kube-api-access-dh7v6\") pod \"coredns-6f6b679f8f-jspxm\" (UID: \"49945839-45b0-442a-b53e-b1bebfc63af6\") " pod="kube-system/coredns-6f6b679f8f-jspxm"
Mar 20 17:45:02.751569 kubelet[2593]: I0320 17:45:02.751480    2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49945839-45b0-442a-b53e-b1bebfc63af6-config-volume\") pod \"coredns-6f6b679f8f-jspxm\" (UID: \"49945839-45b0-442a-b53e-b1bebfc63af6\") " pod="kube-system/coredns-6f6b679f8f-jspxm"
Mar 20 17:45:02.751569 kubelet[2593]: I0320 17:45:02.751497    2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0905a8cb-f347-41b1-9229-1943eb8b3033-config-volume\") pod \"coredns-6f6b679f8f-blpnb\" (UID: \"0905a8cb-f347-41b1-9229-1943eb8b3033\") " pod="kube-system/coredns-6f6b679f8f-blpnb"
Mar 20 17:45:02.927249 containerd[1491]: time="2025-03-20T17:45:02.927143605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-blpnb,Uid:0905a8cb-f347-41b1-9229-1943eb8b3033,Namespace:kube-system,Attempt:0,}"
Mar 20 17:45:02.934690 containerd[1491]: time="2025-03-20T17:45:02.934381484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jspxm,Uid:49945839-45b0-442a-b53e-b1bebfc63af6,Namespace:kube-system,Attempt:0,}"
Mar 20 17:45:03.362634 kubelet[2593]: I0320 17:45:03.362567    2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6qk8v" podStartSLOduration=5.526422879 podStartE2EDuration="11.362551889s" podCreationTimestamp="2025-03-20 17:44:52 +0000 UTC" firstStartedPulling="2025-03-20 17:44:52.754595851 +0000 UTC m=+5.579854470" lastFinishedPulling="2025-03-20 17:44:58.590724901 +0000 UTC m=+11.415983480" observedRunningTime="2025-03-20 17:45:03.362060965 +0000 UTC m=+16.187319584" watchObservedRunningTime="2025-03-20 17:45:03.362551889 +0000 UTC m=+16.187810508"
Mar 20 17:45:04.633848 systemd-networkd[1410]: cilium_host: Link UP
Mar 20 17:45:04.633966 systemd-networkd[1410]: cilium_net: Link UP
Mar 20 17:45:04.634093 systemd-networkd[1410]: cilium_net: Gained carrier
Mar 20 17:45:04.634263 systemd-networkd[1410]: cilium_host: Gained carrier
Mar 20 17:45:04.710348 systemd-networkd[1410]: cilium_vxlan: Link UP
Mar 20 17:45:04.710354 systemd-networkd[1410]: cilium_vxlan: Gained carrier
Mar 20 17:45:05.036869 kernel: NET: Registered PF_ALG protocol family
Mar 20 17:45:05.177950 systemd-networkd[1410]: cilium_net: Gained IPv6LL
Mar 20 17:45:05.178233 systemd-networkd[1410]: cilium_host: Gained IPv6LL
Mar 20 17:45:05.593739 systemd-networkd[1410]: lxc_health: Link UP
Mar 20 17:45:05.599099 systemd-networkd[1410]: lxc_health: Gained carrier
Mar 20 17:45:06.055848 kernel: eth0: renamed from tmp4b85c
Mar 20 17:45:06.069330 systemd-networkd[1410]: lxc0090db5839e2: Link UP
Mar 20 17:45:06.070702 systemd-networkd[1410]: lxc0090db5839e2: Gained carrier
Mar 20 17:45:06.070957 systemd-networkd[1410]: lxca4fc117c114e: Link UP
Mar 20 17:45:06.078054 kernel: eth0: renamed from tmp4892f
Mar 20 17:45:06.084288 systemd-networkd[1410]: lxca4fc117c114e: Gained carrier
Mar 20 17:45:06.457970 systemd-networkd[1410]: cilium_vxlan: Gained IPv6LL
Mar 20 17:45:07.289984 systemd-networkd[1410]: lxc0090db5839e2: Gained IPv6LL
Mar 20 17:45:07.354057 systemd-networkd[1410]: lxc_health: Gained IPv6LL
Mar 20 17:45:07.801977 systemd-networkd[1410]: lxca4fc117c114e: Gained IPv6LL
Mar 20 17:45:09.547760 kubelet[2593]: I0320 17:45:09.547716    2593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 20 17:45:09.601564 containerd[1491]: time="2025-03-20T17:45:09.601037236Z" level=info msg="connecting to shim 4892f0e1c0d724bbe4ce9179165c9f6d5536afa71f97c6b5ba92e6b130587ff9" address="unix:///run/containerd/s/b3586b2fa0c44c37576f9a39e386300e4be69145bcfdbbea7f46e85dc108d9db" namespace=k8s.io protocol=ttrpc version=3
Mar 20 17:45:09.601564 containerd[1491]: time="2025-03-20T17:45:09.601120531Z" level=info msg="connecting to shim 4b85c4080a878ed161dc58e2b6c327e3add6699e45692bc585895a93f2660168" address="unix:///run/containerd/s/ea48b24ba4aa7bee9654c70925e797b0faa264585c30fd9cad13049a45c74e4e" namespace=k8s.io protocol=ttrpc version=3
Mar 20 17:45:09.633002 systemd[1]: Started cri-containerd-4892f0e1c0d724bbe4ce9179165c9f6d5536afa71f97c6b5ba92e6b130587ff9.scope - libcontainer container 4892f0e1c0d724bbe4ce9179165c9f6d5536afa71f97c6b5ba92e6b130587ff9.
Mar 20 17:45:09.634487 systemd[1]: Started cri-containerd-4b85c4080a878ed161dc58e2b6c327e3add6699e45692bc585895a93f2660168.scope - libcontainer container 4b85c4080a878ed161dc58e2b6c327e3add6699e45692bc585895a93f2660168.
Mar 20 17:45:09.645001 systemd-resolved[1321]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 20 17:45:09.649147 systemd-resolved[1321]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 20 17:45:09.665241 containerd[1491]: time="2025-03-20T17:45:09.665186288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jspxm,Uid:49945839-45b0-442a-b53e-b1bebfc63af6,Namespace:kube-system,Attempt:0,} returns sandbox id \"4892f0e1c0d724bbe4ce9179165c9f6d5536afa71f97c6b5ba92e6b130587ff9\""
Mar 20 17:45:09.672106 containerd[1491]: time="2025-03-20T17:45:09.671753260Z" level=info msg="CreateContainer within sandbox \"4892f0e1c0d724bbe4ce9179165c9f6d5536afa71f97c6b5ba92e6b130587ff9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 20 17:45:09.672106 containerd[1491]: time="2025-03-20T17:45:09.671992141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-blpnb,Uid:0905a8cb-f347-41b1-9229-1943eb8b3033,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b85c4080a878ed161dc58e2b6c327e3add6699e45692bc585895a93f2660168\""
Mar 20 17:45:09.675246 containerd[1491]: time="2025-03-20T17:45:09.675174489Z" level=info msg="CreateContainer within sandbox \"4b85c4080a878ed161dc58e2b6c327e3add6699e45692bc585895a93f2660168\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 20 17:45:09.682764 containerd[1491]: time="2025-03-20T17:45:09.682733071Z" level=info msg="Container 704bff3f5a733175780e0580605a0ee7e007c7a2b5cf04fdf4552e52cba1001e: CDI devices from CRI Config.CDIDevices: []"
Mar 20 17:45:09.687179 containerd[1491]: time="2025-03-20T17:45:09.687130749Z" level=info msg="Container 719b085871cce6fdbcbdfe21f732cfb432bcbdae681ee02ae2107574031d9ad3: CDI devices from CRI Config.CDIDevices: []"
Mar 20 17:45:09.690546 containerd[1491]: time="2025-03-20T17:45:09.690508411Z" level=info msg="CreateContainer within sandbox \"4892f0e1c0d724bbe4ce9179165c9f6d5536afa71f97c6b5ba92e6b130587ff9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"704bff3f5a733175780e0580605a0ee7e007c7a2b5cf04fdf4552e52cba1001e\""
Mar 20 17:45:09.691072 containerd[1491]: time="2025-03-20T17:45:09.691035862Z" level=info msg="StartContainer for \"704bff3f5a733175780e0580605a0ee7e007c7a2b5cf04fdf4552e52cba1001e\""
Mar 20 17:45:09.692423 containerd[1491]: time="2025-03-20T17:45:09.692393095Z" level=info msg="connecting to shim 704bff3f5a733175780e0580605a0ee7e007c7a2b5cf04fdf4552e52cba1001e" address="unix:///run/containerd/s/b3586b2fa0c44c37576f9a39e386300e4be69145bcfdbbea7f46e85dc108d9db" protocol=ttrpc version=3
Mar 20 17:45:09.693470 containerd[1491]: time="2025-03-20T17:45:09.693363623Z" level=info msg="CreateContainer within sandbox \"4b85c4080a878ed161dc58e2b6c327e3add6699e45692bc585895a93f2660168\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"719b085871cce6fdbcbdfe21f732cfb432bcbdae681ee02ae2107574031d9ad3\""
Mar 20 17:45:09.694925 containerd[1491]: time="2025-03-20T17:45:09.694713295Z" level=info msg="StartContainer for \"719b085871cce6fdbcbdfe21f732cfb432bcbdae681ee02ae2107574031d9ad3\""
Mar 20 17:45:09.701032 containerd[1491]: time="2025-03-20T17:45:09.699877705Z" level=info msg="connecting to shim 719b085871cce6fdbcbdfe21f732cfb432bcbdae681ee02ae2107574031d9ad3" address="unix:///run/containerd/s/ea48b24ba4aa7bee9654c70925e797b0faa264585c30fd9cad13049a45c74e4e" protocol=ttrpc version=3
Mar 20 17:45:09.711986 systemd[1]: Started cri-containerd-704bff3f5a733175780e0580605a0ee7e007c7a2b5cf04fdf4552e52cba1001e.scope - libcontainer container 704bff3f5a733175780e0580605a0ee7e007c7a2b5cf04fdf4552e52cba1001e.
Mar 20 17:45:09.714582 systemd[1]: Started cri-containerd-719b085871cce6fdbcbdfe21f732cfb432bcbdae681ee02ae2107574031d9ad3.scope - libcontainer container 719b085871cce6fdbcbdfe21f732cfb432bcbdae681ee02ae2107574031d9ad3.
Mar 20 17:45:09.745376 containerd[1491]: time="2025-03-20T17:45:09.745337537Z" level=info msg="StartContainer for \"704bff3f5a733175780e0580605a0ee7e007c7a2b5cf04fdf4552e52cba1001e\" returns successfully"
Mar 20 17:45:09.759459 containerd[1491]: time="2025-03-20T17:45:09.759422123Z" level=info msg="StartContainer for \"719b085871cce6fdbcbdfe21f732cfb432bcbdae681ee02ae2107574031d9ad3\" returns successfully"
Mar 20 17:45:10.391490 kubelet[2593]: I0320 17:45:10.391399    2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-blpnb" podStartSLOduration=18.391383075 podStartE2EDuration="18.391383075s" podCreationTimestamp="2025-03-20 17:44:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 17:45:10.391254414 +0000 UTC m=+23.216513033" watchObservedRunningTime="2025-03-20 17:45:10.391383075 +0000 UTC m=+23.216641654"
Mar 20 17:45:10.391490 kubelet[2593]: I0320 17:45:10.391487    2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-jspxm" podStartSLOduration=18.391483011 podStartE2EDuration="18.391483011s" podCreationTimestamp="2025-03-20 17:44:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 17:45:10.381987837 +0000 UTC m=+23.207246456" watchObservedRunningTime="2025-03-20 17:45:10.391483011 +0000 UTC m=+23.216741630"
Mar 20 17:45:14.840149 systemd[1]: Started sshd@7-10.0.0.10:22-10.0.0.1:60272.service - OpenSSH per-connection server daemon (10.0.0.1:60272).
Mar 20 17:45:14.898513 sshd[3924]: Accepted publickey for core from 10.0.0.1 port 60272 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 17:45:14.899864 sshd-session[3924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 17:45:14.903888 systemd-logind[1463]: New session 8 of user core.
Mar 20 17:45:14.919034 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 20 17:45:15.045539 sshd[3926]: Connection closed by 10.0.0.1 port 60272
Mar 20 17:45:15.045879 sshd-session[3924]: pam_unix(sshd:session): session closed for user core
Mar 20 17:45:15.049079 systemd[1]: sshd@7-10.0.0.10:22-10.0.0.1:60272.service: Deactivated successfully.
Mar 20 17:45:15.051227 systemd[1]: session-8.scope: Deactivated successfully.
Mar 20 17:45:15.052217 systemd-logind[1463]: Session 8 logged out. Waiting for processes to exit.
Mar 20 17:45:15.053137 systemd-logind[1463]: Removed session 8.
Mar 20 17:45:20.058532 systemd[1]: Started sshd@8-10.0.0.10:22-10.0.0.1:60276.service - OpenSSH per-connection server daemon (10.0.0.1:60276).
Mar 20 17:45:20.111932 sshd[3940]: Accepted publickey for core from 10.0.0.1 port 60276 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 17:45:20.113271 sshd-session[3940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 17:45:20.117731 systemd-logind[1463]: New session 9 of user core.
Mar 20 17:45:20.126991 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 20 17:45:20.238893 sshd[3942]: Connection closed by 10.0.0.1 port 60276
Mar 20 17:45:20.239428 sshd-session[3940]: pam_unix(sshd:session): session closed for user core
Mar 20 17:45:20.242626 systemd[1]: sshd@8-10.0.0.10:22-10.0.0.1:60276.service: Deactivated successfully.
Mar 20 17:45:20.244371 systemd[1]: session-9.scope: Deactivated successfully.
Mar 20 17:45:20.245047 systemd-logind[1463]: Session 9 logged out. Waiting for processes to exit.
Mar 20 17:45:20.245930 systemd-logind[1463]: Removed session 9.
Mar 20 17:45:25.255693 systemd[1]: Started sshd@9-10.0.0.10:22-10.0.0.1:55700.service - OpenSSH per-connection server daemon (10.0.0.1:55700).
Mar 20 17:45:25.311663 sshd[3959]: Accepted publickey for core from 10.0.0.1 port 55700 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 17:45:25.312805 sshd-session[3959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 17:45:25.317342 systemd-logind[1463]: New session 10 of user core.
Mar 20 17:45:25.321019 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 20 17:45:25.439817 sshd[3961]: Connection closed by 10.0.0.1 port 55700
Mar 20 17:45:25.439672 sshd-session[3959]: pam_unix(sshd:session): session closed for user core
Mar 20 17:45:25.443356 systemd[1]: sshd@9-10.0.0.10:22-10.0.0.1:55700.service: Deactivated successfully.
Mar 20 17:45:25.445203 systemd[1]: session-10.scope: Deactivated successfully.
Mar 20 17:45:25.446002 systemd-logind[1463]: Session 10 logged out. Waiting for processes to exit.
Mar 20 17:45:25.446851 systemd-logind[1463]: Removed session 10.
Mar 20 17:45:30.451417 systemd[1]: Started sshd@10-10.0.0.10:22-10.0.0.1:55704.service - OpenSSH per-connection server daemon (10.0.0.1:55704).
Mar 20 17:45:30.507737 sshd[3976]: Accepted publickey for core from 10.0.0.1 port 55704 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 17:45:30.508968 sshd-session[3976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 17:45:30.513672 systemd-logind[1463]: New session 11 of user core.
Mar 20 17:45:30.529012 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 20 17:45:30.637702 sshd[3978]: Connection closed by 10.0.0.1 port 55704
Mar 20 17:45:30.639029 sshd-session[3976]: pam_unix(sshd:session): session closed for user core
Mar 20 17:45:30.642397 systemd-logind[1463]: Session 11 logged out. Waiting for processes to exit.
Mar 20 17:45:30.642604 systemd[1]: sshd@10-10.0.0.10:22-10.0.0.1:55704.service: Deactivated successfully.
Mar 20 17:45:30.646941 systemd[1]: session-11.scope: Deactivated successfully.
Mar 20 17:45:30.648109 systemd-logind[1463]: Removed session 11.
Mar 20 17:45:35.656742 systemd[1]: Started sshd@11-10.0.0.10:22-10.0.0.1:54556.service - OpenSSH per-connection server daemon (10.0.0.1:54556).
Mar 20 17:45:35.712971 sshd[3993]: Accepted publickey for core from 10.0.0.1 port 54556 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 17:45:35.714274 sshd-session[3993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 17:45:35.721728 systemd-logind[1463]: New session 12 of user core.
Mar 20 17:45:35.732025 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 20 17:45:35.859295 sshd[3995]: Connection closed by 10.0.0.1 port 54556
Mar 20 17:45:35.859922 sshd-session[3993]: pam_unix(sshd:session): session closed for user core
Mar 20 17:45:35.876454 systemd[1]: sshd@11-10.0.0.10:22-10.0.0.1:54556.service: Deactivated successfully.
Mar 20 17:45:35.878523 systemd[1]: session-12.scope: Deactivated successfully.
Mar 20 17:45:35.881448 systemd-logind[1463]: Session 12 logged out. Waiting for processes to exit.
Mar 20 17:45:35.882618 systemd[1]: Started sshd@12-10.0.0.10:22-10.0.0.1:54558.service - OpenSSH per-connection server daemon (10.0.0.1:54558).
Mar 20 17:45:35.886391 systemd-logind[1463]: Removed session 12.
Mar 20 17:45:35.936302 sshd[4008]: Accepted publickey for core from 10.0.0.1 port 54558 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 17:45:35.937972 sshd-session[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 17:45:35.942481 systemd-logind[1463]: New session 13 of user core.
Mar 20 17:45:35.954106 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 20 17:45:36.126178 sshd[4011]: Connection closed by 10.0.0.1 port 54558
Mar 20 17:45:36.127206 sshd-session[4008]: pam_unix(sshd:session): session closed for user core
Mar 20 17:45:36.140791 systemd[1]: sshd@12-10.0.0.10:22-10.0.0.1:54558.service: Deactivated successfully.
Mar 20 17:45:36.145791 systemd[1]: session-13.scope: Deactivated successfully.
Mar 20 17:45:36.147917 systemd-logind[1463]: Session 13 logged out. Waiting for processes to exit.
Mar 20 17:45:36.153303 systemd[1]: Started sshd@13-10.0.0.10:22-10.0.0.1:54560.service - OpenSSH per-connection server daemon (10.0.0.1:54560).
Mar 20 17:45:36.159122 systemd-logind[1463]: Removed session 13.
Mar 20 17:45:36.215790 sshd[4022]: Accepted publickey for core from 10.0.0.1 port 54560 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 17:45:36.217203 sshd-session[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 17:45:36.221905 systemd-logind[1463]: New session 14 of user core.
Mar 20 17:45:36.231996 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 20 17:45:36.346151 sshd[4025]: Connection closed by 10.0.0.1 port 54560
Mar 20 17:45:36.345020 sshd-session[4022]: pam_unix(sshd:session): session closed for user core
Mar 20 17:45:36.348048 systemd[1]: sshd@13-10.0.0.10:22-10.0.0.1:54560.service: Deactivated successfully.
Mar 20 17:45:36.350578 systemd[1]: session-14.scope: Deactivated successfully.
Mar 20 17:45:36.352517 systemd-logind[1463]: Session 14 logged out. Waiting for processes to exit.
Mar 20 17:45:36.354069 systemd-logind[1463]: Removed session 14.
Mar 20 17:45:41.356418 systemd[1]: Started sshd@14-10.0.0.10:22-10.0.0.1:54568.service - OpenSSH per-connection server daemon (10.0.0.1:54568).
Mar 20 17:45:41.415539 sshd[4039]: Accepted publickey for core from 10.0.0.1 port 54568 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 17:45:41.416719 sshd-session[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 17:45:41.421377 systemd-logind[1463]: New session 15 of user core.
Mar 20 17:45:41.426963 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 20 17:45:41.532877 sshd[4041]: Connection closed by 10.0.0.1 port 54568
Mar 20 17:45:41.533460 sshd-session[4039]: pam_unix(sshd:session): session closed for user core
Mar 20 17:45:41.537179 systemd[1]: sshd@14-10.0.0.10:22-10.0.0.1:54568.service: Deactivated successfully.
Mar 20 17:45:41.538812 systemd[1]: session-15.scope: Deactivated successfully.
Mar 20 17:45:41.540016 systemd-logind[1463]: Session 15 logged out. Waiting for processes to exit.
Mar 20 17:45:41.540895 systemd-logind[1463]: Removed session 15.
Mar 20 17:45:46.545357 systemd[1]: Started sshd@15-10.0.0.10:22-10.0.0.1:43016.service - OpenSSH per-connection server daemon (10.0.0.1:43016).
Mar 20 17:45:46.597182 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 43016 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 17:45:46.598447 sshd-session[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 17:45:46.602862 systemd-logind[1463]: New session 16 of user core.
Mar 20 17:45:46.608957 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 20 17:45:46.718488 sshd[4057]: Connection closed by 10.0.0.1 port 43016
Mar 20 17:45:46.718882 sshd-session[4055]: pam_unix(sshd:session): session closed for user core
Mar 20 17:45:46.729956 systemd[1]: sshd@15-10.0.0.10:22-10.0.0.1:43016.service: Deactivated successfully.
Mar 20 17:45:46.731588 systemd[1]: session-16.scope: Deactivated successfully.
Mar 20 17:45:46.733128 systemd-logind[1463]: Session 16 logged out. Waiting for processes to exit.
Mar 20 17:45:46.734387 systemd[1]: Started sshd@16-10.0.0.10:22-10.0.0.1:43020.service - OpenSSH per-connection server daemon (10.0.0.1:43020).
Mar 20 17:45:46.735480 systemd-logind[1463]: Removed session 16.
Mar 20 17:45:46.787989 sshd[4070]: Accepted publickey for core from 10.0.0.1 port 43020 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 17:45:46.789206 sshd-session[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 17:45:46.793906 systemd-logind[1463]: New session 17 of user core.
Mar 20 17:45:46.801004 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 20 17:45:47.049987 sshd[4073]: Connection closed by 10.0.0.1 port 43020
Mar 20 17:45:47.050787 sshd-session[4070]: pam_unix(sshd:session): session closed for user core
Mar 20 17:45:47.063228 systemd[1]: sshd@16-10.0.0.10:22-10.0.0.1:43020.service: Deactivated successfully.
Mar 20 17:45:47.064936 systemd[1]: session-17.scope: Deactivated successfully.
Mar 20 17:45:47.065610 systemd-logind[1463]: Session 17 logged out. Waiting for processes to exit.
Mar 20 17:45:47.067564 systemd[1]: Started sshd@17-10.0.0.10:22-10.0.0.1:43022.service - OpenSSH per-connection server daemon (10.0.0.1:43022).
Mar 20 17:45:47.068385 systemd-logind[1463]: Removed session 17.
Mar 20 17:45:47.129590 sshd[4084]: Accepted publickey for core from 10.0.0.1 port 43022 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 17:45:47.131518 sshd-session[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 17:45:47.135900 systemd-logind[1463]: New session 18 of user core.
Mar 20 17:45:47.143998 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 20 17:45:48.459414 sshd[4087]: Connection closed by 10.0.0.1 port 43022
Mar 20 17:45:48.459969 sshd-session[4084]: pam_unix(sshd:session): session closed for user core
Mar 20 17:45:48.471814 systemd[1]: sshd@17-10.0.0.10:22-10.0.0.1:43022.service: Deactivated successfully.
Mar 20 17:45:48.477722 systemd[1]: session-18.scope: Deactivated successfully.
Mar 20 17:45:48.482055 systemd-logind[1463]: Session 18 logged out. Waiting for processes to exit.
Mar 20 17:45:48.487107 systemd[1]: Started sshd@18-10.0.0.10:22-10.0.0.1:43036.service - OpenSSH per-connection server daemon (10.0.0.1:43036).
Mar 20 17:45:48.487651 systemd-logind[1463]: Removed session 18.
Mar 20 17:45:48.544919 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 43036 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 17:45:48.546071 sshd-session[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 17:45:48.549801 systemd-logind[1463]: New session 19 of user core.
Mar 20 17:45:48.560983 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 20 17:45:48.772777 sshd[4112]: Connection closed by 10.0.0.1 port 43036
Mar 20 17:45:48.773500 sshd-session[4109]: pam_unix(sshd:session): session closed for user core
Mar 20 17:45:48.783330 systemd[1]: sshd@18-10.0.0.10:22-10.0.0.1:43036.service: Deactivated successfully.
Mar 20 17:45:48.784968 systemd[1]: session-19.scope: Deactivated successfully.
Mar 20 17:45:48.786663 systemd-logind[1463]: Session 19 logged out. Waiting for processes to exit.
Mar 20 17:45:48.788787 systemd[1]: Started sshd@19-10.0.0.10:22-10.0.0.1:43048.service - OpenSSH per-connection server daemon (10.0.0.1:43048).
Mar 20 17:45:48.790642 systemd-logind[1463]: Removed session 19.
Mar 20 17:45:48.840482 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 43048 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 17:45:48.841841 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 17:45:48.846130 systemd-logind[1463]: New session 20 of user core.
Mar 20 17:45:48.851003 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 20 17:45:48.963932 sshd[4126]: Connection closed by 10.0.0.1 port 43048
Mar 20 17:45:48.964471 sshd-session[4123]: pam_unix(sshd:session): session closed for user core
Mar 20 17:45:48.967876 systemd[1]: sshd@19-10.0.0.10:22-10.0.0.1:43048.service: Deactivated successfully.
Mar 20 17:45:48.969730 systemd[1]: session-20.scope: Deactivated successfully.
Mar 20 17:45:48.970471 systemd-logind[1463]: Session 20 logged out. Waiting for processes to exit.
Mar 20 17:45:48.971471 systemd-logind[1463]: Removed session 20.
Mar 20 17:45:53.976749 systemd[1]: Started sshd@20-10.0.0.10:22-10.0.0.1:47748.service - OpenSSH per-connection server daemon (10.0.0.1:47748).
Mar 20 17:45:54.032587 sshd[4144]: Accepted publickey for core from 10.0.0.1 port 47748 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 17:45:54.033985 sshd-session[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 17:45:54.037891 systemd-logind[1463]: New session 21 of user core.
Mar 20 17:45:54.053980 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 20 17:45:54.161684 sshd[4146]: Connection closed by 10.0.0.1 port 47748
Mar 20 17:45:54.162027 sshd-session[4144]: pam_unix(sshd:session): session closed for user core
Mar 20 17:45:54.165594 systemd[1]: sshd@20-10.0.0.10:22-10.0.0.1:47748.service: Deactivated successfully.
Mar 20 17:45:54.167386 systemd[1]: session-21.scope: Deactivated successfully.
Mar 20 17:45:54.168962 systemd-logind[1463]: Session 21 logged out. Waiting for processes to exit.
Mar 20 17:45:54.169893 systemd-logind[1463]: Removed session 21.
Mar 20 17:45:59.174489 systemd[1]: Started sshd@21-10.0.0.10:22-10.0.0.1:47754.service - OpenSSH per-connection server daemon (10.0.0.1:47754).
Mar 20 17:45:59.225293 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 47754 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 17:45:59.226536 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 17:45:59.230713 systemd-logind[1463]: New session 22 of user core.
Mar 20 17:45:59.237991 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 20 17:45:59.342926 sshd[4162]: Connection closed by 10.0.0.1 port 47754
Mar 20 17:45:59.343250 sshd-session[4160]: pam_unix(sshd:session): session closed for user core
Mar 20 17:45:59.346701 systemd[1]: sshd@21-10.0.0.10:22-10.0.0.1:47754.service: Deactivated successfully.
Mar 20 17:45:59.348435 systemd[1]: session-22.scope: Deactivated successfully.
Mar 20 17:45:59.349349 systemd-logind[1463]: Session 22 logged out. Waiting for processes to exit.
Mar 20 17:45:59.350524 systemd-logind[1463]: Removed session 22.
Mar 20 17:46:04.356667 systemd[1]: Started sshd@22-10.0.0.10:22-10.0.0.1:50122.service - OpenSSH per-connection server daemon (10.0.0.1:50122).
Mar 20 17:46:04.411518 sshd[4175]: Accepted publickey for core from 10.0.0.1 port 50122 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 17:46:04.412607 sshd-session[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 17:46:04.416609 systemd-logind[1463]: New session 23 of user core.
Mar 20 17:46:04.432040 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 20 17:46:04.541989 sshd[4177]: Connection closed by 10.0.0.1 port 50122
Mar 20 17:46:04.542531 sshd-session[4175]: pam_unix(sshd:session): session closed for user core
Mar 20 17:46:04.555051 systemd[1]: sshd@22-10.0.0.10:22-10.0.0.1:50122.service: Deactivated successfully.
Mar 20 17:46:04.556680 systemd[1]: session-23.scope: Deactivated successfully.
Mar 20 17:46:04.558005 systemd-logind[1463]: Session 23 logged out. Waiting for processes to exit.
Mar 20 17:46:04.559737 systemd[1]: Started sshd@23-10.0.0.10:22-10.0.0.1:50134.service - OpenSSH per-connection server daemon (10.0.0.1:50134).
Mar 20 17:46:04.560709 systemd-logind[1463]: Removed session 23.
Mar 20 17:46:04.611461 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 50134 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o
Mar 20 17:46:04.612520 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 20 17:46:04.616854 systemd-logind[1463]: New session 24 of user core.
Mar 20 17:46:04.626975 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 20 17:46:06.239438 containerd[1491]: time="2025-03-20T17:46:06.239391257Z" level=info msg="StopContainer for \"6b319137ea75e56db644b049a1c6e3ee00e1d5f7970afe8596bf0b47e0408379\" with timeout 30 (s)"
Mar 20 17:46:06.240475 containerd[1491]: time="2025-03-20T17:46:06.240394070Z" level=info msg="Stop container \"6b319137ea75e56db644b049a1c6e3ee00e1d5f7970afe8596bf0b47e0408379\" with signal terminated"
Mar 20 17:46:06.253324 systemd[1]: cri-containerd-6b319137ea75e56db644b049a1c6e3ee00e1d5f7970afe8596bf0b47e0408379.scope: Deactivated successfully.
Mar 20 17:46:06.255685 containerd[1491]: time="2025-03-20T17:46:06.255638628Z" level=info msg="received exit event container_id:\"6b319137ea75e56db644b049a1c6e3ee00e1d5f7970afe8596bf0b47e0408379\" id:\"6b319137ea75e56db644b049a1c6e3ee00e1d5f7970afe8596bf0b47e0408379\" pid:3155 exited_at:{seconds:1742492766 nanos:255388625}"
Mar 20 17:46:06.256761 containerd[1491]: time="2025-03-20T17:46:06.256359438Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6b319137ea75e56db644b049a1c6e3ee00e1d5f7970afe8596bf0b47e0408379\" id:\"6b319137ea75e56db644b049a1c6e3ee00e1d5f7970afe8596bf0b47e0408379\" pid:3155 exited_at:{seconds:1742492766 nanos:255388625}"
Mar 20 17:46:06.283394 containerd[1491]: time="2025-03-20T17:46:06.283352630Z" level=info msg="TaskExit event in podsandbox handler container_id:\"928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666\" id:\"c000d26a6f7d2cc98405c42b3cb064b17f6de782a8be450d0635752979c2a82b\" pid:4220 exited_at:{seconds:1742492766 nanos:282871663}"
Mar 20 17:46:06.285094 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b319137ea75e56db644b049a1c6e3ee00e1d5f7970afe8596bf0b47e0408379-rootfs.mount: Deactivated successfully.
Mar 20 17:46:06.286778 containerd[1491]: time="2025-03-20T17:46:06.286641432Z" level=info msg="StopContainer for \"928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666\" with timeout 2 (s)"
Mar 20 17:46:06.287682 containerd[1491]: time="2025-03-20T17:46:06.287646206Z" level=info msg="Stop container \"928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666\" with signal terminated"
Mar 20 17:46:06.293899 systemd-networkd[1410]: lxc_health: Link DOWN
Mar 20 17:46:06.293905 systemd-networkd[1410]: lxc_health: Lost carrier
Mar 20 17:46:06.304463 containerd[1491]: time="2025-03-20T17:46:06.304143140Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 20 17:46:06.304463 containerd[1491]: time="2025-03-20T17:46:06.304332823Z" level=info msg="StopContainer for \"6b319137ea75e56db644b049a1c6e3ee00e1d5f7970afe8596bf0b47e0408379\" returns successfully"
Mar 20 17:46:06.305187 containerd[1491]: time="2025-03-20T17:46:06.305160434Z" level=info msg="StopPodSandbox for \"205fc89c8557b483117c2f90710e1df412bd7dd9a4d1c1c844f5be546be1bc52\""
Mar 20 17:46:06.313033 systemd[1]: cri-containerd-928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666.scope: Deactivated successfully.
Mar 20 17:46:06.313351 systemd[1]: cri-containerd-928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666.scope: Consumed 6.490s CPU time, 123.2M memory peak, 156K read from disk, 12.9M written to disk.
Mar 20 17:46:06.314199 containerd[1491]: time="2025-03-20T17:46:06.314163391Z" level=info msg="received exit event container_id:\"928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666\" id:\"928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666\" pid:3242 exited_at:{seconds:1742492766 nanos:313970989}"
Mar 20 17:46:06.314476 containerd[1491]: time="2025-03-20T17:46:06.314440715Z" level=info msg="TaskExit event in podsandbox handler container_id:\"928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666\" id:\"928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666\" pid:3242 exited_at:{seconds:1742492766 nanos:313970989}"
Mar 20 17:46:06.319149 containerd[1491]: time="2025-03-20T17:46:06.319020294Z" level=info msg="Container to stop \"6b319137ea75e56db644b049a1c6e3ee00e1d5f7970afe8596bf0b47e0408379\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 20 17:46:06.333350 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666-rootfs.mount: Deactivated successfully.
Mar 20 17:46:06.334397 systemd[1]: cri-containerd-205fc89c8557b483117c2f90710e1df412bd7dd9a4d1c1c844f5be546be1bc52.scope: Deactivated successfully.
Mar 20 17:46:06.337972 containerd[1491]: time="2025-03-20T17:46:06.337707978Z" level=info msg="TaskExit event in podsandbox handler container_id:\"205fc89c8557b483117c2f90710e1df412bd7dd9a4d1c1c844f5be546be1bc52\" id:\"205fc89c8557b483117c2f90710e1df412bd7dd9a4d1c1c844f5be546be1bc52\" pid:2863 exit_status:137 exited_at:{seconds:1742492766 nanos:337397054}"
Mar 20 17:46:06.344396 containerd[1491]: time="2025-03-20T17:46:06.344360065Z" level=info msg="StopContainer for \"928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666\" returns successfully"
Mar 20 17:46:06.344879 containerd[1491]: time="2025-03-20T17:46:06.344853991Z" level=info msg="StopPodSandbox for \"d6721752a481cdc4767be94fcd12789af278ad713e9efe695d1653c881e8a3da\""
Mar 20 17:46:06.344934 containerd[1491]: time="2025-03-20T17:46:06.344918832Z" level=info msg="Container to stop \"4bc1f61d3dadeb33194e6b4c9e773cdf169796c1b08a916547aa46fbdfb9eb0a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 20 17:46:06.344973 containerd[1491]: time="2025-03-20T17:46:06.344935512Z" level=info msg="Container to stop \"34acf0d96eb152433a941272d9288657d5fee9bf3ea33e50acc05f85e5a1190c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 20 17:46:06.344973 containerd[1491]: time="2025-03-20T17:46:06.344944912Z" level=info msg="Container to stop \"928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 20 17:46:06.344973 containerd[1491]: time="2025-03-20T17:46:06.344954112Z" level=info msg="Container to stop \"6254c4e29362cbdd71ced3dcb112f3f7f174b247f5ae31e9d6f902e298096f76\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 20 17:46:06.344973 containerd[1491]: time="2025-03-20T17:46:06.344962112Z" level=info msg="Container to stop \"62895be16a177935d37237c866af7928ba5dbc5aafeb71b4feb6de27e2bb840e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 20 17:46:06.350268 systemd[1]: cri-containerd-d6721752a481cdc4767be94fcd12789af278ad713e9efe695d1653c881e8a3da.scope: Deactivated successfully.
Mar 20 17:46:06.368066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-205fc89c8557b483117c2f90710e1df412bd7dd9a4d1c1c844f5be546be1bc52-rootfs.mount: Deactivated successfully.
Mar 20 17:46:06.370493 containerd[1491]: time="2025-03-20T17:46:06.370455485Z" level=info msg="shim disconnected" id=205fc89c8557b483117c2f90710e1df412bd7dd9a4d1c1c844f5be546be1bc52 namespace=k8s.io
Mar 20 17:46:06.370667 containerd[1491]: time="2025-03-20T17:46:06.370488525Z" level=warning msg="cleaning up after shim disconnected" id=205fc89c8557b483117c2f90710e1df412bd7dd9a4d1c1c844f5be546be1bc52 namespace=k8s.io
Mar 20 17:46:06.370667 containerd[1491]: time="2025-03-20T17:46:06.370518365Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 20 17:46:06.377748 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6721752a481cdc4767be94fcd12789af278ad713e9efe695d1653c881e8a3da-rootfs.mount: Deactivated successfully.
Mar 20 17:46:06.382124 containerd[1491]: time="2025-03-20T17:46:06.382036796Z" level=info msg="shim disconnected" id=d6721752a481cdc4767be94fcd12789af278ad713e9efe695d1653c881e8a3da namespace=k8s.io
Mar 20 17:46:06.382124 containerd[1491]: time="2025-03-20T17:46:06.382066836Z" level=warning msg="cleaning up after shim disconnected" id=d6721752a481cdc4767be94fcd12789af278ad713e9efe695d1653c881e8a3da namespace=k8s.io
Mar 20 17:46:06.382124 containerd[1491]: time="2025-03-20T17:46:06.382097316Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 20 17:46:06.390081 containerd[1491]: time="2025-03-20T17:46:06.390037340Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6721752a481cdc4767be94fcd12789af278ad713e9efe695d1653c881e8a3da\" id:\"d6721752a481cdc4767be94fcd12789af278ad713e9efe695d1653c881e8a3da\" pid:2741 exit_status:137 exited_at:{seconds:1742492766 nanos:351875963}"
Mar 20 17:46:06.392011 containerd[1491]: time="2025-03-20T17:46:06.390329704Z" level=info msg="TearDown network for sandbox \"d6721752a481cdc4767be94fcd12789af278ad713e9efe695d1653c881e8a3da\" successfully"
Mar 20 17:46:06.392011 containerd[1491]: time="2025-03-20T17:46:06.390351424Z" level=info msg="StopPodSandbox for \"d6721752a481cdc4767be94fcd12789af278ad713e9efe695d1653c881e8a3da\" returns successfully"
Mar 20 17:46:06.391875 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-205fc89c8557b483117c2f90710e1df412bd7dd9a4d1c1c844f5be546be1bc52-shm.mount: Deactivated successfully.
Mar 20 17:46:06.391975 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d6721752a481cdc4767be94fcd12789af278ad713e9efe695d1653c881e8a3da-shm.mount: Deactivated successfully.
Mar 20 17:46:06.402735 containerd[1491]: time="2025-03-20T17:46:06.401689172Z" level=info msg="received exit event sandbox_id:\"205fc89c8557b483117c2f90710e1df412bd7dd9a4d1c1c844f5be546be1bc52\" exit_status:137 exited_at:{seconds:1742492766 nanos:337397054}"
Mar 20 17:46:06.402735 containerd[1491]: time="2025-03-20T17:46:06.402042336Z" level=info msg="received exit event sandbox_id:\"d6721752a481cdc4767be94fcd12789af278ad713e9efe695d1653c881e8a3da\" exit_status:137 exited_at:{seconds:1742492766 nanos:351875963}"
Mar 20 17:46:06.408341 containerd[1491]: time="2025-03-20T17:46:06.408290858Z" level=info msg="TearDown network for sandbox \"205fc89c8557b483117c2f90710e1df412bd7dd9a4d1c1c844f5be546be1bc52\" successfully"
Mar 20 17:46:06.408341 containerd[1491]: time="2025-03-20T17:46:06.408330498Z" level=info msg="StopPodSandbox for \"205fc89c8557b483117c2f90710e1df412bd7dd9a4d1c1c844f5be546be1bc52\" returns successfully"
Mar 20 17:46:06.430416 kubelet[2593]: I0320 17:46:06.430367 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-cilium-run\") pod \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\" (UID: \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\") "
Mar 20 17:46:06.430416 kubelet[2593]: I0320 17:46:06.430413 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6jlj\" (UniqueName: \"kubernetes.io/projected/db01354d-8723-443e-ba90-acfdb0e66bd9-kube-api-access-l6jlj\") pod \"db01354d-8723-443e-ba90-acfdb0e66bd9\" (UID: \"db01354d-8723-443e-ba90-acfdb0e66bd9\") "
Mar 20 17:46:06.431935 kubelet[2593]: I0320 17:46:06.430436 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-xtables-lock\") pod \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\" (UID: \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\") "
Mar 20 17:46:06.431935 kubelet[2593]: I0320 17:46:06.430457 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-cni-path\") pod \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\" (UID: \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\") "
Mar 20 17:46:06.431935 kubelet[2593]: I0320 17:46:06.430476 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldsjx\" (UniqueName: \"kubernetes.io/projected/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-kube-api-access-ldsjx\") pod \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\" (UID: \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\") "
Mar 20 17:46:06.431935 kubelet[2593]: I0320 17:46:06.430492 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-host-proc-sys-net\") pod \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\" (UID: \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\") "
Mar 20 17:46:06.431935 kubelet[2593]: I0320 17:46:06.430507 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-host-proc-sys-kernel\") pod \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\" (UID: \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\") "
Mar 20 17:46:06.431935 kubelet[2593]: I0320 17:46:06.430523 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-etc-cni-netd\") pod \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\" (UID: \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\") "
Mar 20 17:46:06.432122 kubelet[2593]: I0320 17:46:06.430540 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-hubble-tls\") pod \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\" (UID: \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\") "
Mar 20 17:46:06.432122 kubelet[2593]: I0320 17:46:06.430553 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-lib-modules\") pod \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\" (UID: \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\") "
Mar 20 17:46:06.432122 kubelet[2593]: I0320 17:46:06.430570 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-clustermesh-secrets\") pod \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\" (UID: \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\") "
Mar 20 17:46:06.432122 kubelet[2593]: I0320 17:46:06.430588 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-cilium-config-path\") pod \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\" (UID: \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\") "
Mar 20 17:46:06.432122 kubelet[2593]: I0320 17:46:06.430604 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/db01354d-8723-443e-ba90-acfdb0e66bd9-cilium-config-path\") pod \"db01354d-8723-443e-ba90-acfdb0e66bd9\" (UID: \"db01354d-8723-443e-ba90-acfdb0e66bd9\") "
Mar 20 17:46:06.432122 kubelet[2593]: I0320 17:46:06.430624 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-cilium-cgroup\") pod \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\" (UID: \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\") "
Mar 20 17:46:06.432345 kubelet[2593]: I0320 17:46:06.430639 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-bpf-maps\") pod \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\" (UID: \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\") "
Mar 20 17:46:06.432345 kubelet[2593]: I0320 17:46:06.430652 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-hostproc\") pod \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\" (UID: \"d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea\") "
Mar 20 17:46:06.441959 kubelet[2593]: I0320 17:46:06.441909 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea" (UID: "d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 17:46:06.442217 kubelet[2593]: I0320 17:46:06.442176 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea" (UID: "d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 17:46:06.442265 kubelet[2593]: I0320 17:46:06.442235 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea" (UID: "d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 17:46:06.443687 kubelet[2593]: I0320 17:46:06.443468 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db01354d-8723-443e-ba90-acfdb0e66bd9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "db01354d-8723-443e-ba90-acfdb0e66bd9" (UID: "db01354d-8723-443e-ba90-acfdb0e66bd9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 17:46:06.443687 kubelet[2593]: I0320 17:46:06.443527 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea" (UID: "d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 17:46:06.443687 kubelet[2593]: I0320 17:46:06.443549 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea" (UID: "d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 17:46:06.443687 kubelet[2593]: I0320 17:46:06.443565 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-hostproc" (OuterVolumeSpecName: "hostproc") pod "d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea" (UID: "d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 17:46:06.444990 kubelet[2593]: I0320 17:46:06.444942 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db01354d-8723-443e-ba90-acfdb0e66bd9-kube-api-access-l6jlj" (OuterVolumeSpecName: "kube-api-access-l6jlj") pod "db01354d-8723-443e-ba90-acfdb0e66bd9" (UID: "db01354d-8723-443e-ba90-acfdb0e66bd9"). InnerVolumeSpecName "kube-api-access-l6jlj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 17:46:06.445062 kubelet[2593]: I0320 17:46:06.445003 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea" (UID: "d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 17:46:06.445062 kubelet[2593]: I0320 17:46:06.445025 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea" (UID: "d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 17:46:06.445062 kubelet[2593]: I0320 17:46:06.445043 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea" (UID: "d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 17:46:06.445062 kubelet[2593]: I0320 17:46:06.445062 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-cni-path" (OuterVolumeSpecName: "cni-path") pod "d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea" (UID: "d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 17:46:06.445624 kubelet[2593]: I0320 17:46:06.445589 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea" (UID: "d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 17:46:06.445938 kubelet[2593]: I0320 17:46:06.445899 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-kube-api-access-ldsjx" (OuterVolumeSpecName: "kube-api-access-ldsjx") pod "d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea" (UID: "d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea"). InnerVolumeSpecName "kube-api-access-ldsjx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 17:46:06.447020 kubelet[2593]: I0320 17:46:06.446898 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea" (UID: "d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 17:46:06.447398 kubelet[2593]: I0320 17:46:06.447367 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea" (UID: "d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 17:46:06.484729 kubelet[2593]: I0320 17:46:06.484703 2593 scope.go:117] "RemoveContainer" containerID="6b319137ea75e56db644b049a1c6e3ee00e1d5f7970afe8596bf0b47e0408379"
Mar 20 17:46:06.486518 containerd[1491]: time="2025-03-20T17:46:06.486475157Z" level=info msg="RemoveContainer for \"6b319137ea75e56db644b049a1c6e3ee00e1d5f7970afe8596bf0b47e0408379\""
Mar 20 17:46:06.492606 systemd[1]: Removed slice kubepods-besteffort-poddb01354d_8723_443e_ba90_acfdb0e66bd9.slice - libcontainer container kubepods-besteffort-poddb01354d_8723_443e_ba90_acfdb0e66bd9.slice.
Mar 20 17:46:06.493511 containerd[1491]: time="2025-03-20T17:46:06.493476408Z" level=info msg="RemoveContainer for \"6b319137ea75e56db644b049a1c6e3ee00e1d5f7970afe8596bf0b47e0408379\" returns successfully"
Mar 20 17:46:06.493718 kubelet[2593]: I0320 17:46:06.493690 2593 scope.go:117] "RemoveContainer" containerID="6b319137ea75e56db644b049a1c6e3ee00e1d5f7970afe8596bf0b47e0408379"
Mar 20 17:46:06.494461 containerd[1491]: time="2025-03-20T17:46:06.494423780Z" level=error msg="ContainerStatus for \"6b319137ea75e56db644b049a1c6e3ee00e1d5f7970afe8596bf0b47e0408379\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6b319137ea75e56db644b049a1c6e3ee00e1d5f7970afe8596bf0b47e0408379\": not found"
Mar 20 17:46:06.496915 systemd[1]: Removed slice kubepods-burstable-podd1fa6893_72e2_4c5c_be2d_ab1ceb6877ea.slice - libcontainer container kubepods-burstable-podd1fa6893_72e2_4c5c_be2d_ab1ceb6877ea.slice.
Mar 20 17:46:06.497015 systemd[1]: kubepods-burstable-podd1fa6893_72e2_4c5c_be2d_ab1ceb6877ea.slice: Consumed 6.628s CPU time, 123.5M memory peak, 176K read from disk, 16.1M written to disk.
Mar 20 17:46:06.505554 kubelet[2593]: E0320 17:46:06.505367 2593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6b319137ea75e56db644b049a1c6e3ee00e1d5f7970afe8596bf0b47e0408379\": not found" containerID="6b319137ea75e56db644b049a1c6e3ee00e1d5f7970afe8596bf0b47e0408379"
Mar 20 17:46:06.505554 kubelet[2593]: I0320 17:46:06.505451 2593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6b319137ea75e56db644b049a1c6e3ee00e1d5f7970afe8596bf0b47e0408379"} err="failed to get container status \"6b319137ea75e56db644b049a1c6e3ee00e1d5f7970afe8596bf0b47e0408379\": rpc error: code = NotFound desc = an error occurred when try to find container \"6b319137ea75e56db644b049a1c6e3ee00e1d5f7970afe8596bf0b47e0408379\": not found"
Mar 20 17:46:06.505685 kubelet[2593]: I0320 17:46:06.505592 2593 scope.go:117] "RemoveContainer" containerID="928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666"
Mar 20 17:46:06.507745 containerd[1491]: time="2025-03-20T17:46:06.507711473Z" level=info msg="RemoveContainer for \"928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666\""
Mar 20 17:46:06.515789 containerd[1491]: time="2025-03-20T17:46:06.515749418Z" level=info msg="RemoveContainer for \"928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666\" returns successfully"
Mar 20 17:46:06.516530 kubelet[2593]: I0320 17:46:06.516035 2593 scope.go:117] "RemoveContainer" containerID="62895be16a177935d37237c866af7928ba5dbc5aafeb71b4feb6de27e2bb840e"
Mar 20 17:46:06.517789 containerd[1491]: time="2025-03-20T17:46:06.517761924Z" level=info msg="RemoveContainer for \"62895be16a177935d37237c866af7928ba5dbc5aafeb71b4feb6de27e2bb840e\""
Mar 20 17:46:06.521520 containerd[1491]: time="2025-03-20T17:46:06.521396612Z" level=info msg="RemoveContainer for \"62895be16a177935d37237c866af7928ba5dbc5aafeb71b4feb6de27e2bb840e\" returns successfully"
Mar 20 17:46:06.521694 kubelet[2593]: I0320 17:46:06.521599 2593 scope.go:117] "RemoveContainer" containerID="34acf0d96eb152433a941272d9288657d5fee9bf3ea33e50acc05f85e5a1190c" Mar 20 17:46:06.525447 containerd[1491]: time="2025-03-20T17:46:06.525403024Z" level=info msg="RemoveContainer for \"34acf0d96eb152433a941272d9288657d5fee9bf3ea33e50acc05f85e5a1190c\"" Mar 20 17:46:06.531705 kubelet[2593]: I0320 17:46:06.531676 2593 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 20 17:46:06.531705 kubelet[2593]: I0320 17:46:06.531704 2593 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 20 17:46:06.531705 kubelet[2593]: I0320 17:46:06.531713 2593 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 20 17:46:06.531869 kubelet[2593]: I0320 17:46:06.531721 2593 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/db01354d-8723-443e-ba90-acfdb0e66bd9-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 20 17:46:06.531869 kubelet[2593]: I0320 17:46:06.531732 2593 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 20 17:46:06.531869 kubelet[2593]: I0320 17:46:06.531741 2593 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 20 
17:46:06.531869 kubelet[2593]: I0320 17:46:06.531749 2593 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 20 17:46:06.531869 kubelet[2593]: I0320 17:46:06.531756 2593 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 20 17:46:06.531869 kubelet[2593]: I0320 17:46:06.531764 2593 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 20 17:46:06.531869 kubelet[2593]: I0320 17:46:06.531771 2593 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 20 17:46:06.531869 kubelet[2593]: I0320 17:46:06.531803 2593 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-l6jlj\" (UniqueName: \"kubernetes.io/projected/db01354d-8723-443e-ba90-acfdb0e66bd9-kube-api-access-l6jlj\") on node \"localhost\" DevicePath \"\"" Mar 20 17:46:06.532027 kubelet[2593]: I0320 17:46:06.531811 2593 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 20 17:46:06.532027 kubelet[2593]: I0320 17:46:06.531818 2593 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 20 17:46:06.532027 kubelet[2593]: I0320 17:46:06.531842 2593 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 20 17:46:06.532027 kubelet[2593]: I0320 17:46:06.531849 2593 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ldsjx\" (UniqueName: \"kubernetes.io/projected/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-kube-api-access-ldsjx\") on node \"localhost\" DevicePath \"\"" Mar 20 17:46:06.532027 kubelet[2593]: I0320 17:46:06.531856 2593 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 20 17:46:06.532502 containerd[1491]: time="2025-03-20T17:46:06.532470916Z" level=info msg="RemoveContainer for \"34acf0d96eb152433a941272d9288657d5fee9bf3ea33e50acc05f85e5a1190c\" returns successfully" Mar 20 17:46:06.532720 kubelet[2593]: I0320 17:46:06.532700 2593 scope.go:117] "RemoveContainer" containerID="6254c4e29362cbdd71ced3dcb112f3f7f174b247f5ae31e9d6f902e298096f76" Mar 20 17:46:06.534125 containerd[1491]: time="2025-03-20T17:46:06.534096577Z" level=info msg="RemoveContainer for \"6254c4e29362cbdd71ced3dcb112f3f7f174b247f5ae31e9d6f902e298096f76\"" Mar 20 17:46:06.536773 containerd[1491]: time="2025-03-20T17:46:06.536674651Z" level=info msg="RemoveContainer for \"6254c4e29362cbdd71ced3dcb112f3f7f174b247f5ae31e9d6f902e298096f76\" returns successfully" Mar 20 17:46:06.537051 kubelet[2593]: I0320 17:46:06.536858 2593 scope.go:117] "RemoveContainer" containerID="4bc1f61d3dadeb33194e6b4c9e773cdf169796c1b08a916547aa46fbdfb9eb0a" Mar 20 17:46:06.538122 containerd[1491]: time="2025-03-20T17:46:06.538093709Z" level=info msg="RemoveContainer for \"4bc1f61d3dadeb33194e6b4c9e773cdf169796c1b08a916547aa46fbdfb9eb0a\"" Mar 20 17:46:06.540555 containerd[1491]: time="2025-03-20T17:46:06.540524301Z" level=info msg="RemoveContainer for 
\"4bc1f61d3dadeb33194e6b4c9e773cdf169796c1b08a916547aa46fbdfb9eb0a\" returns successfully" Mar 20 17:46:06.541121 kubelet[2593]: I0320 17:46:06.540950 2593 scope.go:117] "RemoveContainer" containerID="928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666" Mar 20 17:46:06.541568 containerd[1491]: time="2025-03-20T17:46:06.541386752Z" level=error msg="ContainerStatus for \"928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666\": not found" Mar 20 17:46:06.541818 kubelet[2593]: E0320 17:46:06.541780 2593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666\": not found" containerID="928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666" Mar 20 17:46:06.542005 kubelet[2593]: I0320 17:46:06.541939 2593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666"} err="failed to get container status \"928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666\": rpc error: code = NotFound desc = an error occurred when try to find container \"928b68c59726e3bd22cd9e152933a10aaf59e06b4e212b117f3228b0988af666\": not found" Mar 20 17:46:06.542005 kubelet[2593]: I0320 17:46:06.541970 2593 scope.go:117] "RemoveContainer" containerID="62895be16a177935d37237c866af7928ba5dbc5aafeb71b4feb6de27e2bb840e" Mar 20 17:46:06.542962 containerd[1491]: time="2025-03-20T17:46:06.542924132Z" level=error msg="ContainerStatus for \"62895be16a177935d37237c866af7928ba5dbc5aafeb71b4feb6de27e2bb840e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"62895be16a177935d37237c866af7928ba5dbc5aafeb71b4feb6de27e2bb840e\": not found" Mar 20 17:46:06.543173 kubelet[2593]: E0320 17:46:06.543151 2593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"62895be16a177935d37237c866af7928ba5dbc5aafeb71b4feb6de27e2bb840e\": not found" containerID="62895be16a177935d37237c866af7928ba5dbc5aafeb71b4feb6de27e2bb840e" Mar 20 17:46:06.543370 kubelet[2593]: I0320 17:46:06.543210 2593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"62895be16a177935d37237c866af7928ba5dbc5aafeb71b4feb6de27e2bb840e"} err="failed to get container status \"62895be16a177935d37237c866af7928ba5dbc5aafeb71b4feb6de27e2bb840e\": rpc error: code = NotFound desc = an error occurred when try to find container \"62895be16a177935d37237c866af7928ba5dbc5aafeb71b4feb6de27e2bb840e\": not found" Mar 20 17:46:06.543370 kubelet[2593]: I0320 17:46:06.543229 2593 scope.go:117] "RemoveContainer" containerID="34acf0d96eb152433a941272d9288657d5fee9bf3ea33e50acc05f85e5a1190c" Mar 20 17:46:06.543652 containerd[1491]: time="2025-03-20T17:46:06.543556420Z" level=error msg="ContainerStatus for \"34acf0d96eb152433a941272d9288657d5fee9bf3ea33e50acc05f85e5a1190c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"34acf0d96eb152433a941272d9288657d5fee9bf3ea33e50acc05f85e5a1190c\": not found" Mar 20 17:46:06.544210 kubelet[2593]: E0320 17:46:06.543685 2593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"34acf0d96eb152433a941272d9288657d5fee9bf3ea33e50acc05f85e5a1190c\": not found" containerID="34acf0d96eb152433a941272d9288657d5fee9bf3ea33e50acc05f85e5a1190c" Mar 20 17:46:06.544275 kubelet[2593]: I0320 17:46:06.544232 2593 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"34acf0d96eb152433a941272d9288657d5fee9bf3ea33e50acc05f85e5a1190c"} err="failed to get container status \"34acf0d96eb152433a941272d9288657d5fee9bf3ea33e50acc05f85e5a1190c\": rpc error: code = NotFound desc = an error occurred when try to find container \"34acf0d96eb152433a941272d9288657d5fee9bf3ea33e50acc05f85e5a1190c\": not found" Mar 20 17:46:06.544275 kubelet[2593]: I0320 17:46:06.544264 2593 scope.go:117] "RemoveContainer" containerID="6254c4e29362cbdd71ced3dcb112f3f7f174b247f5ae31e9d6f902e298096f76" Mar 20 17:46:06.544657 containerd[1491]: time="2025-03-20T17:46:06.544605914Z" level=error msg="ContainerStatus for \"6254c4e29362cbdd71ced3dcb112f3f7f174b247f5ae31e9d6f902e298096f76\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6254c4e29362cbdd71ced3dcb112f3f7f174b247f5ae31e9d6f902e298096f76\": not found" Mar 20 17:46:06.544931 kubelet[2593]: E0320 17:46:06.544905 2593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6254c4e29362cbdd71ced3dcb112f3f7f174b247f5ae31e9d6f902e298096f76\": not found" containerID="6254c4e29362cbdd71ced3dcb112f3f7f174b247f5ae31e9d6f902e298096f76" Mar 20 17:46:06.545000 kubelet[2593]: I0320 17:46:06.544936 2593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6254c4e29362cbdd71ced3dcb112f3f7f174b247f5ae31e9d6f902e298096f76"} err="failed to get container status \"6254c4e29362cbdd71ced3dcb112f3f7f174b247f5ae31e9d6f902e298096f76\": rpc error: code = NotFound desc = an error occurred when try to find container \"6254c4e29362cbdd71ced3dcb112f3f7f174b247f5ae31e9d6f902e298096f76\": not found" Mar 20 17:46:06.545000 kubelet[2593]: I0320 17:46:06.544957 2593 scope.go:117] "RemoveContainer" containerID="4bc1f61d3dadeb33194e6b4c9e773cdf169796c1b08a916547aa46fbdfb9eb0a" Mar 20 17:46:06.545170 containerd[1491]: 
time="2025-03-20T17:46:06.545138961Z" level=error msg="ContainerStatus for \"4bc1f61d3dadeb33194e6b4c9e773cdf169796c1b08a916547aa46fbdfb9eb0a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4bc1f61d3dadeb33194e6b4c9e773cdf169796c1b08a916547aa46fbdfb9eb0a\": not found" Mar 20 17:46:06.545450 kubelet[2593]: E0320 17:46:06.545293 2593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4bc1f61d3dadeb33194e6b4c9e773cdf169796c1b08a916547aa46fbdfb9eb0a\": not found" containerID="4bc1f61d3dadeb33194e6b4c9e773cdf169796c1b08a916547aa46fbdfb9eb0a" Mar 20 17:46:06.545450 kubelet[2593]: I0320 17:46:06.545331 2593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4bc1f61d3dadeb33194e6b4c9e773cdf169796c1b08a916547aa46fbdfb9eb0a"} err="failed to get container status \"4bc1f61d3dadeb33194e6b4c9e773cdf169796c1b08a916547aa46fbdfb9eb0a\": rpc error: code = NotFound desc = an error occurred when try to find container \"4bc1f61d3dadeb33194e6b4c9e773cdf169796c1b08a916547aa46fbdfb9eb0a\": not found" Mar 20 17:46:07.255721 kubelet[2593]: I0320 17:46:07.254859 2593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea" path="/var/lib/kubelet/pods/d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea/volumes" Mar 20 17:46:07.255721 kubelet[2593]: I0320 17:46:07.255431 2593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db01354d-8723-443e-ba90-acfdb0e66bd9" path="/var/lib/kubelet/pods/db01354d-8723-443e-ba90-acfdb0e66bd9/volumes" Mar 20 17:46:07.284730 systemd[1]: var-lib-kubelet-pods-db01354d\x2d8723\x2d443e\x2dba90\x2dacfdb0e66bd9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl6jlj.mount: Deactivated successfully. 
Mar 20 17:46:07.284852 systemd[1]: var-lib-kubelet-pods-d1fa6893\x2d72e2\x2d4c5c\x2dbe2d\x2dab1ceb6877ea-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dldsjx.mount: Deactivated successfully. Mar 20 17:46:07.284910 systemd[1]: var-lib-kubelet-pods-d1fa6893\x2d72e2\x2d4c5c\x2dbe2d\x2dab1ceb6877ea-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 20 17:46:07.284970 systemd[1]: var-lib-kubelet-pods-d1fa6893\x2d72e2\x2d4c5c\x2dbe2d\x2dab1ceb6877ea-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 20 17:46:07.323818 kubelet[2593]: E0320 17:46:07.323785 2593 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 20 17:46:08.202208 sshd[4192]: Connection closed by 10.0.0.1 port 50134 Mar 20 17:46:08.202579 sshd-session[4189]: pam_unix(sshd:session): session closed for user core Mar 20 17:46:08.215076 systemd[1]: sshd@23-10.0.0.10:22-10.0.0.1:50134.service: Deactivated successfully. Mar 20 17:46:08.216598 systemd[1]: session-24.scope: Deactivated successfully. Mar 20 17:46:08.218643 systemd-logind[1463]: Session 24 logged out. Waiting for processes to exit. Mar 20 17:46:08.221444 systemd[1]: Started sshd@24-10.0.0.10:22-10.0.0.1:50136.service - OpenSSH per-connection server daemon (10.0.0.1:50136). Mar 20 17:46:08.222262 systemd-logind[1463]: Removed session 24. Mar 20 17:46:08.274061 sshd[4344]: Accepted publickey for core from 10.0.0.1 port 50136 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o Mar 20 17:46:08.275130 sshd-session[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:46:08.278878 systemd-logind[1463]: New session 25 of user core. Mar 20 17:46:08.290959 systemd[1]: Started session-25.scope - Session 25 of User core. 
Mar 20 17:46:09.099545 kubelet[2593]: I0320 17:46:09.099485 2593 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-20T17:46:09Z","lastTransitionTime":"2025-03-20T17:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 20 17:46:10.392029 sshd[4347]: Connection closed by 10.0.0.1 port 50136 Mar 20 17:46:10.392715 sshd-session[4344]: pam_unix(sshd:session): session closed for user core Mar 20 17:46:10.404477 systemd[1]: sshd@24-10.0.0.10:22-10.0.0.1:50136.service: Deactivated successfully. Mar 20 17:46:10.407301 systemd[1]: session-25.scope: Deactivated successfully. Mar 20 17:46:10.407575 systemd[1]: session-25.scope: Consumed 2.017s CPU time, 26.4M memory peak. Mar 20 17:46:10.409870 systemd-logind[1463]: Session 25 logged out. Waiting for processes to exit. Mar 20 17:46:10.410378 kubelet[2593]: E0320 17:46:10.409697 2593 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea" containerName="apply-sysctl-overwrites" Mar 20 17:46:10.410378 kubelet[2593]: E0320 17:46:10.409946 2593 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="db01354d-8723-443e-ba90-acfdb0e66bd9" containerName="cilium-operator" Mar 20 17:46:10.410378 kubelet[2593]: E0320 17:46:10.409955 2593 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea" containerName="cilium-agent" Mar 20 17:46:10.410378 kubelet[2593]: E0320 17:46:10.409962 2593 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea" containerName="mount-cgroup" Mar 20 17:46:10.410378 kubelet[2593]: E0320 17:46:10.409968 2593 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea" 
containerName="clean-cilium-state" Mar 20 17:46:10.410378 kubelet[2593]: E0320 17:46:10.409987 2593 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea" containerName="mount-bpf-fs" Mar 20 17:46:10.410378 kubelet[2593]: I0320 17:46:10.410015 2593 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1fa6893-72e2-4c5c-be2d-ab1ceb6877ea" containerName="cilium-agent" Mar 20 17:46:10.410378 kubelet[2593]: I0320 17:46:10.410023 2593 memory_manager.go:354] "RemoveStaleState removing state" podUID="db01354d-8723-443e-ba90-acfdb0e66bd9" containerName="cilium-operator" Mar 20 17:46:10.414153 systemd[1]: Started sshd@25-10.0.0.10:22-10.0.0.1:50146.service - OpenSSH per-connection server daemon (10.0.0.1:50146). Mar 20 17:46:10.417171 systemd-logind[1463]: Removed session 25. Mar 20 17:46:10.434200 systemd[1]: Created slice kubepods-burstable-pod73717201_8e22_4196_a192_e181ebf3567e.slice - libcontainer container kubepods-burstable-pod73717201_8e22_4196_a192_e181ebf3567e.slice. 
Mar 20 17:46:10.452780 kubelet[2593]: I0320 17:46:10.452724 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/73717201-8e22-4196-a192-e181ebf3567e-cni-path\") pod \"cilium-r4dcd\" (UID: \"73717201-8e22-4196-a192-e181ebf3567e\") " pod="kube-system/cilium-r4dcd" Mar 20 17:46:10.452780 kubelet[2593]: I0320 17:46:10.452770 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/73717201-8e22-4196-a192-e181ebf3567e-cilium-config-path\") pod \"cilium-r4dcd\" (UID: \"73717201-8e22-4196-a192-e181ebf3567e\") " pod="kube-system/cilium-r4dcd" Mar 20 17:46:10.452780 kubelet[2593]: I0320 17:46:10.452788 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/73717201-8e22-4196-a192-e181ebf3567e-bpf-maps\") pod \"cilium-r4dcd\" (UID: \"73717201-8e22-4196-a192-e181ebf3567e\") " pod="kube-system/cilium-r4dcd" Mar 20 17:46:10.452965 kubelet[2593]: I0320 17:46:10.452806 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73717201-8e22-4196-a192-e181ebf3567e-xtables-lock\") pod \"cilium-r4dcd\" (UID: \"73717201-8e22-4196-a192-e181ebf3567e\") " pod="kube-system/cilium-r4dcd" Mar 20 17:46:10.452965 kubelet[2593]: I0320 17:46:10.452838 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/73717201-8e22-4196-a192-e181ebf3567e-cilium-run\") pod \"cilium-r4dcd\" (UID: \"73717201-8e22-4196-a192-e181ebf3567e\") " pod="kube-system/cilium-r4dcd" Mar 20 17:46:10.452965 kubelet[2593]: I0320 17:46:10.452853 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/73717201-8e22-4196-a192-e181ebf3567e-cilium-cgroup\") pod \"cilium-r4dcd\" (UID: \"73717201-8e22-4196-a192-e181ebf3567e\") " pod="kube-system/cilium-r4dcd" Mar 20 17:46:10.452965 kubelet[2593]: I0320 17:46:10.452869 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/73717201-8e22-4196-a192-e181ebf3567e-hostproc\") pod \"cilium-r4dcd\" (UID: \"73717201-8e22-4196-a192-e181ebf3567e\") " pod="kube-system/cilium-r4dcd" Mar 20 17:46:10.452965 kubelet[2593]: I0320 17:46:10.452886 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/73717201-8e22-4196-a192-e181ebf3567e-etc-cni-netd\") pod \"cilium-r4dcd\" (UID: \"73717201-8e22-4196-a192-e181ebf3567e\") " pod="kube-system/cilium-r4dcd" Mar 20 17:46:10.452965 kubelet[2593]: I0320 17:46:10.452903 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/73717201-8e22-4196-a192-e181ebf3567e-clustermesh-secrets\") pod \"cilium-r4dcd\" (UID: \"73717201-8e22-4196-a192-e181ebf3567e\") " pod="kube-system/cilium-r4dcd" Mar 20 17:46:10.453100 kubelet[2593]: I0320 17:46:10.452918 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/73717201-8e22-4196-a192-e181ebf3567e-hubble-tls\") pod \"cilium-r4dcd\" (UID: \"73717201-8e22-4196-a192-e181ebf3567e\") " pod="kube-system/cilium-r4dcd" Mar 20 17:46:10.453100 kubelet[2593]: I0320 17:46:10.452934 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q7l9\" (UniqueName: \"kubernetes.io/projected/73717201-8e22-4196-a192-e181ebf3567e-kube-api-access-5q7l9\") pod 
\"cilium-r4dcd\" (UID: \"73717201-8e22-4196-a192-e181ebf3567e\") " pod="kube-system/cilium-r4dcd" Mar 20 17:46:10.453100 kubelet[2593]: I0320 17:46:10.452951 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73717201-8e22-4196-a192-e181ebf3567e-lib-modules\") pod \"cilium-r4dcd\" (UID: \"73717201-8e22-4196-a192-e181ebf3567e\") " pod="kube-system/cilium-r4dcd" Mar 20 17:46:10.453100 kubelet[2593]: I0320 17:46:10.452967 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/73717201-8e22-4196-a192-e181ebf3567e-cilium-ipsec-secrets\") pod \"cilium-r4dcd\" (UID: \"73717201-8e22-4196-a192-e181ebf3567e\") " pod="kube-system/cilium-r4dcd" Mar 20 17:46:10.453100 kubelet[2593]: I0320 17:46:10.452984 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/73717201-8e22-4196-a192-e181ebf3567e-host-proc-sys-net\") pod \"cilium-r4dcd\" (UID: \"73717201-8e22-4196-a192-e181ebf3567e\") " pod="kube-system/cilium-r4dcd" Mar 20 17:46:10.453194 kubelet[2593]: I0320 17:46:10.453000 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/73717201-8e22-4196-a192-e181ebf3567e-host-proc-sys-kernel\") pod \"cilium-r4dcd\" (UID: \"73717201-8e22-4196-a192-e181ebf3567e\") " pod="kube-system/cilium-r4dcd" Mar 20 17:46:10.489291 sshd[4360]: Accepted publickey for core from 10.0.0.1 port 50146 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o Mar 20 17:46:10.490635 sshd-session[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:46:10.494579 systemd-logind[1463]: New session 26 of user core. 
Mar 20 17:46:10.509015 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 20 17:46:10.562340 sshd[4363]: Connection closed by 10.0.0.1 port 50146 Mar 20 17:46:10.563748 sshd-session[4360]: pam_unix(sshd:session): session closed for user core Mar 20 17:46:10.581944 systemd[1]: sshd@25-10.0.0.10:22-10.0.0.1:50146.service: Deactivated successfully. Mar 20 17:46:10.583676 systemd[1]: session-26.scope: Deactivated successfully. Mar 20 17:46:10.585107 systemd-logind[1463]: Session 26 logged out. Waiting for processes to exit. Mar 20 17:46:10.587370 systemd[1]: Started sshd@26-10.0.0.10:22-10.0.0.1:50148.service - OpenSSH per-connection server daemon (10.0.0.1:50148). Mar 20 17:46:10.588103 systemd-logind[1463]: Removed session 26. Mar 20 17:46:10.647423 sshd[4374]: Accepted publickey for core from 10.0.0.1 port 50148 ssh2: RSA SHA256:5INvQ+AMoxUEAMpsPBJHVEmzjRKBHHiGaLqk69aAF2o Mar 20 17:46:10.648617 sshd-session[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 20 17:46:10.653064 systemd-logind[1463]: New session 27 of user core. Mar 20 17:46:10.665973 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 20 17:46:10.737699 containerd[1491]: time="2025-03-20T17:46:10.737658472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r4dcd,Uid:73717201-8e22-4196-a192-e181ebf3567e,Namespace:kube-system,Attempt:0,}" Mar 20 17:46:10.761024 containerd[1491]: time="2025-03-20T17:46:10.760975632Z" level=info msg="connecting to shim 3d99e3e1e30982581aa2f08a404e2714ac89e845b31221d0aa061b8741d5f5d3" address="unix:///run/containerd/s/b01e1fbd9d3d5d6652618f5ff74f50cd8d35b17b01ddd49f010d31a26114d3c8" namespace=k8s.io protocol=ttrpc version=3 Mar 20 17:46:10.799065 systemd[1]: Started cri-containerd-3d99e3e1e30982581aa2f08a404e2714ac89e845b31221d0aa061b8741d5f5d3.scope - libcontainer container 3d99e3e1e30982581aa2f08a404e2714ac89e845b31221d0aa061b8741d5f5d3. 
Mar 20 17:46:10.830635 containerd[1491]: time="2025-03-20T17:46:10.830579988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r4dcd,Uid:73717201-8e22-4196-a192-e181ebf3567e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d99e3e1e30982581aa2f08a404e2714ac89e845b31221d0aa061b8741d5f5d3\"" Mar 20 17:46:10.832953 containerd[1491]: time="2025-03-20T17:46:10.832917096Z" level=info msg="CreateContainer within sandbox \"3d99e3e1e30982581aa2f08a404e2714ac89e845b31221d0aa061b8741d5f5d3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 20 17:46:10.883433 containerd[1491]: time="2025-03-20T17:46:10.883370902Z" level=info msg="Container 1649cd8fb40f7693b430c8f63523e92187d1a4efb51c55cf4e15e032e1a3487e: CDI devices from CRI Config.CDIDevices: []" Mar 20 17:46:10.889287 containerd[1491]: time="2025-03-20T17:46:10.889241973Z" level=info msg="CreateContainer within sandbox \"3d99e3e1e30982581aa2f08a404e2714ac89e845b31221d0aa061b8741d5f5d3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1649cd8fb40f7693b430c8f63523e92187d1a4efb51c55cf4e15e032e1a3487e\"" Mar 20 17:46:10.889830 containerd[1491]: time="2025-03-20T17:46:10.889786739Z" level=info msg="StartContainer for \"1649cd8fb40f7693b430c8f63523e92187d1a4efb51c55cf4e15e032e1a3487e\"" Mar 20 17:46:10.890613 containerd[1491]: time="2025-03-20T17:46:10.890578349Z" level=info msg="connecting to shim 1649cd8fb40f7693b430c8f63523e92187d1a4efb51c55cf4e15e032e1a3487e" address="unix:///run/containerd/s/b01e1fbd9d3d5d6652618f5ff74f50cd8d35b17b01ddd49f010d31a26114d3c8" protocol=ttrpc version=3 Mar 20 17:46:10.912002 systemd[1]: Started cri-containerd-1649cd8fb40f7693b430c8f63523e92187d1a4efb51c55cf4e15e032e1a3487e.scope - libcontainer container 1649cd8fb40f7693b430c8f63523e92187d1a4efb51c55cf4e15e032e1a3487e. 
Mar 20 17:46:10.939019 containerd[1491]: time="2025-03-20T17:46:10.938908209Z" level=info msg="StartContainer for \"1649cd8fb40f7693b430c8f63523e92187d1a4efb51c55cf4e15e032e1a3487e\" returns successfully" Mar 20 17:46:10.950298 systemd[1]: cri-containerd-1649cd8fb40f7693b430c8f63523e92187d1a4efb51c55cf4e15e032e1a3487e.scope: Deactivated successfully. Mar 20 17:46:10.954374 containerd[1491]: time="2025-03-20T17:46:10.954327794Z" level=info msg="received exit event container_id:\"1649cd8fb40f7693b430c8f63523e92187d1a4efb51c55cf4e15e032e1a3487e\" id:\"1649cd8fb40f7693b430c8f63523e92187d1a4efb51c55cf4e15e032e1a3487e\" pid:4440 exited_at:{seconds:1742492770 nanos:954037551}" Mar 20 17:46:10.954463 containerd[1491]: time="2025-03-20T17:46:10.954411515Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1649cd8fb40f7693b430c8f63523e92187d1a4efb51c55cf4e15e032e1a3487e\" id:\"1649cd8fb40f7693b430c8f63523e92187d1a4efb51c55cf4e15e032e1a3487e\" pid:4440 exited_at:{seconds:1742492770 nanos:954037551}" Mar 20 17:46:11.505940 containerd[1491]: time="2025-03-20T17:46:11.505895182Z" level=info msg="CreateContainer within sandbox \"3d99e3e1e30982581aa2f08a404e2714ac89e845b31221d0aa061b8741d5f5d3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 20 17:46:11.515486 containerd[1491]: time="2025-03-20T17:46:11.515433534Z" level=info msg="Container 4e56619fde31400fd13af40bccc01a8a95e192e4fd3b2b918d49385afb11e72e: CDI devices from CRI Config.CDIDevices: []" Mar 20 17:46:11.521693 containerd[1491]: time="2025-03-20T17:46:11.521641127Z" level=info msg="CreateContainer within sandbox \"3d99e3e1e30982581aa2f08a404e2714ac89e845b31221d0aa061b8741d5f5d3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4e56619fde31400fd13af40bccc01a8a95e192e4fd3b2b918d49385afb11e72e\"" Mar 20 17:46:11.522233 containerd[1491]: time="2025-03-20T17:46:11.522206574Z" level=info msg="StartContainer for 
\"4e56619fde31400fd13af40bccc01a8a95e192e4fd3b2b918d49385afb11e72e\"" Mar 20 17:46:11.523070 containerd[1491]: time="2025-03-20T17:46:11.523046903Z" level=info msg="connecting to shim 4e56619fde31400fd13af40bccc01a8a95e192e4fd3b2b918d49385afb11e72e" address="unix:///run/containerd/s/b01e1fbd9d3d5d6652618f5ff74f50cd8d35b17b01ddd49f010d31a26114d3c8" protocol=ttrpc version=3 Mar 20 17:46:11.544006 systemd[1]: Started cri-containerd-4e56619fde31400fd13af40bccc01a8a95e192e4fd3b2b918d49385afb11e72e.scope - libcontainer container 4e56619fde31400fd13af40bccc01a8a95e192e4fd3b2b918d49385afb11e72e. Mar 20 17:46:11.583161 systemd[1]: cri-containerd-4e56619fde31400fd13af40bccc01a8a95e192e4fd3b2b918d49385afb11e72e.scope: Deactivated successfully. Mar 20 17:46:11.584027 containerd[1491]: time="2025-03-20T17:46:11.583803179Z" level=info msg="received exit event container_id:\"4e56619fde31400fd13af40bccc01a8a95e192e4fd3b2b918d49385afb11e72e\" id:\"4e56619fde31400fd13af40bccc01a8a95e192e4fd3b2b918d49385afb11e72e\" pid:4488 exited_at:{seconds:1742492771 nanos:583605137}" Mar 20 17:46:11.587609 containerd[1491]: time="2025-03-20T17:46:11.587426742Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e56619fde31400fd13af40bccc01a8a95e192e4fd3b2b918d49385afb11e72e\" id:\"4e56619fde31400fd13af40bccc01a8a95e192e4fd3b2b918d49385afb11e72e\" pid:4488 exited_at:{seconds:1742492771 nanos:583605137}" Mar 20 17:46:11.597291 containerd[1491]: time="2025-03-20T17:46:11.597248297Z" level=info msg="StartContainer for \"4e56619fde31400fd13af40bccc01a8a95e192e4fd3b2b918d49385afb11e72e\" returns successfully" Mar 20 17:46:11.611089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e56619fde31400fd13af40bccc01a8a95e192e4fd3b2b918d49385afb11e72e-rootfs.mount: Deactivated successfully. 
Mar 20 17:46:12.325641 kubelet[2593]: E0320 17:46:12.325602 2593 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 20 17:46:12.511108 containerd[1491]: time="2025-03-20T17:46:12.510930619Z" level=info msg="CreateContainer within sandbox \"3d99e3e1e30982581aa2f08a404e2714ac89e845b31221d0aa061b8741d5f5d3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 20 17:46:12.532191 containerd[1491]: time="2025-03-20T17:46:12.531053172Z" level=info msg="Container cce54697054c3bf29a55eac685571083ce248a3a8a8162195d3399863088c0a1: CDI devices from CRI Config.CDIDevices: []"
Mar 20 17:46:12.541269 containerd[1491]: time="2025-03-20T17:46:12.541222809Z" level=info msg="CreateContainer within sandbox \"3d99e3e1e30982581aa2f08a404e2714ac89e845b31221d0aa061b8741d5f5d3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cce54697054c3bf29a55eac685571083ce248a3a8a8162195d3399863088c0a1\""
Mar 20 17:46:12.545789 containerd[1491]: time="2025-03-20T17:46:12.543482275Z" level=info msg="StartContainer for \"cce54697054c3bf29a55eac685571083ce248a3a8a8162195d3399863088c0a1\""
Mar 20 17:46:12.545789 containerd[1491]: time="2025-03-20T17:46:12.544861811Z" level=info msg="connecting to shim cce54697054c3bf29a55eac685571083ce248a3a8a8162195d3399863088c0a1" address="unix:///run/containerd/s/b01e1fbd9d3d5d6652618f5ff74f50cd8d35b17b01ddd49f010d31a26114d3c8" protocol=ttrpc version=3
Mar 20 17:46:12.585201 systemd[1]: Started cri-containerd-cce54697054c3bf29a55eac685571083ce248a3a8a8162195d3399863088c0a1.scope - libcontainer container cce54697054c3bf29a55eac685571083ce248a3a8a8162195d3399863088c0a1.
Mar 20 17:46:12.625261 containerd[1491]: time="2025-03-20T17:46:12.625149618Z" level=info msg="StartContainer for \"cce54697054c3bf29a55eac685571083ce248a3a8a8162195d3399863088c0a1\" returns successfully"
Mar 20 17:46:12.626929 systemd[1]: cri-containerd-cce54697054c3bf29a55eac685571083ce248a3a8a8162195d3399863088c0a1.scope: Deactivated successfully.
Mar 20 17:46:12.629480 containerd[1491]: time="2025-03-20T17:46:12.629338306Z" level=info msg="received exit event container_id:\"cce54697054c3bf29a55eac685571083ce248a3a8a8162195d3399863088c0a1\" id:\"cce54697054c3bf29a55eac685571083ce248a3a8a8162195d3399863088c0a1\" pid:4532 exited_at:{seconds:1742492772 nanos:629128384}"
Mar 20 17:46:12.629480 containerd[1491]: time="2025-03-20T17:46:12.629387627Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cce54697054c3bf29a55eac685571083ce248a3a8a8162195d3399863088c0a1\" id:\"cce54697054c3bf29a55eac685571083ce248a3a8a8162195d3399863088c0a1\" pid:4532 exited_at:{seconds:1742492772 nanos:629128384}"
Mar 20 17:46:12.646883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cce54697054c3bf29a55eac685571083ce248a3a8a8162195d3399863088c0a1-rootfs.mount: Deactivated successfully.
Mar 20 17:46:13.514111 containerd[1491]: time="2025-03-20T17:46:13.513002036Z" level=info msg="CreateContainer within sandbox \"3d99e3e1e30982581aa2f08a404e2714ac89e845b31221d0aa061b8741d5f5d3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 20 17:46:13.519847 containerd[1491]: time="2025-03-20T17:46:13.519492590Z" level=info msg="Container 89c5428c3c2019aa99dadadda9f4a703d20a6e4cd2eb47fdeb3eab412ba51805: CDI devices from CRI Config.CDIDevices: []"
Mar 20 17:46:13.528480 containerd[1491]: time="2025-03-20T17:46:13.528233409Z" level=info msg="CreateContainer within sandbox \"3d99e3e1e30982581aa2f08a404e2714ac89e845b31221d0aa061b8741d5f5d3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"89c5428c3c2019aa99dadadda9f4a703d20a6e4cd2eb47fdeb3eab412ba51805\""
Mar 20 17:46:13.531699 containerd[1491]: time="2025-03-20T17:46:13.530586795Z" level=info msg="StartContainer for \"89c5428c3c2019aa99dadadda9f4a703d20a6e4cd2eb47fdeb3eab412ba51805\""
Mar 20 17:46:13.531699 containerd[1491]: time="2025-03-20T17:46:13.531415525Z" level=info msg="connecting to shim 89c5428c3c2019aa99dadadda9f4a703d20a6e4cd2eb47fdeb3eab412ba51805" address="unix:///run/containerd/s/b01e1fbd9d3d5d6652618f5ff74f50cd8d35b17b01ddd49f010d31a26114d3c8" protocol=ttrpc version=3
Mar 20 17:46:13.551980 systemd[1]: Started cri-containerd-89c5428c3c2019aa99dadadda9f4a703d20a6e4cd2eb47fdeb3eab412ba51805.scope - libcontainer container 89c5428c3c2019aa99dadadda9f4a703d20a6e4cd2eb47fdeb3eab412ba51805.
Mar 20 17:46:13.585063 systemd[1]: cri-containerd-89c5428c3c2019aa99dadadda9f4a703d20a6e4cd2eb47fdeb3eab412ba51805.scope: Deactivated successfully.
Mar 20 17:46:13.586836 containerd[1491]: time="2025-03-20T17:46:13.585467217Z" level=info msg="TaskExit event in podsandbox handler container_id:\"89c5428c3c2019aa99dadadda9f4a703d20a6e4cd2eb47fdeb3eab412ba51805\" id:\"89c5428c3c2019aa99dadadda9f4a703d20a6e4cd2eb47fdeb3eab412ba51805\" pid:4570 exited_at:{seconds:1742492773 nanos:585121093}"
Mar 20 17:46:13.588499 containerd[1491]: time="2025-03-20T17:46:13.588358450Z" level=info msg="received exit event container_id:\"89c5428c3c2019aa99dadadda9f4a703d20a6e4cd2eb47fdeb3eab412ba51805\" id:\"89c5428c3c2019aa99dadadda9f4a703d20a6e4cd2eb47fdeb3eab412ba51805\" pid:4570 exited_at:{seconds:1742492773 nanos:585121093}"
Mar 20 17:46:13.594841 containerd[1491]: time="2025-03-20T17:46:13.594726442Z" level=info msg="StartContainer for \"89c5428c3c2019aa99dadadda9f4a703d20a6e4cd2eb47fdeb3eab412ba51805\" returns successfully"
Mar 20 17:46:13.605051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89c5428c3c2019aa99dadadda9f4a703d20a6e4cd2eb47fdeb3eab412ba51805-rootfs.mount: Deactivated successfully.
Mar 20 17:46:14.517652 containerd[1491]: time="2025-03-20T17:46:14.517613223Z" level=info msg="CreateContainer within sandbox \"3d99e3e1e30982581aa2f08a404e2714ac89e845b31221d0aa061b8741d5f5d3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 20 17:46:14.540144 containerd[1491]: time="2025-03-20T17:46:14.537859928Z" level=info msg="Container fd3979fd708d66a4cacf20d3ce2fe0be6264b8a0c5733e0d6f257e6c76bd2544: CDI devices from CRI Config.CDIDevices: []"
Mar 20 17:46:14.539946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2332800451.mount: Deactivated successfully.
Mar 20 17:46:14.542435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1251971837.mount: Deactivated successfully.
Mar 20 17:46:14.550421 containerd[1491]: time="2025-03-20T17:46:14.550280626Z" level=info msg="CreateContainer within sandbox \"3d99e3e1e30982581aa2f08a404e2714ac89e845b31221d0aa061b8741d5f5d3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fd3979fd708d66a4cacf20d3ce2fe0be6264b8a0c5733e0d6f257e6c76bd2544\""
Mar 20 17:46:14.550942 containerd[1491]: time="2025-03-20T17:46:14.550920153Z" level=info msg="StartContainer for \"fd3979fd708d66a4cacf20d3ce2fe0be6264b8a0c5733e0d6f257e6c76bd2544\""
Mar 20 17:46:14.551951 containerd[1491]: time="2025-03-20T17:46:14.551885083Z" level=info msg="connecting to shim fd3979fd708d66a4cacf20d3ce2fe0be6264b8a0c5733e0d6f257e6c76bd2544" address="unix:///run/containerd/s/b01e1fbd9d3d5d6652618f5ff74f50cd8d35b17b01ddd49f010d31a26114d3c8" protocol=ttrpc version=3
Mar 20 17:46:14.576018 systemd[1]: Started cri-containerd-fd3979fd708d66a4cacf20d3ce2fe0be6264b8a0c5733e0d6f257e6c76bd2544.scope - libcontainer container fd3979fd708d66a4cacf20d3ce2fe0be6264b8a0c5733e0d6f257e6c76bd2544.
Mar 20 17:46:14.608752 containerd[1491]: time="2025-03-20T17:46:14.608699195Z" level=info msg="StartContainer for \"fd3979fd708d66a4cacf20d3ce2fe0be6264b8a0c5733e0d6f257e6c76bd2544\" returns successfully"
Mar 20 17:46:14.662586 containerd[1491]: time="2025-03-20T17:46:14.662544353Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd3979fd708d66a4cacf20d3ce2fe0be6264b8a0c5733e0d6f257e6c76bd2544\" id:\"a5d76c48f5dbebe49d8c42f7a562ebd038310820f95dd9c238bae815347a75b5\" pid:4638 exited_at:{seconds:1742492774 nanos:662236109}"
Mar 20 17:46:14.936855 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 20 17:46:15.536772 kubelet[2593]: I0320 17:46:15.536693 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r4dcd" podStartSLOduration=5.536660434 podStartE2EDuration="5.536660434s" podCreationTimestamp="2025-03-20 17:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-20 17:46:15.53632627 +0000 UTC m=+88.361584969" watchObservedRunningTime="2025-03-20 17:46:15.536660434 +0000 UTC m=+88.361919053"
Mar 20 17:46:17.032302 containerd[1491]: time="2025-03-20T17:46:17.032257447Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd3979fd708d66a4cacf20d3ce2fe0be6264b8a0c5733e0d6f257e6c76bd2544\" id:\"90b24ff65368fa854e42fffcf55a06b16b419613ceb2be91d4acce2ec3fefa6d\" pid:4917 exit_status:1 exited_at:{seconds:1742492777 nanos:31959284}"
Mar 20 17:46:17.044355 kubelet[2593]: E0320 17:46:17.043948 2593 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:40298->127.0.0.1:42425: write tcp 127.0.0.1:40298->127.0.0.1:42425: write: broken pipe
Mar 20 17:46:17.821431 systemd-networkd[1410]: lxc_health: Link UP
Mar 20 17:46:17.833759 systemd-networkd[1410]: lxc_health: Gained carrier
Mar 20 17:46:19.141532 containerd[1491]: time="2025-03-20T17:46:19.141470727Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd3979fd708d66a4cacf20d3ce2fe0be6264b8a0c5733e0d6f257e6c76bd2544\" id:\"ba7300db27cfcef9723f0a89b358316489af44d80739a8067da2c17059f6d3fd\" pid:5176 exited_at:{seconds:1742492779 nanos:141182485}"
Mar 20 17:46:19.226033 systemd-networkd[1410]: lxc_health: Gained IPv6LL
Mar 20 17:46:21.255194 containerd[1491]: time="2025-03-20T17:46:21.255151485Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd3979fd708d66a4cacf20d3ce2fe0be6264b8a0c5733e0d6f257e6c76bd2544\" id:\"8ddf364d0117d25658206b99fcb1184712a841a58d89ebd1c6b4417b2131620e\" pid:5203 exited_at:{seconds:1742492781 nanos:254856562}"
Mar 20 17:46:23.376983 containerd[1491]: time="2025-03-20T17:46:23.376932081Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd3979fd708d66a4cacf20d3ce2fe0be6264b8a0c5733e0d6f257e6c76bd2544\" id:\"ffedd6e617ab8a19dd5c59626b8a717470e0ce3a61cb0d8feabc7b79731d0c36\" pid:5235 exited_at:{seconds:1742492783 nanos:376071713}"
Mar 20 17:46:23.382873 sshd[4377]: Connection closed by 10.0.0.1 port 50148
Mar 20 17:46:23.384782 sshd-session[4374]: pam_unix(sshd:session): session closed for user core
Mar 20 17:46:23.388412 systemd[1]: sshd@26-10.0.0.10:22-10.0.0.1:50148.service: Deactivated successfully.
Mar 20 17:46:23.390411 systemd[1]: session-27.scope: Deactivated successfully.
Mar 20 17:46:23.391066 systemd-logind[1463]: Session 27 logged out. Waiting for processes to exit.
Mar 20 17:46:23.392312 systemd-logind[1463]: Removed session 27.