May 13 23:36:39.901294 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 13 23:36:39.901315 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Tue May 13 22:07:09 -00 2025
May 13 23:36:39.901325 kernel: KASLR enabled
May 13 23:36:39.901330 kernel: efi: EFI v2.7 by EDK II
May 13 23:36:39.901345 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
May 13 23:36:39.901350 kernel: random: crng init done
May 13 23:36:39.901357 kernel: secureboot: Secure boot disabled
May 13 23:36:39.901363 kernel: ACPI: Early table checksum verification disabled
May 13 23:36:39.901369 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
May 13 23:36:39.901378 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 13 23:36:39.901384 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:36:39.901390 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:36:39.901395 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:36:39.901401 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:36:39.901408 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:36:39.901416 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:36:39.901422 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:36:39.901428 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:36:39.901434 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 23:36:39.901440 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 13 23:36:39.901447 kernel: NUMA: Failed to initialise from firmware
May 13 23:36:39.901453 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 13 23:36:39.901459 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
May 13 23:36:39.901465 kernel: Zone ranges:
May 13 23:36:39.901471 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 13 23:36:39.901479 kernel: DMA32 empty
May 13 23:36:39.901485 kernel: Normal empty
May 13 23:36:39.901491 kernel: Movable zone start for each node
May 13 23:36:39.901497 kernel: Early memory node ranges
May 13 23:36:39.901503 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
May 13 23:36:39.901509 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
May 13 23:36:39.901515 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
May 13 23:36:39.901521 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 13 23:36:39.901528 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 13 23:36:39.901534 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 13 23:36:39.901540 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 13 23:36:39.901546 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 13 23:36:39.901553 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 13 23:36:39.901560 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 13 23:36:39.901566 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 13 23:36:39.901575 kernel: psci: probing for conduit method from ACPI.
May 13 23:36:39.901581 kernel: psci: PSCIv1.1 detected in firmware.
May 13 23:36:39.901588 kernel: psci: Using standard PSCI v0.2 function IDs
May 13 23:36:39.901595 kernel: psci: Trusted OS migration not required
May 13 23:36:39.901602 kernel: psci: SMC Calling Convention v1.1
May 13 23:36:39.901609 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 13 23:36:39.901615 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 13 23:36:39.901622 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 13 23:36:39.901629 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 13 23:36:39.901635 kernel: Detected PIPT I-cache on CPU0
May 13 23:36:39.901642 kernel: CPU features: detected: GIC system register CPU interface
May 13 23:36:39.901648 kernel: CPU features: detected: Hardware dirty bit management
May 13 23:36:39.901655 kernel: CPU features: detected: Spectre-v4
May 13 23:36:39.901663 kernel: CPU features: detected: Spectre-BHB
May 13 23:36:39.901669 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 13 23:36:39.901676 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 13 23:36:39.901683 kernel: CPU features: detected: ARM erratum 1418040
May 13 23:36:39.901689 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 13 23:36:39.901695 kernel: alternatives: applying boot alternatives
May 13 23:36:39.901703 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2ebbcf70ac37c458a177d0106bebb5016b2973cc84d1c0207dc60f43e2803902
May 13 23:36:39.901710 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 23:36:39.901717 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 23:36:39.901724 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 23:36:39.901730 kernel: Fallback order for Node 0: 0
May 13 23:36:39.901738 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 13 23:36:39.901745 kernel: Policy zone: DMA
May 13 23:36:39.901751 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 23:36:39.901757 kernel: software IO TLB: area num 4.
May 13 23:36:39.901764 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 13 23:36:39.901771 kernel: Memory: 2387476K/2572288K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38336K init, 897K bss, 184812K reserved, 0K cma-reserved)
May 13 23:36:39.901778 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 23:36:39.901784 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 23:36:39.901791 kernel: rcu: RCU event tracing is enabled.
May 13 23:36:39.901809 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 23:36:39.901817 kernel: Trampoline variant of Tasks RCU enabled.
May 13 23:36:39.901824 kernel: Tracing variant of Tasks RCU enabled.
May 13 23:36:39.901833 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 23:36:39.901840 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 23:36:39.901846 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 13 23:36:39.901853 kernel: GICv3: 256 SPIs implemented
May 13 23:36:39.901859 kernel: GICv3: 0 Extended SPIs implemented
May 13 23:36:39.901866 kernel: Root IRQ handler: gic_handle_irq
May 13 23:36:39.901872 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 13 23:36:39.901878 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 13 23:36:39.901885 kernel: ITS [mem 0x08080000-0x0809ffff]
May 13 23:36:39.901891 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 13 23:36:39.901898 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 13 23:36:39.901906 kernel: GICv3: using LPI property table @0x00000000400f0000
May 13 23:36:39.901913 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 13 23:36:39.901919 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 23:36:39.901926 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:36:39.901933 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 13 23:36:39.901939 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 13 23:36:39.901946 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 13 23:36:39.901953 kernel: arm-pv: using stolen time PV
May 13 23:36:39.901960 kernel: Console: colour dummy device 80x25
May 13 23:36:39.901966 kernel: ACPI: Core revision 20230628
May 13 23:36:39.901973 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 13 23:36:39.901981 kernel: pid_max: default: 32768 minimum: 301
May 13 23:36:39.901988 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 23:36:39.901995 kernel: landlock: Up and running.
May 13 23:36:39.902001 kernel: SELinux: Initializing.
May 13 23:36:39.902008 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 23:36:39.902015 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 23:36:39.902022 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 13 23:36:39.902029 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 23:36:39.902035 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 23:36:39.902044 kernel: rcu: Hierarchical SRCU implementation.
May 13 23:36:39.902051 kernel: rcu: Max phase no-delay instances is 400.
May 13 23:36:39.902057 kernel: Platform MSI: ITS@0x8080000 domain created
May 13 23:36:39.902064 kernel: PCI/MSI: ITS@0x8080000 domain created
May 13 23:36:39.902071 kernel: Remapping and enabling EFI services.
May 13 23:36:39.902077 kernel: smp: Bringing up secondary CPUs ...
May 13 23:36:39.902084 kernel: Detected PIPT I-cache on CPU1
May 13 23:36:39.902091 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 13 23:36:39.902098 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 13 23:36:39.902106 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:36:39.902113 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 13 23:36:39.902124 kernel: Detected PIPT I-cache on CPU2
May 13 23:36:39.902133 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 13 23:36:39.902140 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 13 23:36:39.902148 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:36:39.902154 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 13 23:36:39.902161 kernel: Detected PIPT I-cache on CPU3
May 13 23:36:39.902169 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 13 23:36:39.902176 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 13 23:36:39.902184 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:36:39.902191 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 13 23:36:39.902198 kernel: smp: Brought up 1 node, 4 CPUs
May 13 23:36:39.902205 kernel: SMP: Total of 4 processors activated.
May 13 23:36:39.902212 kernel: CPU features: detected: 32-bit EL0 Support
May 13 23:36:39.902219 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 13 23:36:39.902227 kernel: CPU features: detected: Common not Private translations
May 13 23:36:39.902235 kernel: CPU features: detected: CRC32 instructions
May 13 23:36:39.902242 kernel: CPU features: detected: Enhanced Virtualization Traps
May 13 23:36:39.902249 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 13 23:36:39.902256 kernel: CPU features: detected: LSE atomic instructions
May 13 23:36:39.902263 kernel: CPU features: detected: Privileged Access Never
May 13 23:36:39.902270 kernel: CPU features: detected: RAS Extension Support
May 13 23:36:39.902277 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 13 23:36:39.902284 kernel: CPU: All CPU(s) started at EL1
May 13 23:36:39.902291 kernel: alternatives: applying system-wide alternatives
May 13 23:36:39.902299 kernel: devtmpfs: initialized
May 13 23:36:39.902306 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 23:36:39.902313 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 23:36:39.902321 kernel: pinctrl core: initialized pinctrl subsystem
May 13 23:36:39.902327 kernel: SMBIOS 3.0.0 present.
May 13 23:36:39.902338 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 13 23:36:39.902345 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 23:36:39.902352 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 13 23:36:39.902360 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 13 23:36:39.902368 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 13 23:36:39.902375 kernel: audit: initializing netlink subsys (disabled)
May 13 23:36:39.902383 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
May 13 23:36:39.902390 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 23:36:39.902397 kernel: cpuidle: using governor menu
May 13 23:36:39.902404 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 13 23:36:39.902411 kernel: ASID allocator initialised with 32768 entries
May 13 23:36:39.902418 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 23:36:39.902425 kernel: Serial: AMBA PL011 UART driver
May 13 23:36:39.902433 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 13 23:36:39.902440 kernel: Modules: 0 pages in range for non-PLT usage
May 13 23:36:39.902447 kernel: Modules: 509264 pages in range for PLT usage
May 13 23:36:39.902455 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 23:36:39.902462 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 13 23:36:39.902469 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 13 23:36:39.902476 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 13 23:36:39.902483 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 23:36:39.902490 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 13 23:36:39.902498 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 13 23:36:39.902505 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 13 23:36:39.902512 kernel: ACPI: Added _OSI(Module Device)
May 13 23:36:39.902519 kernel: ACPI: Added _OSI(Processor Device)
May 13 23:36:39.902526 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 23:36:39.902533 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 23:36:39.902540 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 23:36:39.902547 kernel: ACPI: Interpreter enabled
May 13 23:36:39.902554 kernel: ACPI: Using GIC for interrupt routing
May 13 23:36:39.902561 kernel: ACPI: MCFG table detected, 1 entries
May 13 23:36:39.902569 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 13 23:36:39.902576 kernel: printk: console [ttyAMA0] enabled
May 13 23:36:39.902583 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 23:36:39.902723 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 23:36:39.902854 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 13 23:36:39.902938 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 13 23:36:39.903003 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 13 23:36:39.903070 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 13 23:36:39.903080 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 13 23:36:39.903087 kernel: PCI host bridge to bus 0000:00
May 13 23:36:39.903158 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 13 23:36:39.903216 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 13 23:36:39.903273 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 13 23:36:39.903329 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 23:36:39.903425 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 13 23:36:39.903502 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 13 23:36:39.903570 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 13 23:36:39.903635 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 13 23:36:39.903701 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 23:36:39.903766 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 23:36:39.903846 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 13 23:36:39.903920 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 13 23:36:39.903980 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 13 23:36:39.904038 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 13 23:36:39.904096 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 13 23:36:39.904105 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 13 23:36:39.904113 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 13 23:36:39.904120 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 13 23:36:39.904129 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 13 23:36:39.904136 kernel: iommu: Default domain type: Translated
May 13 23:36:39.904143 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 13 23:36:39.904150 kernel: efivars: Registered efivars operations
May 13 23:36:39.904157 kernel: vgaarb: loaded
May 13 23:36:39.904165 kernel: clocksource: Switched to clocksource arch_sys_counter
May 13 23:36:39.904172 kernel: VFS: Disk quotas dquot_6.6.0
May 13 23:36:39.904179 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 23:36:39.904186 kernel: pnp: PnP ACPI init
May 13 23:36:39.904264 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 13 23:36:39.904275 kernel: pnp: PnP ACPI: found 1 devices
May 13 23:36:39.904282 kernel: NET: Registered PF_INET protocol family
May 13 23:36:39.904289 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 23:36:39.904297 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 23:36:39.904304 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 23:36:39.904311 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 23:36:39.904318 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 23:36:39.904327 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 23:36:39.904343 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 23:36:39.904351 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 23:36:39.904358 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 23:36:39.904365 kernel: PCI: CLS 0 bytes, default 64
May 13 23:36:39.904372 kernel: kvm [1]: HYP mode not available
May 13 23:36:39.904379 kernel: Initialise system trusted keyrings
May 13 23:36:39.904386 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 23:36:39.904393 kernel: Key type asymmetric registered
May 13 23:36:39.904403 kernel: Asymmetric key parser 'x509' registered
May 13 23:36:39.904410 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 13 23:36:39.904417 kernel: io scheduler mq-deadline registered
May 13 23:36:39.904424 kernel: io scheduler kyber registered
May 13 23:36:39.904431 kernel: io scheduler bfq registered
May 13 23:36:39.904438 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 13 23:36:39.904445 kernel: ACPI: button: Power Button [PWRB]
May 13 23:36:39.904453 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 13 23:36:39.904529 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 13 23:36:39.904539 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 23:36:39.904549 kernel: thunder_xcv, ver 1.0
May 13 23:36:39.904556 kernel: thunder_bgx, ver 1.0
May 13 23:36:39.904563 kernel: nicpf, ver 1.0
May 13 23:36:39.904570 kernel: nicvf, ver 1.0
May 13 23:36:39.904647 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 13 23:36:39.904710 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T23:36:39 UTC (1747179399)
May 13 23:36:39.904720 kernel: hid: raw HID events driver (C) Jiri Kosina
May 13 23:36:39.904727 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 13 23:36:39.904736 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 13 23:36:39.904744 kernel: watchdog: Hard watchdog permanently disabled
May 13 23:36:39.904751 kernel: NET: Registered PF_INET6 protocol family
May 13 23:36:39.904758 kernel: Segment Routing with IPv6
May 13 23:36:39.904765 kernel: In-situ OAM (IOAM) with IPv6
May 13 23:36:39.904772 kernel: NET: Registered PF_PACKET protocol family
May 13 23:36:39.904779 kernel: Key type dns_resolver registered
May 13 23:36:39.904786 kernel: registered taskstats version 1
May 13 23:36:39.904793 kernel: Loading compiled-in X.509 certificates
May 13 23:36:39.904841 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: a696ab665a89a9a0c31af520821335479551e0bb'
May 13 23:36:39.904849 kernel: Key type .fscrypt registered
May 13 23:36:39.904856 kernel: Key type fscrypt-provisioning registered
May 13 23:36:39.904864 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 23:36:39.904871 kernel: ima: Allocated hash algorithm: sha1
May 13 23:36:39.904878 kernel: ima: No architecture policies found
May 13 23:36:39.904885 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 13 23:36:39.904892 kernel: clk: Disabling unused clocks
May 13 23:36:39.904902 kernel: Freeing unused kernel memory: 38336K
May 13 23:36:39.904909 kernel: Run /init as init process
May 13 23:36:39.904916 kernel: with arguments:
May 13 23:36:39.904923 kernel: /init
May 13 23:36:39.904930 kernel: with environment:
May 13 23:36:39.904937 kernel: HOME=/
May 13 23:36:39.904944 kernel: TERM=linux
May 13 23:36:39.904951 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 23:36:39.904960 systemd[1]: Successfully made /usr/ read-only.
May 13 23:36:39.904971 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 23:36:39.904979 systemd[1]: Detected virtualization kvm.
May 13 23:36:39.904987 systemd[1]: Detected architecture arm64.
May 13 23:36:39.904994 systemd[1]: Running in initrd.
May 13 23:36:39.905002 systemd[1]: No hostname configured, using default hostname.
May 13 23:36:39.905009 systemd[1]: Hostname set to .
May 13 23:36:39.905017 systemd[1]: Initializing machine ID from VM UUID.
May 13 23:36:39.905026 systemd[1]: Queued start job for default target initrd.target.
May 13 23:36:39.905034 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:36:39.905042 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:36:39.905050 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 23:36:39.905058 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 23:36:39.905066 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 23:36:39.905074 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 23:36:39.905085 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 23:36:39.905093 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 23:36:39.905101 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:36:39.905109 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 23:36:39.905116 systemd[1]: Reached target paths.target - Path Units.
May 13 23:36:39.905124 systemd[1]: Reached target slices.target - Slice Units.
May 13 23:36:39.905132 systemd[1]: Reached target swap.target - Swaps.
May 13 23:36:39.905140 systemd[1]: Reached target timers.target - Timer Units.
May 13 23:36:39.905147 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 23:36:39.905157 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 23:36:39.905165 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 23:36:39.905172 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 13 23:36:39.905180 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:36:39.905188 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 23:36:39.905196 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:36:39.905203 systemd[1]: Reached target sockets.target - Socket Units.
May 13 23:36:39.905211 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 23:36:39.905220 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 23:36:39.905228 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 23:36:39.905236 systemd[1]: Starting systemd-fsck-usr.service...
May 13 23:36:39.905244 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 23:36:39.905251 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 23:36:39.905259 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:36:39.905267 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 23:36:39.905275 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:36:39.905284 systemd[1]: Finished systemd-fsck-usr.service.
May 13 23:36:39.905293 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 23:36:39.905320 systemd-journald[239]: Collecting audit messages is disabled.
May 13 23:36:39.905348 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 23:36:39.905357 systemd-journald[239]: Journal started
May 13 23:36:39.905375 systemd-journald[239]: Runtime Journal (/run/log/journal/dd592f8528f742d8a14eac9317a4797f) is 5.9M, max 47.3M, 41.4M free.
May 13 23:36:39.896496 systemd-modules-load[241]: Inserted module 'overlay'
May 13 23:36:39.908830 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 23:36:39.908859 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:36:39.911091 systemd-modules-load[241]: Inserted module 'br_netfilter'
May 13 23:36:39.912742 kernel: Bridge firewalling registered
May 13 23:36:39.912761 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 23:36:39.914070 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 23:36:39.925956 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:36:39.927831 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:36:39.930603 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 23:36:39.932426 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 23:36:39.940654 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:36:39.945169 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:36:39.946687 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:36:39.966975 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 23:36:39.968235 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:36:39.970947 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 23:36:39.984025 dracut-cmdline[280]: dracut-dracut-053
May 13 23:36:39.986578 dracut-cmdline[280]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2ebbcf70ac37c458a177d0106bebb5016b2973cc84d1c0207dc60f43e2803902
May 13 23:36:39.999104 systemd-resolved[277]: Positive Trust Anchors:
May 13 23:36:39.999124 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 23:36:39.999155 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 23:36:40.004255 systemd-resolved[277]: Defaulting to hostname 'linux'.
May 13 23:36:40.008226 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 23:36:40.009442 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 23:36:40.059828 kernel: SCSI subsystem initialized
May 13 23:36:40.063811 kernel: Loading iSCSI transport class v2.0-870.
May 13 23:36:40.071832 kernel: iscsi: registered transport (tcp)
May 13 23:36:40.084213 kernel: iscsi: registered transport (qla4xxx)
May 13 23:36:40.084236 kernel: QLogic iSCSI HBA Driver
May 13 23:36:40.128861 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 23:36:40.142960 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 23:36:40.161119 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 23:36:40.161179 kernel: device-mapper: uevent: version 1.0.3
May 13 23:36:40.164849 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 13 23:36:40.210843 kernel: raid6: neonx8 gen() 15789 MB/s
May 13 23:36:40.227815 kernel: raid6: neonx4 gen() 15798 MB/s
May 13 23:36:40.244820 kernel: raid6: neonx2 gen() 13199 MB/s
May 13 23:36:40.261820 kernel: raid6: neonx1 gen() 10472 MB/s
May 13 23:36:40.278823 kernel: raid6: int64x8 gen() 6788 MB/s
May 13 23:36:40.295817 kernel: raid6: int64x4 gen() 7347 MB/s
May 13 23:36:40.312823 kernel: raid6: int64x2 gen() 6106 MB/s
May 13 23:36:40.329944 kernel: raid6: int64x1 gen() 5047 MB/s
May 13 23:36:40.329959 kernel: raid6: using algorithm neonx4 gen() 15798 MB/s
May 13 23:36:40.347932 kernel: raid6: .... xor() 12511 MB/s, rmw enabled
May 13 23:36:40.347977 kernel: raid6: using neon recovery algorithm
May 13 23:36:40.353225 kernel: xor: measuring software checksum speed
May 13 23:36:40.353250 kernel: 8regs : 21607 MB/sec
May 13 23:36:40.353911 kernel: 32regs : 20902 MB/sec
May 13 23:36:40.355159 kernel: arm64_neon : 27813 MB/sec
May 13 23:36:40.355188 kernel: xor: using function: arm64_neon (27813 MB/sec)
May 13 23:36:40.405841 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 23:36:40.415674 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 23:36:40.428975 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:36:40.442222 systemd-udevd[462]: Using default interface naming scheme 'v255'.
May 13 23:36:40.445868 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:36:40.451953 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 23:36:40.465145 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation
May 13 23:36:40.491880 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 23:36:40.511967 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:36:40.551881 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:36:40.560955 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 23:36:40.572107 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 23:36:40.574184 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 23:36:40.576745 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:36:40.579584 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:36:40.587947 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 23:36:40.596713 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 23:36:40.604200 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 13 23:36:40.604379 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 23:36:40.608960 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 23:36:40.608996 kernel: GPT:9289727 != 19775487
May 13 23:36:40.609014 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 23:36:40.610153 kernel: GPT:9289727 != 19775487
May 13 23:36:40.610185 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 23:36:40.611583 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:36:40.617976 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:36:40.618105 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:36:40.621974 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:36:40.623210 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:36:40.623690 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:36:40.628084 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:36:40.637494 kernel: BTRFS: device fsid 3ace022a-b896-4c57-9fc3-590600d2a560 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (510)
May 13 23:36:40.637533 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (520)
May 13 23:36:40.644021 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:36:40.657411 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:36:40.665673 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 23:36:40.673724 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 23:36:40.684600 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 23:36:40.685815 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 23:36:40.695219 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 23:36:40.707942 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 23:36:40.709720 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 23:36:40.715470 disk-uuid[552]: Primary Header is updated.
May 13 23:36:40.715470 disk-uuid[552]: Secondary Entries is updated.
May 13 23:36:40.715470 disk-uuid[552]: Secondary Header is updated.
May 13 23:36:40.720820 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:36:40.725903 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:36:41.729836 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 23:36:41.730420 disk-uuid[554]: The operation has completed successfully.
May 13 23:36:41.756995 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 23:36:41.757092 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 23:36:41.791015 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 23:36:41.794170 sh[575]: Success
May 13 23:36:41.809834 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 13 23:36:41.856696 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 23:36:41.858866 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 23:36:41.861226 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 23:36:41.872534 kernel: BTRFS info (device dm-0): first mount of filesystem 3ace022a-b896-4c57-9fc3-590600d2a560
May 13 23:36:41.872575 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 13 23:36:41.872586 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 23:36:41.874446 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 23:36:41.874469 kernel: BTRFS info (device dm-0): using free space tree
May 13 23:36:41.878262 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 23:36:41.879669 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 23:36:41.888979 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 23:36:41.890593 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 23:36:41.905613 kernel: BTRFS info (device vda6): first mount of filesystem cae47f07-14c5-46aa-b49d-052e48518cb3
May 13 23:36:41.905655 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 23:36:41.905666 kernel: BTRFS info (device vda6): using free space tree
May 13 23:36:41.908838 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 23:36:41.912917 kernel: BTRFS info (device vda6): last unmount of filesystem cae47f07-14c5-46aa-b49d-052e48518cb3
May 13 23:36:41.915411 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 23:36:41.920999 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 23:36:41.983596 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:36:41.990933 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:36:42.012579 ignition[663]: Ignition 2.20.0
May 13 23:36:42.012589 ignition[663]: Stage: fetch-offline
May 13 23:36:42.012623 ignition[663]: no configs at "/usr/lib/ignition/base.d"
May 13 23:36:42.012632 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:36:42.012909 ignition[663]: parsed url from cmdline: ""
May 13 23:36:42.012913 ignition[663]: no config URL provided
May 13 23:36:42.012917 ignition[663]: reading system config file "/usr/lib/ignition/user.ign"
May 13 23:36:42.012925 ignition[663]: no config at "/usr/lib/ignition/user.ign"
May 13 23:36:42.012948 ignition[663]: op(1): [started] loading QEMU firmware config module
May 13 23:36:42.012952 ignition[663]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 23:36:42.023424 ignition[663]: op(1): [finished] loading QEMU firmware config module
May 13 23:36:42.024535 systemd-networkd[764]: lo: Link UP
May 13 23:36:42.024539 systemd-networkd[764]: lo: Gained carrier
May 13 23:36:42.025360 systemd-networkd[764]: Enumeration completed
May 13 23:36:42.025905 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 23:36:42.026772 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:36:42.026775 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:36:42.027529 systemd-networkd[764]: eth0: Link UP
May 13 23:36:42.027532 systemd-networkd[764]: eth0: Gained carrier
May 13 23:36:42.027538 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:36:42.029246 systemd[1]: Reached target network.target - Network.
May 13 23:36:42.055858 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 23:36:42.074453 ignition[663]: parsing config with SHA512: 77a8e80fc34157591c7276c390cb9f16db921fb75346968445829048d681113144bb855d62d65a1f0f40de91c1dc27cf8d268425e82baa47e79e0878ff9ec8bc
May 13 23:36:42.080890 unknown[663]: fetched base config from "system"
May 13 23:36:42.080900 unknown[663]: fetched user config from "qemu"
May 13 23:36:42.081449 ignition[663]: fetch-offline: fetch-offline passed
May 13 23:36:42.083276 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:36:42.081525 ignition[663]: Ignition finished successfully
May 13 23:36:42.084947 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 23:36:42.091045 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 23:36:42.103535 ignition[770]: Ignition 2.20.0
May 13 23:36:42.103546 ignition[770]: Stage: kargs
May 13 23:36:42.103703 ignition[770]: no configs at "/usr/lib/ignition/base.d"
May 13 23:36:42.103712 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:36:42.106590 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 23:36:42.104589 ignition[770]: kargs: kargs passed
May 13 23:36:42.104636 ignition[770]: Ignition finished successfully
May 13 23:36:42.113988 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 23:36:42.123165 ignition[780]: Ignition 2.20.0
May 13 23:36:42.123176 ignition[780]: Stage: disks
May 13 23:36:42.123356 ignition[780]: no configs at "/usr/lib/ignition/base.d"
May 13 23:36:42.125972 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 23:36:42.123366 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:36:42.127299 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 23:36:42.124219 ignition[780]: disks: disks passed
May 13 23:36:42.128953 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 23:36:42.124269 ignition[780]: Ignition finished successfully
May 13 23:36:42.130986 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 23:36:42.132771 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 23:36:42.134224 systemd[1]: Reached target basic.target - Basic System.
May 13 23:36:42.141958 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 23:36:42.149910 systemd-resolved[277]: Detected conflict on linux IN A 10.0.0.79
May 13 23:36:42.149924 systemd-resolved[277]: Hostname conflict, changing published hostname from 'linux' to 'linux6'.
May 13 23:36:42.152805 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 13 23:36:42.200709 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 23:36:42.210942 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 23:36:42.252826 kernel: EXT4-fs (vda9): mounted filesystem 2a058080-4242-485a-9945-403b4258c5f5 r/w with ordered data mode. Quota mode: none.
May 13 23:36:42.252866 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 23:36:42.254101 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 23:36:42.264885 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 23:36:42.267088 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 23:36:42.268106 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 23:36:42.268147 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 23:36:42.268171 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:36:42.272246 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 23:36:42.275352 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 23:36:42.280886 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (800)
May 13 23:36:42.280910 kernel: BTRFS info (device vda6): first mount of filesystem cae47f07-14c5-46aa-b49d-052e48518cb3
May 13 23:36:42.280921 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 23:36:42.282813 kernel: BTRFS info (device vda6): using free space tree
May 13 23:36:42.284890 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 23:36:42.285841 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 23:36:42.315842 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
May 13 23:36:42.320435 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
May 13 23:36:42.323618 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
May 13 23:36:42.327078 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 23:36:42.412947 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 23:36:42.428909 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 23:36:42.431233 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 23:36:42.436837 kernel: BTRFS info (device vda6): last unmount of filesystem cae47f07-14c5-46aa-b49d-052e48518cb3
May 13 23:36:42.456445 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 23:36:42.458545 ignition[913]: INFO : Ignition 2.20.0
May 13 23:36:42.458545 ignition[913]: INFO : Stage: mount
May 13 23:36:42.458545 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:36:42.458545 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:36:42.464198 ignition[913]: INFO : mount: mount passed
May 13 23:36:42.464198 ignition[913]: INFO : Ignition finished successfully
May 13 23:36:42.461024 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 23:36:42.474918 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 23:36:42.996922 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 23:36:43.008995 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 23:36:43.014842 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (928)
May 13 23:36:43.016990 kernel: BTRFS info (device vda6): first mount of filesystem cae47f07-14c5-46aa-b49d-052e48518cb3
May 13 23:36:43.017005 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 23:36:43.017016 kernel: BTRFS info (device vda6): using free space tree
May 13 23:36:43.020821 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 23:36:43.022122 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 23:36:43.039630 ignition[945]: INFO : Ignition 2.20.0
May 13 23:36:43.039630 ignition[945]: INFO : Stage: files
May 13 23:36:43.041320 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:36:43.041320 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:36:43.041320 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
May 13 23:36:43.044731 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 23:36:43.044731 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 23:36:43.044731 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 23:36:43.044731 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 23:36:43.044731 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 23:36:43.044129 unknown[945]: wrote ssh authorized keys file for user: core
May 13 23:36:43.052257 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 13 23:36:43.052257 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 13 23:36:43.107525 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 23:36:43.249099 systemd-networkd[764]: eth0: Gained IPv6LL
May 13 23:36:43.478564 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 13 23:36:43.478564 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 23:36:43.482517 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 13 23:36:43.779545 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 13 23:36:43.865031 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 23:36:43.867000 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 13 23:36:43.867000 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 13 23:36:43.867000 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:36:43.867000 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:36:43.867000 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:36:43.867000 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:36:43.867000 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:36:43.867000 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:36:43.867000 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:36:43.867000 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:36:43.867000 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 23:36:43.867000 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 23:36:43.867000 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 23:36:43.867000 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 13 23:36:44.045147 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 13 23:36:44.421587 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 23:36:44.421587 ignition[945]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 13 23:36:44.425361 ignition[945]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:36:44.425361 ignition[945]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:36:44.425361 ignition[945]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 13 23:36:44.425361 ignition[945]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 13 23:36:44.425361 ignition[945]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 23:36:44.425361 ignition[945]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 23:36:44.425361 ignition[945]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 13 23:36:44.425361 ignition[945]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 13 23:36:44.439479 ignition[945]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 23:36:44.441657 ignition[945]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 23:36:44.443326 ignition[945]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 23:36:44.443326 ignition[945]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 13 23:36:44.443326 ignition[945]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 13 23:36:44.443326 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:36:44.443326 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:36:44.443326 ignition[945]: INFO : files: files passed
May 13 23:36:44.443326 ignition[945]: INFO : Ignition finished successfully
May 13 23:36:44.443786 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 23:36:44.453029 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 23:36:44.456931 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 23:36:44.458649 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 23:36:44.459861 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 23:36:44.464918 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
May 13 23:36:44.468897 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:36:44.468897 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:36:44.472204 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:36:44.473244 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:36:44.475062 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 23:36:44.482935 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 23:36:44.502895 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 23:36:44.503050 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 23:36:44.505454 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 23:36:44.507252 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 23:36:44.509122 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 23:36:44.516960 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 23:36:44.528873 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:36:44.531695 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 23:36:44.544805 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 23:36:44.546240 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:36:44.548589 systemd[1]: Stopped target timers.target - Timer Units.
May 13 23:36:44.550732 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 23:36:44.550893 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:36:44.553748 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 23:36:44.555928 systemd[1]: Stopped target basic.target - Basic System.
May 13 23:36:44.557647 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 23:36:44.559467 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:36:44.561495 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 23:36:44.563523 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 23:36:44.565441 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 23:36:44.567532 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 23:36:44.569553 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 23:36:44.571391 systemd[1]: Stopped target swap.target - Swaps.
May 13 23:36:44.572977 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 23:36:44.573120 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 23:36:44.575569 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 23:36:44.577609 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:36:44.579703 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 23:36:44.579826 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:36:44.581938 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 23:36:44.582074 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 23:36:44.585028 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 23:36:44.585154 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:36:44.587170 systemd[1]: Stopped target paths.target - Path Units.
May 13 23:36:44.588754 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 23:36:44.588879 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:36:44.590924 systemd[1]: Stopped target slices.target - Slice Units.
May 13 23:36:44.592890 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 23:36:44.594640 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 23:36:44.594734 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 23:36:44.596637 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 23:36:44.596725 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 23:36:44.599094 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 23:36:44.599224 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:36:44.601147 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 23:36:44.601280 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 23:36:44.610007 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 23:36:44.611201 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 23:36:44.611342 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:36:44.615067 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 23:36:44.616444 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 23:36:44.616580 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:36:44.619085 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 23:36:44.619282 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 23:36:44.625475 ignition[999]: INFO : Ignition 2.20.0
May 13 23:36:44.625475 ignition[999]: INFO : Stage: umount
May 13 23:36:44.625475 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:36:44.625475 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 23:36:44.630883 ignition[999]: INFO : umount: umount passed
May 13 23:36:44.630883 ignition[999]: INFO : Ignition finished successfully
May 13 23:36:44.627138 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 23:36:44.627273 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 23:36:44.629993 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 23:36:44.630085 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 23:36:44.633381 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 23:36:44.635367 systemd[1]: Stopped target network.target - Network.
May 13 23:36:44.636844 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 23:36:44.636939 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 23:36:44.638902 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 23:36:44.638956 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 23:36:44.640849 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 23:36:44.640906 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 23:36:44.642953 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 23:36:44.643000 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 23:36:44.645186 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 23:36:44.646907 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 23:36:44.653506 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 23:36:44.653625 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 23:36:44.657443 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 13 23:36:44.657704 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 23:36:44.657860 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 23:36:44.661353 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 13 23:36:44.661968 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 23:36:44.662034 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:36:44.675955 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 23:36:44.676994 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 23:36:44.677080 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:36:44.679387 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 23:36:44.679440 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 23:36:44.682764 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 23:36:44.682835 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 23:36:44.685033 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 23:36:44.685083 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:36:44.688269 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:36:44.694042 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 23:36:44.694108 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 13 23:36:44.701408 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 23:36:44.701534 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 23:36:44.703656 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 23:36:44.703761 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 23:36:44.707192 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 23:36:44.707260 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 23:36:44.709246 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 23:36:44.709369 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:36:44.712654 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 23:36:44.712742 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 23:36:44.714037 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 23:36:44.714072 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:36:44.716107 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 23:36:44.716160 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 23:36:44.719187 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 23:36:44.719239 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 23:36:44.721965 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:36:44.722012 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:36:44.736970 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 23:36:44.738019 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 23:36:44.738096 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:36:44.741261 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:36:44.741311 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:36:44.745464 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 13 23:36:44.745525 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 23:36:44.745871 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 23:36:44.745974 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 23:36:44.748774 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 23:36:44.751308 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 23:36:44.761793 systemd[1]: Switching root.
May 13 23:36:44.799027 systemd-journald[239]: Journal stopped
May 13 23:36:45.583826 systemd-journald[239]: Received SIGTERM from PID 1 (systemd).
May 13 23:36:45.583880 kernel: SELinux: policy capability network_peer_controls=1
May 13 23:36:45.583892 kernel: SELinux: policy capability open_perms=1
May 13 23:36:45.583902 kernel: SELinux: policy capability extended_socket_class=1
May 13 23:36:45.583912 kernel: SELinux: policy capability always_check_network=0
May 13 23:36:45.583921 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 23:36:45.583934 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 23:36:45.583943 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 23:36:45.583952 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 23:36:45.583961 kernel: audit: type=1403 audit(1747179404.955:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 23:36:45.583971 systemd[1]: Successfully loaded SELinux policy in 35.703ms.
May 13 23:36:45.583993 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.069ms.
May 13 23:36:45.584005 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 23:36:45.584016 systemd[1]: Detected virtualization kvm.
May 13 23:36:45.584026 systemd[1]: Detected architecture arm64.
May 13 23:36:45.584037 systemd[1]: Detected first boot.
May 13 23:36:45.584047 systemd[1]: Initializing machine ID from VM UUID.
May 13 23:36:45.584057 zram_generator::config[1045]: No configuration found.
May 13 23:36:45.584068 kernel: NET: Registered PF_VSOCK protocol family
May 13 23:36:45.584079 systemd[1]: Populated /etc with preset unit settings.
May 13 23:36:45.584090 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 13 23:36:45.584147 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 23:36:45.584165 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 23:36:45.584179 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 23:36:45.584190 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 23:36:45.584200 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 23:36:45.584211 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 23:36:45.584221 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 23:36:45.584231 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 23:36:45.584241 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 23:36:45.584252 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 23:36:45.584263 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 23:36:45.584274 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:36:45.584284 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:36:45.584295 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 23:36:45.584305 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 23:36:45.584315 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 23:36:45.584326 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 23:36:45.584336 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 13 23:36:45.584350 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:36:45.584364 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 23:36:45.584417 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 23:36:45.584431 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 23:36:45.584442 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 23:36:45.584452 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:36:45.584468 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:36:45.584478 systemd[1]: Reached target slices.target - Slice Units.
May 13 23:36:45.584488 systemd[1]: Reached target swap.target - Swaps.
May 13 23:36:45.584500 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 23:36:45.584510 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 23:36:45.584548 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 13 23:36:45.584564 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:36:45.584575 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 23:36:45.584585 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:36:45.584596 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 23:36:45.584606 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 23:36:45.584617 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 23:36:45.584629 systemd[1]: Mounting media.mount - External Media Directory...
May 13 23:36:45.584639 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 23:36:45.584649 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 23:36:45.584659 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 23:36:45.584672 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 23:36:45.584683 systemd[1]: Reached target machines.target - Containers.
May 13 23:36:45.584692 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 23:36:45.584703 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:36:45.584715 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 23:36:45.584726 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 23:36:45.584736 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:36:45.584746 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 23:36:45.584756 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:36:45.584767 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 23:36:45.584777 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:36:45.584787 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 23:36:45.584797 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 23:36:45.584821 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 23:36:45.584832 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 23:36:45.584842 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 23:36:45.584853 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:36:45.584863 kernel: fuse: init (API version 7.39)
May 13 23:36:45.584872 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 23:36:45.584882 kernel: loop: module loaded
May 13 23:36:45.584891 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 23:36:45.584903 kernel: ACPI: bus type drm_connector registered
May 13 23:36:45.584914 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 23:36:45.584924 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 23:36:45.584935 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 13 23:36:45.584945 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:36:45.584955 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 23:36:45.584966 systemd[1]: Stopped verity-setup.service.
May 13 23:36:45.585001 systemd-journald[1117]: Collecting audit messages is disabled.
May 13 23:36:45.585023 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 23:36:45.585034 systemd-journald[1117]: Journal started
May 13 23:36:45.585055 systemd-journald[1117]: Runtime Journal (/run/log/journal/dd592f8528f742d8a14eac9317a4797f) is 5.9M, max 47.3M, 41.4M free.
May 13 23:36:45.374776 systemd[1]: Queued start job for default target multi-user.target.
May 13 23:36:45.385756 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 13 23:36:45.386174 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 23:36:45.587172 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 23:36:45.587877 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 23:36:45.589178 systemd[1]: Mounted media.mount - External Media Directory.
May 13 23:36:45.590282 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 23:36:45.591493 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 23:36:45.592720 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 23:36:45.594047 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 23:36:45.596871 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:36:45.598480 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 23:36:45.598672 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 23:36:45.600259 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:36:45.600433 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:36:45.601938 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 23:36:45.602122 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 23:36:45.603488 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:36:45.603653 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:36:45.605221 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 23:36:45.605385 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 23:36:45.607108 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:36:45.607310 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:36:45.608859 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 23:36:45.610376 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 23:36:45.612016 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 23:36:45.613673 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 13 23:36:45.626661 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 23:36:45.633903 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 23:36:45.636110 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 23:36:45.637326 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 23:36:45.637367 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 23:36:45.639540 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 13 23:36:45.641912 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 23:36:45.644203 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 23:36:45.645465 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:36:45.646792 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 23:36:45.648870 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 23:36:45.650173 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 23:36:45.651095 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 23:36:45.652533 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 23:36:45.655620 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:36:45.660988 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 23:36:45.663503 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 23:36:45.666700 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:36:45.668996 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 23:36:45.670501 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 23:36:45.673945 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 23:36:45.675375 systemd-journald[1117]: Time spent on flushing to /var/log/journal/dd592f8528f742d8a14eac9317a4797f is 20.206ms for 879 entries.
May 13 23:36:45.675375 systemd-journald[1117]: System Journal (/var/log/journal/dd592f8528f742d8a14eac9317a4797f) is 8M, max 195.6M, 187.6M free.
May 13 23:36:45.699539 systemd-journald[1117]: Received client request to flush runtime journal.
May 13 23:36:45.699601 kernel: loop0: detected capacity change from 0 to 113512
May 13 23:36:45.680181 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 23:36:45.683283 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:36:45.691662 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 23:36:45.708021 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 13 23:36:45.712842 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 23:36:45.712875 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 13 23:36:45.716829 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 23:36:45.724444 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 13 23:36:45.732200 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 23:36:45.739989 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 23:36:45.742649 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 23:36:45.742848 kernel: loop1: detected capacity change from 0 to 194096
May 13 23:36:45.744825 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 13 23:36:45.773307 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
May 13 23:36:45.773323 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
May 13 23:36:45.778100 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:36:45.781908 kernel: loop2: detected capacity change from 0 to 123192
May 13 23:36:45.836852 kernel: loop3: detected capacity change from 0 to 113512
May 13 23:36:45.842836 kernel: loop4: detected capacity change from 0 to 194096
May 13 23:36:45.849820 kernel: loop5: detected capacity change from 0 to 123192
May 13 23:36:45.854392 (sd-merge)[1186]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 13 23:36:45.854773 (sd-merge)[1186]: Merged extensions into '/usr'.
May 13 23:36:45.862694 systemd[1]: Reload requested from client PID 1162 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 23:36:45.862711 systemd[1]: Reloading...
May 13 23:36:45.928549 zram_generator::config[1215]: No configuration found.
May 13 23:36:46.007198 ldconfig[1157]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 23:36:46.041075 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:36:46.093164 systemd[1]: Reloading finished in 230 ms.
May 13 23:36:46.112029 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 23:36:46.113512 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 23:36:46.125982 systemd[1]: Starting ensure-sysext.service...
May 13 23:36:46.127782 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 23:36:46.141038 systemd[1]: Reload requested from client PID 1249 ('systemctl') (unit ensure-sysext.service)...
May 13 23:36:46.141052 systemd[1]: Reloading...
May 13 23:36:46.147499 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 23:36:46.147995 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 23:36:46.148717 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 23:36:46.149024 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
May 13 23:36:46.149159 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
May 13 23:36:46.151896 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot.
May 13 23:36:46.151994 systemd-tmpfiles[1250]: Skipping /boot
May 13 23:36:46.160772 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot.
May 13 23:36:46.160959 systemd-tmpfiles[1250]: Skipping /boot
May 13 23:36:46.191993 zram_generator::config[1279]: No configuration found.
May 13 23:36:46.276435 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:36:46.329136 systemd[1]: Reloading finished in 187 ms.
May 13 23:36:46.340627 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 23:36:46.356871 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:36:46.364609 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 23:36:46.367065 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 23:36:46.369398 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 23:36:46.374093 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 23:36:46.377063 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:36:46.380125 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 23:36:46.385090 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:36:46.388372 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:36:46.392183 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:36:46.396222 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:36:46.398489 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:36:46.398620 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:36:46.402751 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 23:36:46.408934 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 23:36:46.411253 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:36:46.411411 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:36:46.414194 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:36:46.414340 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:36:46.416656 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:36:46.417134 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:36:46.423851 systemd-udevd[1320]: Using default interface naming scheme 'v255'.
May 13 23:36:46.425525 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:36:46.434066 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:36:46.441085 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:36:46.445432 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:36:46.448090 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:36:46.448225 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:36:46.451168 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 23:36:46.453672 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 23:36:46.455742 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:36:46.456248 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:36:46.457962 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:36:46.458121 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:36:46.460139 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:36:46.460884 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:36:46.462530 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 23:36:46.464091 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:36:46.467836 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 23:36:46.470952 augenrules[1357]: No rules
May 13 23:36:46.474267 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 23:36:46.476116 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 23:36:46.477979 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 23:36:46.492641 systemd[1]: Finished ensure-sysext.service.
May 13 23:36:46.507061 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 23:36:46.508137 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:36:46.510955 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:36:46.513999 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 23:36:46.516068 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:36:46.518565 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:36:46.520082 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:36:46.520131 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:36:46.522052 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:36:46.527077 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 13 23:36:46.528339 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 23:36:46.528940 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:36:46.529105 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:36:46.530653 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 23:36:46.532348 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 23:36:46.533840 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:36:46.534017 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:36:46.535723 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:36:46.535912 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:36:46.539683 augenrules[1389]: /sbin/augenrules: No change
May 13 23:36:46.542918 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 13 23:36:46.542996 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 23:36:46.543048 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 23:36:46.550263 augenrules[1420]: No rules
May 13 23:36:46.551789 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 23:36:46.552160 systemd-resolved[1318]: Positive Trust Anchors:
May 13 23:36:46.552423 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 23:36:46.555742 systemd-resolved[1318]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 23:36:46.555780 systemd-resolved[1318]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 23:36:46.563121 systemd-resolved[1318]: Defaulting to hostname 'linux'.
May 13 23:36:46.573253 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 23:36:46.574727 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 23:36:46.585822 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1371)
May 13 23:36:46.604146 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 13 23:36:46.608270 systemd[1]: Reached target time-set.target - System Time Set.
May 13 23:36:46.626579 systemd-networkd[1400]: lo: Link UP
May 13 23:36:46.626588 systemd-networkd[1400]: lo: Gained carrier
May 13 23:36:46.629310 systemd-networkd[1400]: Enumeration completed
May 13 23:36:46.629410 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 23:36:46.630885 systemd[1]: Reached target network.target - Network.
May 13 23:36:46.631775 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 23:36:46.631784 systemd-networkd[1400]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:36:46.632313 systemd-networkd[1400]: eth0: Link UP May 13 23:36:46.632320 systemd-networkd[1400]: eth0: Gained carrier May 13 23:36:46.632334 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:36:46.638977 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 13 23:36:46.641733 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 23:36:46.645287 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 23:36:46.648227 systemd-networkd[1400]: eth0: DHCPv4 address 10.0.0.79/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 23:36:46.650944 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. May 13 23:36:46.651350 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 13 23:36:46.653353 systemd-timesyncd[1402]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 23:36:46.653412 systemd-timesyncd[1402]: Initial clock synchronization to Tue 2025-05-13 23:36:46.261334 UTC. May 13 23:36:46.660243 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 13 23:36:46.681090 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:36:46.682781 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 23:36:46.696435 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 13 23:36:46.705012 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 13 23:36:46.715709 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
May 13 23:36:46.717081 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:36:46.746378 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 13 23:36:46.747942 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:36:46.749137 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:36:46.750292 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 23:36:46.751570 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 23:36:46.753079 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 23:36:46.754259 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 23:36:46.755540 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 23:36:46.757001 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 23:36:46.757043 systemd[1]: Reached target paths.target - Path Units. May 13 23:36:46.757995 systemd[1]: Reached target timers.target - Timer Units. May 13 23:36:46.759762 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 23:36:46.762343 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 23:36:46.765617 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 23:36:46.767120 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 13 23:36:46.768454 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 13 23:36:46.771876 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
May 13 23:36:46.773322 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 13 23:36:46.775742 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 13 23:36:46.777418 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 23:36:46.778640 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:36:46.779655 systemd[1]: Reached target basic.target - Basic System. May 13 23:36:46.780718 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 23:36:46.780753 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 23:36:46.781686 systemd[1]: Starting containerd.service - containerd container runtime... May 13 23:36:46.783598 lvm[1451]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:36:46.785610 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 23:36:46.787501 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 23:36:46.795010 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 23:36:46.796373 jq[1454]: false May 13 23:36:46.796071 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 23:36:46.797246 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 23:36:46.799396 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 23:36:46.803690 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 23:36:46.807961 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 23:36:46.812848 systemd[1]: Starting systemd-logind.service - User Login Management... 
May 13 23:36:46.815643 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 23:36:46.816522 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 23:36:46.817256 systemd[1]: Starting update-engine.service - Update Engine... May 13 23:36:46.820599 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 23:36:46.824948 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 23:36:46.826709 extend-filesystems[1455]: Found loop3 May 13 23:36:46.827684 extend-filesystems[1455]: Found loop4 May 13 23:36:46.827684 extend-filesystems[1455]: Found loop5 May 13 23:36:46.827684 extend-filesystems[1455]: Found vda May 13 23:36:46.834877 extend-filesystems[1455]: Found vda1 May 13 23:36:46.834877 extend-filesystems[1455]: Found vda2 May 13 23:36:46.834877 extend-filesystems[1455]: Found vda3 May 13 23:36:46.834877 extend-filesystems[1455]: Found usr May 13 23:36:46.834877 extend-filesystems[1455]: Found vda4 May 13 23:36:46.834877 extend-filesystems[1455]: Found vda6 May 13 23:36:46.834877 extend-filesystems[1455]: Found vda7 May 13 23:36:46.834877 extend-filesystems[1455]: Found vda9 May 13 23:36:46.834877 extend-filesystems[1455]: Checking size of /dev/vda9 May 13 23:36:46.828493 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 23:36:46.850939 jq[1470]: true May 13 23:36:46.830211 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 23:36:46.830514 systemd[1]: motdgen.service: Deactivated successfully. May 13 23:36:46.830680 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 23:36:46.835850 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
May 13 23:36:46.836042 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 23:36:46.852672 dbus-daemon[1453]: [system] SELinux support is enabled May 13 23:36:46.852503 (ntainerd)[1477]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 23:36:46.853300 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 23:36:46.864218 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 23:36:46.864646 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 23:36:46.867039 extend-filesystems[1455]: Resized partition /dev/vda9 May 13 23:36:46.868253 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 23:36:46.868271 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 23:36:46.875114 update_engine[1468]: I20250513 23:36:46.873966 1468 main.cc:92] Flatcar Update Engine starting May 13 23:36:46.876956 extend-filesystems[1489]: resize2fs 1.47.1 (20-May-2024) May 13 23:36:46.880826 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 23:36:46.881488 systemd[1]: Started update-engine.service - Update Engine. May 13 23:36:46.882895 update_engine[1468]: I20250513 23:36:46.882072 1468 update_check_scheduler.cc:74] Next update check in 10m43s May 13 23:36:46.882922 jq[1476]: true May 13 23:36:46.885889 tar[1474]: linux-arm64/helm May 13 23:36:46.894202 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
May 13 23:36:46.907439 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1370) May 13 23:36:46.912686 systemd-logind[1466]: Watching system buttons on /dev/input/event0 (Power Button) May 13 23:36:46.913970 systemd-logind[1466]: New seat seat0. May 13 23:36:46.914692 systemd[1]: Started systemd-logind.service - User Login Management. May 13 23:36:46.924827 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 23:36:46.940791 extend-filesystems[1489]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 23:36:46.940791 extend-filesystems[1489]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 23:36:46.940791 extend-filesystems[1489]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 23:36:46.947069 extend-filesystems[1455]: Resized filesystem in /dev/vda9 May 13 23:36:46.942200 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 23:36:46.943938 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 23:36:46.965400 bash[1508]: Updated "/home/core/.ssh/authorized_keys" May 13 23:36:46.969177 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 23:36:46.972745 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 13 23:36:47.003861 locksmithd[1493]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 23:36:47.086376 containerd[1477]: time="2025-05-13T23:36:47.086253482Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 13 23:36:47.125765 containerd[1477]: time="2025-05-13T23:36:47.125715524Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 23:36:47.127124 containerd[1477]: time="2025-05-13T23:36:47.127091174Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 23:36:47.127152 containerd[1477]: time="2025-05-13T23:36:47.127128339Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 23:36:47.127152 containerd[1477]: time="2025-05-13T23:36:47.127145609Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 23:36:47.127325 containerd[1477]: time="2025-05-13T23:36:47.127304770Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 13 23:36:47.127349 containerd[1477]: time="2025-05-13T23:36:47.127325768Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 13 23:36:47.127394 containerd[1477]: time="2025-05-13T23:36:47.127378339Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 13 23:36:47.127394 containerd[1477]: time="2025-05-13T23:36:47.127391958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 23:36:47.128443 containerd[1477]: time="2025-05-13T23:36:47.127583795Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 23:36:47.128443 containerd[1477]: time="2025-05-13T23:36:47.127602054Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 May 13 23:36:47.128443 containerd[1477]: time="2025-05-13T23:36:47.127614303Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 13 23:36:47.128443 containerd[1477]: time="2025-05-13T23:36:47.127622748Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 23:36:47.128443 containerd[1477]: time="2025-05-13T23:36:47.127699323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 23:36:47.128443 containerd[1477]: time="2025-05-13T23:36:47.127912729Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 23:36:47.128443 containerd[1477]: time="2025-05-13T23:36:47.128034724Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 23:36:47.128443 containerd[1477]: time="2025-05-13T23:36:47.128047696Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 23:36:47.128443 containerd[1477]: time="2025-05-13T23:36:47.128117195Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 13 23:36:47.128443 containerd[1477]: time="2025-05-13T23:36:47.128152649Z" level=info msg="metadata content store policy set" policy=shared May 13 23:36:47.131313 containerd[1477]: time="2025-05-13T23:36:47.131239120Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 23:36:47.131313 containerd[1477]: time="2025-05-13T23:36:47.131295648Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 May 13 23:36:47.131313 containerd[1477]: time="2025-05-13T23:36:47.131311701Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 13 23:36:47.131418 containerd[1477]: time="2025-05-13T23:36:47.131332890Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 13 23:36:47.131418 containerd[1477]: time="2025-05-13T23:36:47.131348144Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 23:36:47.131607 containerd[1477]: time="2025-05-13T23:36:47.131575206Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 23:36:47.134824 containerd[1477]: time="2025-05-13T23:36:47.131964890Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 23:36:47.134824 containerd[1477]: time="2025-05-13T23:36:47.132102824Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 13 23:36:47.134824 containerd[1477]: time="2025-05-13T23:36:47.132118421Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 13 23:36:47.134824 containerd[1477]: time="2025-05-13T23:36:47.132133371Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 13 23:36:47.134824 containerd[1477]: time="2025-05-13T23:36:47.132147065Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 23:36:47.134824 containerd[1477]: time="2025-05-13T23:36:47.132158667Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 May 13 23:36:47.134824 containerd[1477]: time="2025-05-13T23:36:47.132170460Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 23:36:47.134824 containerd[1477]: time="2025-05-13T23:36:47.132183622Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 23:36:47.134824 containerd[1477]: time="2025-05-13T23:36:47.132205571Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 13 23:36:47.134824 containerd[1477]: time="2025-05-13T23:36:47.132217972Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 23:36:47.134824 containerd[1477]: time="2025-05-13T23:36:47.132229422Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 23:36:47.134824 containerd[1477]: time="2025-05-13T23:36:47.132239693Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 23:36:47.134824 containerd[1477]: time="2025-05-13T23:36:47.132259550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 13 23:36:47.134824 containerd[1477]: time="2025-05-13T23:36:47.132272408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 23:36:47.134790 systemd[1]: Started containerd.service - containerd container runtime. May 13 23:36:47.135174 containerd[1477]: time="2025-05-13T23:36:47.132283401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 23:36:47.135174 containerd[1477]: time="2025-05-13T23:36:47.132294243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 May 13 23:36:47.135174 containerd[1477]: time="2025-05-13T23:36:47.132306073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 23:36:47.135174 containerd[1477]: time="2025-05-13T23:36:47.132326425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 23:36:47.135174 containerd[1477]: time="2025-05-13T23:36:47.132342173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 23:36:47.135174 containerd[1477]: time="2025-05-13T23:36:47.132355259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 23:36:47.135174 containerd[1477]: time="2025-05-13T23:36:47.132367394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 13 23:36:47.135174 containerd[1477]: time="2025-05-13T23:36:47.132381317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 13 23:36:47.135174 containerd[1477]: time="2025-05-13T23:36:47.132392387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 23:36:47.135174 containerd[1477]: time="2025-05-13T23:36:47.132402695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 13 23:36:47.135174 containerd[1477]: time="2025-05-13T23:36:47.132418787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 23:36:47.135174 containerd[1477]: time="2025-05-13T23:36:47.132432519Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 13 23:36:47.135174 containerd[1477]: time="2025-05-13T23:36:47.132450360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 May 13 23:36:47.135174 containerd[1477]: time="2025-05-13T23:36:47.132462875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 13 23:36:47.135174 containerd[1477]: time="2025-05-13T23:36:47.132472613Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 23:36:47.135402 containerd[1477]: time="2025-05-13T23:36:47.132633105Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 23:36:47.135402 containerd[1477]: time="2025-05-13T23:36:47.132648055Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 13 23:36:47.135402 containerd[1477]: time="2025-05-13T23:36:47.132657603Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 23:36:47.135402 containerd[1477]: time="2025-05-13T23:36:47.132668711Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 13 23:36:47.135402 containerd[1477]: time="2025-05-13T23:36:47.132678183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 23:36:47.135402 containerd[1477]: time="2025-05-13T23:36:47.132691459Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 13 23:36:47.135402 containerd[1477]: time="2025-05-13T23:36:47.132701349Z" level=info msg="NRI interface is disabled by configuration." May 13 23:36:47.135402 containerd[1477]: time="2025-05-13T23:36:47.132711734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 13 23:36:47.135522 containerd[1477]: time="2025-05-13T23:36:47.133031006Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 23:36:47.135522 containerd[1477]: time="2025-05-13T23:36:47.133074981Z" level=info msg="Connect containerd service" May 13 23:36:47.135522 containerd[1477]: time="2025-05-13T23:36:47.133105223Z" level=info msg="using legacy CRI server" May 13 23:36:47.135522 containerd[1477]: time="2025-05-13T23:36:47.133111766Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 23:36:47.135522 containerd[1477]: time="2025-05-13T23:36:47.133326883Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 23:36:47.135522 containerd[1477]: time="2025-05-13T23:36:47.133931077Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:36:47.135522 containerd[1477]: time="2025-05-13T23:36:47.134356138Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 23:36:47.135522 containerd[1477]: time="2025-05-13T23:36:47.134391630Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 13 23:36:47.135522 containerd[1477]: time="2025-05-13T23:36:47.134458771Z" level=info msg="Start subscribing containerd event" May 13 23:36:47.135522 containerd[1477]: time="2025-05-13T23:36:47.134502289Z" level=info msg="Start recovering state" May 13 23:36:47.135522 containerd[1477]: time="2025-05-13T23:36:47.134556040Z" level=info msg="Start event monitor" May 13 23:36:47.135522 containerd[1477]: time="2025-05-13T23:36:47.134565093Z" level=info msg="Start snapshots syncer" May 13 23:36:47.135522 containerd[1477]: time="2025-05-13T23:36:47.134573044Z" level=info msg="Start cni network conf syncer for default" May 13 23:36:47.135522 containerd[1477]: time="2025-05-13T23:36:47.134578940Z" level=info msg="Start streaming server" May 13 23:36:47.135522 containerd[1477]: time="2025-05-13T23:36:47.134701430Z" level=info msg="containerd successfully booted in 0.049495s" May 13 23:36:47.253526 tar[1474]: linux-arm64/LICENSE May 13 23:36:47.253639 tar[1474]: linux-arm64/README.md May 13 23:36:47.270024 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 23:36:47.678404 sshd_keygen[1471]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 23:36:47.697871 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 23:36:47.706062 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 23:36:47.711255 systemd[1]: issuegen.service: Deactivated successfully. May 13 23:36:47.711465 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 23:36:47.715232 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 23:36:47.728237 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 23:36:47.742112 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 23:36:47.744605 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. 
May 13 23:36:47.746115 systemd[1]: Reached target getty.target - Login Prompts. May 13 23:36:47.792898 systemd-networkd[1400]: eth0: Gained IPv6LL May 13 23:36:47.795241 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 23:36:47.797258 systemd[1]: Reached target network-online.target - Network is Online. May 13 23:36:47.807289 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 23:36:47.809827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:36:47.811959 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 23:36:47.826336 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 23:36:47.830202 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 23:36:47.831709 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 23:36:47.837845 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 23:36:48.277600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:36:48.279102 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 23:36:48.280509 systemd[1]: Startup finished in 599ms (kernel) + 5.262s (initrd) + 3.360s (userspace) = 9.222s. 
May 13 23:36:48.285143 (kubelet)[1567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:36:48.771424 kubelet[1567]: E0513 23:36:48.771321 1567 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:36:48.774008 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:36:48.774149 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:36:48.774436 systemd[1]: kubelet.service: Consumed 801ms CPU time, 241.8M memory peak. May 13 23:36:52.309431 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 23:36:52.310605 systemd[1]: Started sshd@0-10.0.0.79:22-10.0.0.1:50434.service - OpenSSH per-connection server daemon (10.0.0.1:50434). May 13 23:36:52.408042 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 50434 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw May 13 23:36:52.409925 sshd-session[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:36:52.424542 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 23:36:52.434048 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 23:36:52.439526 systemd-logind[1466]: New session 1 of user core. May 13 23:36:52.443162 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 23:36:52.447453 systemd[1]: Starting user@500.service - User Manager for UID 500... 
May 13 23:36:52.455622 (systemd)[1586]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 13 23:36:52.457842 systemd-logind[1466]: New session c1 of user core.
May 13 23:36:52.562280 systemd[1586]: Queued start job for default target default.target.
May 13 23:36:52.570708 systemd[1586]: Created slice app.slice - User Application Slice.
May 13 23:36:52.570728 systemd[1586]: Reached target paths.target - Paths.
May 13 23:36:52.570768 systemd[1586]: Reached target timers.target - Timers.
May 13 23:36:52.572807 systemd[1586]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 13 23:36:52.586059 systemd[1586]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 13 23:36:52.586171 systemd[1586]: Reached target sockets.target - Sockets.
May 13 23:36:52.586212 systemd[1586]: Reached target basic.target - Basic System.
May 13 23:36:52.586241 systemd[1586]: Reached target default.target - Main User Target.
May 13 23:36:52.586266 systemd[1586]: Startup finished in 123ms.
May 13 23:36:52.586338 systemd[1]: Started user@500.service - User Manager for UID 500.
May 13 23:36:52.587540 systemd[1]: Started session-1.scope - Session 1 of User core.
May 13 23:36:52.649251 systemd[1]: Started sshd@1-10.0.0.79:22-10.0.0.1:35434.service - OpenSSH per-connection server daemon (10.0.0.1:35434).
May 13 23:36:52.691486 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 35434 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw
May 13 23:36:52.692690 sshd-session[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:36:52.699412 systemd-logind[1466]: New session 2 of user core.
May 13 23:36:52.712027 systemd[1]: Started session-2.scope - Session 2 of User core.
May 13 23:36:52.764596 sshd[1599]: Connection closed by 10.0.0.1 port 35434
May 13 23:36:52.764940 sshd-session[1597]: pam_unix(sshd:session): session closed for user core
May 13 23:36:52.782299 systemd[1]: sshd@1-10.0.0.79:22-10.0.0.1:35434.service: Deactivated successfully.
May 13 23:36:52.784261 systemd[1]: session-2.scope: Deactivated successfully.
May 13 23:36:52.785044 systemd-logind[1466]: Session 2 logged out. Waiting for processes to exit.
May 13 23:36:52.802135 systemd[1]: Started sshd@2-10.0.0.79:22-10.0.0.1:35438.service - OpenSSH per-connection server daemon (10.0.0.1:35438).
May 13 23:36:52.802875 systemd-logind[1466]: Removed session 2.
May 13 23:36:52.844879 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 35438 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw
May 13 23:36:52.846025 sshd-session[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:36:52.851004 systemd-logind[1466]: New session 3 of user core.
May 13 23:36:52.866019 systemd[1]: Started session-3.scope - Session 3 of User core.
May 13 23:36:52.914823 sshd[1607]: Connection closed by 10.0.0.1 port 35438
May 13 23:36:52.915126 sshd-session[1604]: pam_unix(sshd:session): session closed for user core
May 13 23:36:52.926887 systemd[1]: sshd@2-10.0.0.79:22-10.0.0.1:35438.service: Deactivated successfully.
May 13 23:36:52.928320 systemd[1]: session-3.scope: Deactivated successfully.
May 13 23:36:52.930282 systemd-logind[1466]: Session 3 logged out. Waiting for processes to exit.
May 13 23:36:52.931123 systemd[1]: Started sshd@3-10.0.0.79:22-10.0.0.1:35450.service - OpenSSH per-connection server daemon (10.0.0.1:35450).
May 13 23:36:52.932085 systemd-logind[1466]: Removed session 3.
May 13 23:36:52.973432 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 35450 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw
May 13 23:36:52.974718 sshd-session[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:36:52.978611 systemd-logind[1466]: New session 4 of user core.
May 13 23:36:52.989937 systemd[1]: Started session-4.scope - Session 4 of User core.
May 13 23:36:53.041683 sshd[1615]: Connection closed by 10.0.0.1 port 35450
May 13 23:36:53.042010 sshd-session[1612]: pam_unix(sshd:session): session closed for user core
May 13 23:36:53.051894 systemd[1]: sshd@3-10.0.0.79:22-10.0.0.1:35450.service: Deactivated successfully.
May 13 23:36:53.053382 systemd[1]: session-4.scope: Deactivated successfully.
May 13 23:36:53.054864 systemd-logind[1466]: Session 4 logged out. Waiting for processes to exit.
May 13 23:36:53.055978 systemd[1]: Started sshd@4-10.0.0.79:22-10.0.0.1:35460.service - OpenSSH per-connection server daemon (10.0.0.1:35460).
May 13 23:36:53.057141 systemd-logind[1466]: Removed session 4.
May 13 23:36:53.100865 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 35460 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw
May 13 23:36:53.101913 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:36:53.106146 systemd-logind[1466]: New session 5 of user core.
May 13 23:36:53.118976 systemd[1]: Started session-5.scope - Session 5 of User core.
May 13 23:36:53.176919 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 13 23:36:53.177190 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:36:53.195636 sudo[1624]: pam_unix(sudo:session): session closed for user root
May 13 23:36:53.198925 sshd[1623]: Connection closed by 10.0.0.1 port 35460
May 13 23:36:53.199295 sshd-session[1620]: pam_unix(sshd:session): session closed for user core
May 13 23:36:53.211898 systemd[1]: sshd@4-10.0.0.79:22-10.0.0.1:35460.service: Deactivated successfully.
May 13 23:36:53.213295 systemd[1]: session-5.scope: Deactivated successfully.
May 13 23:36:53.213946 systemd-logind[1466]: Session 5 logged out. Waiting for processes to exit.
May 13 23:36:53.227144 systemd[1]: Started sshd@5-10.0.0.79:22-10.0.0.1:35474.service - OpenSSH per-connection server daemon (10.0.0.1:35474).
May 13 23:36:53.228128 systemd-logind[1466]: Removed session 5.
May 13 23:36:53.266959 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 35474 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw
May 13 23:36:53.268169 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:36:53.272267 systemd-logind[1466]: New session 6 of user core.
May 13 23:36:53.283980 systemd[1]: Started session-6.scope - Session 6 of User core.
May 13 23:36:53.337207 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 13 23:36:53.337490 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:36:53.340431 sudo[1634]: pam_unix(sudo:session): session closed for user root
May 13 23:36:53.344761 sudo[1633]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 13 23:36:53.345135 sudo[1633]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:36:53.363129 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 23:36:53.385195 augenrules[1656]: No rules
May 13 23:36:53.386275 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 23:36:53.386487 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 23:36:53.387455 sudo[1633]: pam_unix(sudo:session): session closed for user root
May 13 23:36:53.388661 sshd[1632]: Connection closed by 10.0.0.1 port 35474
May 13 23:36:53.388978 sshd-session[1629]: pam_unix(sshd:session): session closed for user core
May 13 23:36:53.396851 systemd[1]: sshd@5-10.0.0.79:22-10.0.0.1:35474.service: Deactivated successfully.
May 13 23:36:53.398190 systemd[1]: session-6.scope: Deactivated successfully.
May 13 23:36:53.398966 systemd-logind[1466]: Session 6 logged out. Waiting for processes to exit.
May 13 23:36:53.409295 systemd[1]: Started sshd@6-10.0.0.79:22-10.0.0.1:35478.service - OpenSSH per-connection server daemon (10.0.0.1:35478).
May 13 23:36:53.410316 systemd-logind[1466]: Removed session 6.
May 13 23:36:53.448276 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 35478 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw
May 13 23:36:53.449563 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:36:53.453862 systemd-logind[1466]: New session 7 of user core.
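`augenrules` reporting `No rules` here is expected rather than an error: the two drop-in files under `/etc/audit/rules.d/` were deleted in the preceding `sudo rm -rf`, and `augenrules` simply concatenates whatever `*.rules` files remain into `/etc/audit/audit.rules`. For illustration only (not a file from this host), a drop-in has this shape:

```
# /etc/audit/rules.d/10-example.rules -- hypothetical drop-in; augenrules merges
# all *.rules files in this directory into /etc/audit/audit.rules.
-D
-b 8192
-w /etc/passwd -p wa -k identity
```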
May 13 23:36:53.469021 systemd[1]: Started session-7.scope - Session 7 of User core.
May 13 23:36:53.519292 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 13 23:36:53.519911 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:36:53.900059 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 13 23:36:53.900172 (dockerd)[1688]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 13 23:36:54.193893 dockerd[1688]: time="2025-05-13T23:36:54.193548465Z" level=info msg="Starting up"
May 13 23:36:54.380666 dockerd[1688]: time="2025-05-13T23:36:54.380550237Z" level=info msg="Loading containers: start."
May 13 23:36:54.525840 kernel: Initializing XFRM netlink socket
May 13 23:36:54.618575 systemd-networkd[1400]: docker0: Link UP
May 13 23:36:54.660363 dockerd[1688]: time="2025-05-13T23:36:54.660309380Z" level=info msg="Loading containers: done."
May 13 23:36:54.680248 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck643671127-merged.mount: Deactivated successfully.
May 13 23:36:54.683143 dockerd[1688]: time="2025-05-13T23:36:54.682422662Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 13 23:36:54.683143 dockerd[1688]: time="2025-05-13T23:36:54.682542904Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
May 13 23:36:54.683143 dockerd[1688]: time="2025-05-13T23:36:54.682754356Z" level=info msg="Daemon has completed initialization"
May 13 23:36:54.710612 dockerd[1688]: time="2025-05-13T23:36:54.710537335Z" level=info msg="API listen on /run/docker.sock"
May 13 23:36:54.710815 systemd[1]: Started docker.service - Docker Application Container Engine.
May 13 23:36:55.732145 containerd[1477]: time="2025-05-13T23:36:55.732093480Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 13 23:36:56.445321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount839082919.mount: Deactivated successfully.
May 13 23:36:57.786155 containerd[1477]: time="2025-05-13T23:36:57.786108778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:36:57.787407 containerd[1477]: time="2025-05-13T23:36:57.787377571Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152"
May 13 23:36:57.788822 containerd[1477]: time="2025-05-13T23:36:57.788744009Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:36:57.791323 containerd[1477]: time="2025-05-13T23:36:57.791272987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:36:57.792894 containerd[1477]: time="2025-05-13T23:36:57.792404176Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.060265082s"
May 13 23:36:57.792894 containerd[1477]: time="2025-05-13T23:36:57.792449544Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
May 13 23:36:57.811362 containerd[1477]: time="2025-05-13T23:36:57.811325751Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 13 23:36:59.024572 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
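Each containerd pull summary pairs a byte count with a wall-clock duration, so effective registry throughput can be read straight off the log. For the kube-apiserver image above (29,790,950 bytes in 2.060265082 s):

```python
# Effective pull throughput for the kube-apiserver image, using the size and
# duration reported in the containerd "Pulled image" entry above.
size_bytes = 29_790_950
duration_s = 2.060265082

rate = size_bytes / duration_s
assert 14.4e6 < rate < 14.5e6   # roughly 14.5 MB/s
print(f"{rate / 1e6:.2f} MB/s ({rate / 2**20:.2f} MiB/s)")
```

The same arithmetic applies to the later pulls (kube-proxy, coredns, etcd); the tiny pause image is dominated by request latency rather than bandwidth.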
May 13 23:36:59.037075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:36:59.166713 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:36:59.170926 (kubelet)[1963]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 23:36:59.377962 containerd[1477]: time="2025-05-13T23:36:59.377789028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:36:59.379277 containerd[1477]: time="2025-05-13T23:36:59.379169707Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552"
May 13 23:36:59.380559 containerd[1477]: time="2025-05-13T23:36:59.380510227Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:36:59.386290 containerd[1477]: time="2025-05-13T23:36:59.386068193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:36:59.386290 containerd[1477]: time="2025-05-13T23:36:59.386135364Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.574768867s"
May 13 23:36:59.386290 containerd[1477]: time="2025-05-13T23:36:59.386171325Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
May 13 23:36:59.404561 kubelet[1963]: E0513 23:36:59.404513 1963 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:36:59.407869 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:36:59.408006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:36:59.408548 systemd[1]: kubelet.service: Consumed 141ms CPU time, 98.5M memory peak.
May 13 23:36:59.411527 containerd[1477]: time="2025-05-13T23:36:59.411489788Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 13 23:37:00.473416 containerd[1477]: time="2025-05-13T23:37:00.473357271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:37:00.474178 containerd[1477]: time="2025-05-13T23:37:00.474138388Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947"
May 13 23:37:00.475526 containerd[1477]: time="2025-05-13T23:37:00.475501000Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:37:00.478390 containerd[1477]: time="2025-05-13T23:37:00.478357797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:37:00.479993 containerd[1477]: time="2025-05-13T23:37:00.479952032Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.068414905s"
May 13 23:37:00.479993 containerd[1477]: time="2025-05-13T23:37:00.479991290Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
May 13 23:37:00.499233 containerd[1477]: time="2025-05-13T23:37:00.499195266Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 13 23:37:01.429995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1181996166.mount: Deactivated successfully.
May 13 23:37:01.631063 containerd[1477]: time="2025-05-13T23:37:01.630952960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:37:01.631418 containerd[1477]: time="2025-05-13T23:37:01.631326755Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707"
May 13 23:37:01.632320 containerd[1477]: time="2025-05-13T23:37:01.632257231Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:37:01.634357 containerd[1477]: time="2025-05-13T23:37:01.634320722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:37:01.635257 containerd[1477]: time="2025-05-13T23:37:01.635226546Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.135990823s"
May 13 23:37:01.635312 containerd[1477]: time="2025-05-13T23:37:01.635263743Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
May 13 23:37:01.656081 containerd[1477]: time="2025-05-13T23:37:01.656045098Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 13 23:37:02.227249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4234157355.mount: Deactivated successfully.
May 13 23:37:02.895052 containerd[1477]: time="2025-05-13T23:37:02.894862264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:37:02.895972 containerd[1477]: time="2025-05-13T23:37:02.895927058Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
May 13 23:37:02.896711 containerd[1477]: time="2025-05-13T23:37:02.896678298Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:37:02.900211 containerd[1477]: time="2025-05-13T23:37:02.900162193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:37:02.901328 containerd[1477]: time="2025-05-13T23:37:02.901297319Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.245211561s"
May 13 23:37:02.901508 containerd[1477]: time="2025-05-13T23:37:02.901406671Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 13 23:37:02.920395 containerd[1477]: time="2025-05-13T23:37:02.920361131Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 13 23:37:03.353487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount428292685.mount: Deactivated successfully.
May 13 23:37:03.359709 containerd[1477]: time="2025-05-13T23:37:03.358915788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:37:03.360461 containerd[1477]: time="2025-05-13T23:37:03.360426518Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
May 13 23:37:03.361731 containerd[1477]: time="2025-05-13T23:37:03.361693984Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:37:03.363555 containerd[1477]: time="2025-05-13T23:37:03.363503615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:37:03.364647 containerd[1477]: time="2025-05-13T23:37:03.364602621Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 444.204688ms"
May 13 23:37:03.364647 containerd[1477]: time="2025-05-13T23:37:03.364634913Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
May 13 23:37:03.383052 containerd[1477]: time="2025-05-13T23:37:03.382835737Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 13 23:37:03.866707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount988500308.mount: Deactivated successfully.
May 13 23:37:06.412707 containerd[1477]: time="2025-05-13T23:37:06.412651502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:37:06.413341 containerd[1477]: time="2025-05-13T23:37:06.413279618Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
May 13 23:37:06.414002 containerd[1477]: time="2025-05-13T23:37:06.413952241Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:37:06.417066 containerd[1477]: time="2025-05-13T23:37:06.417021182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:37:06.418992 containerd[1477]: time="2025-05-13T23:37:06.418902982Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.036025464s"
May 13 23:37:06.418992 containerd[1477]: time="2025-05-13T23:37:06.418947648Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
May 13 23:37:09.658362 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 13 23:37:09.668015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:37:09.780473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:37:09.786118 (kubelet)[2190]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 23:37:09.834821 kubelet[2190]: E0513 23:37:09.834760 2190 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:37:09.837553 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:37:09.837706 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:37:09.838088 systemd[1]: kubelet.service: Consumed 136ms CPU time, 97.1M memory peak.
May 13 23:37:11.681815 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:37:11.681962 systemd[1]: kubelet.service: Consumed 136ms CPU time, 97.1M memory peak.
May 13 23:37:11.692133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:37:11.710412 systemd[1]: Reload requested from client PID 2206 ('systemctl') (unit session-7.scope)...
May 13 23:37:11.710430 systemd[1]: Reloading...
May 13 23:37:11.772867 zram_generator::config[2250]: No configuration found.
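The roughly 10-second gap between each kubelet failure and the next `Scheduled restart job` entry is consistent with a `RestartSec=10` unit setting (the value shipped in the stock kubeadm drop-in; the unit file itself does not appear in this log, so this is an inference from the timestamps, not a confirmed configuration). Computing the gaps from the entries above:

```python
from datetime import datetime

# Failure / scheduled-restart timestamps copied from the journal entries above.
fmt = "%H:%M:%S.%f"
pairs = [
    ("23:36:48.774149", "23:36:59.024572"),  # first failure -> restart #1
    ("23:36:59.408006", "23:37:09.658362"),  # second failure -> restart #2
]
gaps = [
    (datetime.strptime(stop, fmt) - datetime.strptime(start, fmt)).total_seconds()
    for start, stop in pairs
]
assert all(10.0 < g < 10.5 for g in gaps), gaps
print(gaps)  # both gaps are about 10.25 s
```

The extra quarter second over the nominal 10 s is the usual scheduling and dispatch latency between "Failed" and "Scheduled restart job".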
May 13 23:37:11.911656 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:37:11.989612 systemd[1]: Reloading finished in 278 ms.
May 13 23:37:12.028736 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:37:12.033145 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:37:12.033427 systemd[1]: kubelet.service: Deactivated successfully.
May 13 23:37:12.033648 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:37:12.033695 systemd[1]: kubelet.service: Consumed 85ms CPU time, 82.4M memory peak.
May 13 23:37:12.036030 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:37:12.136624 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:37:12.141324 (kubelet)[2298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 23:37:12.184652 kubelet[2298]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 23:37:12.184652 kubelet[2298]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 13 23:37:12.184652 kubelet[2298]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
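The `/var/run/` warning emitted during the reload is cosmetic (systemd rewrites the path itself), but it can be silenced with a drop-in that moves the socket path under `/run`. An illustrative override (hypothetical file, targeting line 6 of the shipped `docker.socket`; the empty `ListenStream=` first clears the inherited value before the replacement is set):

```
# /etc/systemd/system/docker.socket.d/10-run-path.conf -- hypothetical override.
[Socket]
ListenStream=
ListenStream=/run/docker.sock
```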
May 13 23:37:12.185030 kubelet[2298]: I0513 23:37:12.184949 2298 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 23:37:12.775503 kubelet[2298]: I0513 23:37:12.775466 2298 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 13 23:37:12.777668 kubelet[2298]: I0513 23:37:12.775687 2298 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 23:37:12.777668 kubelet[2298]: I0513 23:37:12.775975 2298 server.go:927] "Client rotation is on, will bootstrap in background"
May 13 23:37:12.816181 kubelet[2298]: E0513 23:37:12.816146 2298 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.79:6443: connect: connection refused
May 13 23:37:12.816434 kubelet[2298]: I0513 23:37:12.816398 2298 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 23:37:12.837942 kubelet[2298]: I0513 23:37:12.837911 2298 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 23:37:12.839142 kubelet[2298]: I0513 23:37:12.839084 2298 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 23:37:12.839345 kubelet[2298]: I0513 23:37:12.839141 2298 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 13 23:37:12.839433 kubelet[2298]: I0513 23:37:12.839404 2298 topology_manager.go:138] "Creating topology manager with none policy"
May 13 23:37:12.839433 kubelet[2298]: I0513 23:37:12.839415 2298 container_manager_linux.go:301] "Creating device plugin manager"
May 13 23:37:12.839703 kubelet[2298]: I0513 23:37:12.839680 2298 state_mem.go:36] "Initialized new in-memory state store"
May 13 23:37:12.842887 kubelet[2298]: I0513 23:37:12.842857 2298 kubelet.go:400] "Attempting to sync node with API server"
May 13 23:37:12.842930 kubelet[2298]: I0513 23:37:12.842889 2298 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 23:37:12.843019 kubelet[2298]: I0513 23:37:12.843000 2298 kubelet.go:312] "Adding apiserver pod source"
May 13 23:37:12.845251 kubelet[2298]: I0513 23:37:12.843094 2298 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 23:37:12.845648 kubelet[2298]: W0513 23:37:12.845567 2298 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
May 13 23:37:12.845648 kubelet[2298]: E0513 23:37:12.845627 2298 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
May 13 23:37:12.846332 kubelet[2298]: I0513 23:37:12.846306 2298 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 13 23:37:12.848910 kubelet[2298]: W0513 23:37:12.846863 2298 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
May 13 23:37:12.848910 kubelet[2298]: I0513 23:37:12.846902 2298 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 23:37:12.848910 kubelet[2298]: E0513 23:37:12.846934 2298 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
May 13 23:37:12.848910 kubelet[2298]: W0513 23:37:12.847126 2298 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 13 23:37:12.848910 kubelet[2298]: I0513 23:37:12.848234 2298 server.go:1264] "Started kubelet"
May 13 23:37:12.849037 kubelet[2298]: I0513 23:37:12.848910 2298 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 13 23:37:12.850275 kubelet[2298]: I0513 23:37:12.850243 2298 server.go:455] "Adding debug handlers to kubelet server"
May 13 23:37:12.851082 kubelet[2298]: I0513 23:37:12.851044 2298 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 23:37:12.852396 kubelet[2298]: I0513 23:37:12.852338 2298 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 23:37:12.852828 kubelet[2298]: I0513 23:37:12.852793 2298 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 23:37:12.858768 kubelet[2298]: I0513 23:37:12.856762 2298 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 13 23:37:12.858768 kubelet[2298]: I0513 23:37:12.857962 2298 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 13 23:37:12.858768 kubelet[2298]: E0513 23:37:12.852936 2298 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.79:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.79:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f3a6fedc94119 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 23:37:12.848212249 +0000 UTC m=+0.703399978,LastTimestamp:2025-05-13 23:37:12.848212249 +0000 UTC m=+0.703399978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 13 23:37:12.858768 kubelet[2298]: E0513 23:37:12.858548 2298 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="200ms"
May 13 23:37:12.859008 kubelet[2298]: W0513 23:37:12.858859 2298 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
May 13 23:37:12.859008 kubelet[2298]: E0513 23:37:12.858890 2298 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused
May 13 23:37:12.859092 kubelet[2298]: I0513 23:37:12.859069 2298 factory.go:221] Registration of the systemd container factory successfully
May 13 23:37:12.859187 kubelet[2298]: I0513 23:37:12.859162 2298 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 23:37:12.860395 kubelet[2298]: I0513 23:37:12.860368 2298 reconciler.go:26] "Reconciler: start to sync state"
May 13
23:37:12.860930 kubelet[2298]: I0513 23:37:12.860842 2298 factory.go:221] Registration of the containerd container factory successfully May 13 23:37:12.861617 kubelet[2298]: E0513 23:37:12.861598 2298 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:37:12.875221 kubelet[2298]: I0513 23:37:12.875184 2298 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:37:12.876003 kubelet[2298]: I0513 23:37:12.875972 2298 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 23:37:12.876003 kubelet[2298]: I0513 23:37:12.875993 2298 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 23:37:12.876113 kubelet[2298]: I0513 23:37:12.876014 2298 state_mem.go:36] "Initialized new in-memory state store" May 13 23:37:12.876890 kubelet[2298]: I0513 23:37:12.876548 2298 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 23:37:12.876890 kubelet[2298]: I0513 23:37:12.876709 2298 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 23:37:12.876890 kubelet[2298]: I0513 23:37:12.876730 2298 kubelet.go:2337] "Starting kubelet main sync loop" May 13 23:37:12.876890 kubelet[2298]: E0513 23:37:12.876770 2298 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:37:12.877480 kubelet[2298]: W0513 23:37:12.877366 2298 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 13 23:37:12.877517 kubelet[2298]: E0513 23:37:12.877482 2298 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 13 23:37:12.935690 kubelet[2298]: I0513 23:37:12.935635 2298 policy_none.go:49] "None policy: Start" May 13 23:37:12.936426 kubelet[2298]: I0513 23:37:12.936399 2298 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 23:37:12.936426 kubelet[2298]: I0513 23:37:12.936429 2298 state_mem.go:35] "Initializing new in-memory state store" May 13 23:37:12.942364 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 23:37:12.955576 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 23:37:12.957795 kubelet[2298]: I0513 23:37:12.957756 2298 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 23:37:12.958151 kubelet[2298]: E0513 23:37:12.958123 2298 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" May 13 23:37:12.959015 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 13 23:37:12.970752 kubelet[2298]: I0513 23:37:12.970719 2298 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:37:12.971300 kubelet[2298]: I0513 23:37:12.971136 2298 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:37:12.971300 kubelet[2298]: I0513 23:37:12.971249 2298 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:37:12.972476 kubelet[2298]: E0513 23:37:12.972451 2298 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 23:37:12.977880 kubelet[2298]: I0513 23:37:12.977844 2298 topology_manager.go:215] "Topology Admit Handler" podUID="801ed48cb0a6ff3eb80dcc10ad591471" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 23:37:12.978950 kubelet[2298]: I0513 23:37:12.978921 2298 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 23:37:12.980542 kubelet[2298]: I0513 23:37:12.980523 2298 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 23:37:12.985147 systemd[1]: Created slice kubepods-burstable-pod801ed48cb0a6ff3eb80dcc10ad591471.slice - libcontainer container kubepods-burstable-pod801ed48cb0a6ff3eb80dcc10ad591471.slice. May 13 23:37:13.007298 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. May 13 23:37:13.011675 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. 
May 13 23:37:13.059794 kubelet[2298]: E0513 23:37:13.059667 2298 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="400ms" May 13 23:37:13.061831 kubelet[2298]: I0513 23:37:13.061686 2298 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/801ed48cb0a6ff3eb80dcc10ad591471-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"801ed48cb0a6ff3eb80dcc10ad591471\") " pod="kube-system/kube-apiserver-localhost" May 13 23:37:13.061831 kubelet[2298]: I0513 23:37:13.061723 2298 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/801ed48cb0a6ff3eb80dcc10ad591471-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"801ed48cb0a6ff3eb80dcc10ad591471\") " pod="kube-system/kube-apiserver-localhost" May 13 23:37:13.061831 kubelet[2298]: I0513 23:37:13.061746 2298 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:37:13.061831 kubelet[2298]: I0513 23:37:13.061789 2298 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:37:13.061831 kubelet[2298]: I0513 23:37:13.061835 2298 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:37:13.062011 kubelet[2298]: I0513 23:37:13.061874 2298 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:37:13.062011 kubelet[2298]: I0513 23:37:13.061917 2298 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 23:37:13.062011 kubelet[2298]: I0513 23:37:13.061955 2298 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/801ed48cb0a6ff3eb80dcc10ad591471-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"801ed48cb0a6ff3eb80dcc10ad591471\") " pod="kube-system/kube-apiserver-localhost" May 13 23:37:13.062011 kubelet[2298]: I0513 23:37:13.061974 2298 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:37:13.160081 kubelet[2298]: I0513 23:37:13.160042 2298 
kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 23:37:13.160478 kubelet[2298]: E0513 23:37:13.160442 2298 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" May 13 23:37:13.306227 containerd[1477]: time="2025-05-13T23:37:13.306176306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:801ed48cb0a6ff3eb80dcc10ad591471,Namespace:kube-system,Attempt:0,}" May 13 23:37:13.310861 containerd[1477]: time="2025-05-13T23:37:13.310740390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 13 23:37:13.314524 containerd[1477]: time="2025-05-13T23:37:13.314475417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 13 23:37:13.460848 kubelet[2298]: E0513 23:37:13.460774 2298 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="800ms" May 13 23:37:13.562354 kubelet[2298]: I0513 23:37:13.562256 2298 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 23:37:13.562601 kubelet[2298]: E0513 23:37:13.562570 2298 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" May 13 23:37:13.708563 kubelet[2298]: W0513 23:37:13.708513 2298 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 13 23:37:13.708563 kubelet[2298]: E0513 23:37:13.708561 2298 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 13 23:37:13.816933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2829556923.mount: Deactivated successfully. May 13 23:37:13.822328 containerd[1477]: time="2025-05-13T23:37:13.822283550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:37:13.824946 containerd[1477]: time="2025-05-13T23:37:13.824897726Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 13 23:37:13.825474 containerd[1477]: time="2025-05-13T23:37:13.825438542Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:37:13.826576 containerd[1477]: time="2025-05-13T23:37:13.826527003Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:37:13.826779 containerd[1477]: time="2025-05-13T23:37:13.826741117Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 23:37:13.827698 containerd[1477]: time="2025-05-13T23:37:13.827646217Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:37:13.829223 containerd[1477]: time="2025-05-13T23:37:13.828295388Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 23:37:13.830081 containerd[1477]: time="2025-05-13T23:37:13.830045401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:37:13.833306 containerd[1477]: time="2025-05-13T23:37:13.833254709Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 526.995929ms" May 13 23:37:13.834868 containerd[1477]: time="2025-05-13T23:37:13.834812455Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 520.238828ms" May 13 23:37:13.837637 containerd[1477]: time="2025-05-13T23:37:13.837600007Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 526.767757ms" May 13 23:37:13.969873 containerd[1477]: time="2025-05-13T23:37:13.969403606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 23:37:13.969873 containerd[1477]: time="2025-05-13T23:37:13.969481806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 23:37:13.969873 containerd[1477]: time="2025-05-13T23:37:13.969543073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:37:13.970517 containerd[1477]: time="2025-05-13T23:37:13.970238693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:37:13.970749 containerd[1477]: time="2025-05-13T23:37:13.970173153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 23:37:13.970749 containerd[1477]: time="2025-05-13T23:37:13.970222158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 23:37:13.970749 containerd[1477]: time="2025-05-13T23:37:13.970236896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:37:13.970898 containerd[1477]: time="2025-05-13T23:37:13.970065716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 23:37:13.970898 containerd[1477]: time="2025-05-13T23:37:13.970117797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 23:37:13.971055 containerd[1477]: time="2025-05-13T23:37:13.970607051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:37:13.971055 containerd[1477]: time="2025-05-13T23:37:13.970761935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:37:13.971786 containerd[1477]: time="2025-05-13T23:37:13.971565111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:37:13.984127 kubelet[2298]: W0513 23:37:13.984003 2298 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 13 23:37:13.984127 kubelet[2298]: E0513 23:37:13.984071 2298 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 13 23:37:14.000049 systemd[1]: Started cri-containerd-10dc23824361c21489fe839e4c2347f84925d7483475b5fe1c9fe4c4bc2c4c79.scope - libcontainer container 10dc23824361c21489fe839e4c2347f84925d7483475b5fe1c9fe4c4bc2c4c79. May 13 23:37:14.001255 systemd[1]: Started cri-containerd-534ed33dc661d5de777831f794b01ece9e3764340aef33f2bbc9b80114af683d.scope - libcontainer container 534ed33dc661d5de777831f794b01ece9e3764340aef33f2bbc9b80114af683d. May 13 23:37:14.002282 systemd[1]: Started cri-containerd-c81f69ec99fac680e92987f53462cbed8c0124fc4929e8e14ca979fa15edac3f.scope - libcontainer container c81f69ec99fac680e92987f53462cbed8c0124fc4929e8e14ca979fa15edac3f. 
May 13 23:37:14.035556 containerd[1477]: time="2025-05-13T23:37:14.035427305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"10dc23824361c21489fe839e4c2347f84925d7483475b5fe1c9fe4c4bc2c4c79\"" May 13 23:37:14.037278 containerd[1477]: time="2025-05-13T23:37:14.037235334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"534ed33dc661d5de777831f794b01ece9e3764340aef33f2bbc9b80114af683d\"" May 13 23:37:14.041696 containerd[1477]: time="2025-05-13T23:37:14.041655001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:801ed48cb0a6ff3eb80dcc10ad591471,Namespace:kube-system,Attempt:0,} returns sandbox id \"c81f69ec99fac680e92987f53462cbed8c0124fc4929e8e14ca979fa15edac3f\"" May 13 23:37:14.041845 containerd[1477]: time="2025-05-13T23:37:14.041818463Z" level=info msg="CreateContainer within sandbox \"534ed33dc661d5de777831f794b01ece9e3764340aef33f2bbc9b80114af683d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 23:37:14.042236 containerd[1477]: time="2025-05-13T23:37:14.041924322Z" level=info msg="CreateContainer within sandbox \"10dc23824361c21489fe839e4c2347f84925d7483475b5fe1c9fe4c4bc2c4c79\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 23:37:14.047421 containerd[1477]: time="2025-05-13T23:37:14.047303630Z" level=info msg="CreateContainer within sandbox \"c81f69ec99fac680e92987f53462cbed8c0124fc4929e8e14ca979fa15edac3f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 23:37:14.059287 containerd[1477]: time="2025-05-13T23:37:14.059230607Z" level=info msg="CreateContainer within sandbox \"534ed33dc661d5de777831f794b01ece9e3764340aef33f2bbc9b80114af683d\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"64b82e5b395de3a6b8313c6b027401a5a37981194cdf65e94f403c84aaa0a560\"" May 13 23:37:14.059995 containerd[1477]: time="2025-05-13T23:37:14.059959395Z" level=info msg="StartContainer for \"64b82e5b395de3a6b8313c6b027401a5a37981194cdf65e94f403c84aaa0a560\"" May 13 23:37:14.063727 containerd[1477]: time="2025-05-13T23:37:14.063680594Z" level=info msg="CreateContainer within sandbox \"10dc23824361c21489fe839e4c2347f84925d7483475b5fe1c9fe4c4bc2c4c79\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"918b17c7b6495fcecc7f5c4444653431fba8b161c6af6081b7556ab57fb20aab\"" May 13 23:37:14.065337 containerd[1477]: time="2025-05-13T23:37:14.065274908Z" level=info msg="StartContainer for \"918b17c7b6495fcecc7f5c4444653431fba8b161c6af6081b7556ab57fb20aab\"" May 13 23:37:14.065688 containerd[1477]: time="2025-05-13T23:37:14.065651206Z" level=info msg="CreateContainer within sandbox \"c81f69ec99fac680e92987f53462cbed8c0124fc4929e8e14ca979fa15edac3f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f7817341a9c447e6c6369b053a8e8a09dfe1dc88d48147684a60058f1d482b31\"" May 13 23:37:14.066829 containerd[1477]: time="2025-05-13T23:37:14.066045561Z" level=info msg="StartContainer for \"f7817341a9c447e6c6369b053a8e8a09dfe1dc88d48147684a60058f1d482b31\"" May 13 23:37:14.087036 systemd[1]: Started cri-containerd-64b82e5b395de3a6b8313c6b027401a5a37981194cdf65e94f403c84aaa0a560.scope - libcontainer container 64b82e5b395de3a6b8313c6b027401a5a37981194cdf65e94f403c84aaa0a560. May 13 23:37:14.091123 systemd[1]: Started cri-containerd-918b17c7b6495fcecc7f5c4444653431fba8b161c6af6081b7556ab57fb20aab.scope - libcontainer container 918b17c7b6495fcecc7f5c4444653431fba8b161c6af6081b7556ab57fb20aab. 
May 13 23:37:14.094614 systemd[1]: Started cri-containerd-f7817341a9c447e6c6369b053a8e8a09dfe1dc88d48147684a60058f1d482b31.scope - libcontainer container f7817341a9c447e6c6369b053a8e8a09dfe1dc88d48147684a60058f1d482b31. May 13 23:37:14.172739 kubelet[2298]: W0513 23:37:14.168996 2298 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 13 23:37:14.172739 kubelet[2298]: E0513 23:37:14.169061 2298 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 13 23:37:14.200348 containerd[1477]: time="2025-05-13T23:37:14.199838411Z" level=info msg="StartContainer for \"64b82e5b395de3a6b8313c6b027401a5a37981194cdf65e94f403c84aaa0a560\" returns successfully" May 13 23:37:14.200348 containerd[1477]: time="2025-05-13T23:37:14.199984896Z" level=info msg="StartContainer for \"918b17c7b6495fcecc7f5c4444653431fba8b161c6af6081b7556ab57fb20aab\" returns successfully" May 13 23:37:14.200348 containerd[1477]: time="2025-05-13T23:37:14.200009902Z" level=info msg="StartContainer for \"f7817341a9c447e6c6369b053a8e8a09dfe1dc88d48147684a60058f1d482b31\" returns successfully" May 13 23:37:14.262059 kubelet[2298]: E0513 23:37:14.261997 2298 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.79:6443: connect: connection refused" interval="1.6s" May 13 23:37:14.273510 kubelet[2298]: W0513 23:37:14.273010 2298 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 13 23:37:14.273510 kubelet[2298]: E0513 23:37:14.273085 2298 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.79:6443: connect: connection refused May 13 23:37:14.365370 kubelet[2298]: I0513 23:37:14.365255 2298 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 23:37:14.366040 kubelet[2298]: E0513 23:37:14.365954 2298 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.79:6443/api/v1/nodes\": dial tcp 10.0.0.79:6443: connect: connection refused" node="localhost" May 13 23:37:15.908452 kubelet[2298]: E0513 23:37:15.908402 2298 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 13 23:37:15.967189 kubelet[2298]: I0513 23:37:15.967119 2298 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 23:37:15.981209 kubelet[2298]: I0513 23:37:15.981156 2298 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 23:37:15.995867 kubelet[2298]: E0513 23:37:15.995830 2298 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:37:16.096882 kubelet[2298]: E0513 23:37:16.096837 2298 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:37:16.197450 kubelet[2298]: E0513 23:37:16.197334 2298 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:37:16.298364 kubelet[2298]: E0513 23:37:16.298318 2298 kubelet_node_status.go:462] "Error getting the current node from 
lister" err="node \"localhost\" not found" May 13 23:37:16.399413 kubelet[2298]: E0513 23:37:16.399369 2298 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:37:16.499970 kubelet[2298]: E0513 23:37:16.499859 2298 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 23:37:16.849939 kubelet[2298]: I0513 23:37:16.849826 2298 apiserver.go:52] "Watching apiserver" May 13 23:37:16.858860 kubelet[2298]: I0513 23:37:16.858750 2298 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 23:37:18.100141 systemd[1]: Reload requested from client PID 2577 ('systemctl') (unit session-7.scope)... May 13 23:37:18.100159 systemd[1]: Reloading... May 13 23:37:18.170959 zram_generator::config[2621]: No configuration found. May 13 23:37:18.266109 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:37:18.360918 systemd[1]: Reloading finished in 260 ms. May 13 23:37:18.383706 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:37:18.396274 systemd[1]: kubelet.service: Deactivated successfully. May 13 23:37:18.396503 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:37:18.396553 systemd[1]: kubelet.service: Consumed 1.110s CPU time, 115.7M memory peak. May 13 23:37:18.404210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:37:18.508509 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 13 23:37:18.513140 (kubelet)[2663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:37:18.561722 kubelet[2663]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:37:18.561722 kubelet[2663]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 23:37:18.561722 kubelet[2663]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:37:18.562117 kubelet[2663]: I0513 23:37:18.561748 2663 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:37:18.566332 kubelet[2663]: I0513 23:37:18.565950 2663 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 23:37:18.566332 kubelet[2663]: I0513 23:37:18.565979 2663 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:37:18.566332 kubelet[2663]: I0513 23:37:18.566168 2663 server.go:927] "Client rotation is on, will bootstrap in background" May 13 23:37:18.567620 kubelet[2663]: I0513 23:37:18.567594 2663 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 23:37:18.568949 kubelet[2663]: I0513 23:37:18.568914 2663 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:37:18.574116 kubelet[2663]: I0513 23:37:18.574084 2663 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 23:37:18.574288 kubelet[2663]: I0513 23:37:18.574258 2663 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:37:18.574475 kubelet[2663]: I0513 23:37:18.574289 2663 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 23:37:18.574555 kubelet[2663]: I0513 23:37:18.574484 2663 topology_manager.go:138] "Creating topology manager with none policy" May 13 
23:37:18.574555 kubelet[2663]: I0513 23:37:18.574493 2663 container_manager_linux.go:301] "Creating device plugin manager" May 13 23:37:18.574555 kubelet[2663]: I0513 23:37:18.574526 2663 state_mem.go:36] "Initialized new in-memory state store" May 13 23:37:18.574648 kubelet[2663]: I0513 23:37:18.574638 2663 kubelet.go:400] "Attempting to sync node with API server" May 13 23:37:18.574675 kubelet[2663]: I0513 23:37:18.574651 2663 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:37:18.574699 kubelet[2663]: I0513 23:37:18.574680 2663 kubelet.go:312] "Adding apiserver pod source" May 13 23:37:18.574699 kubelet[2663]: I0513 23:37:18.574695 2663 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:37:18.575782 kubelet[2663]: I0513 23:37:18.575747 2663 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 13 23:37:18.576001 kubelet[2663]: I0513 23:37:18.575961 2663 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:37:18.576590 kubelet[2663]: I0513 23:37:18.576423 2663 server.go:1264] "Started kubelet" May 13 23:37:18.580593 kubelet[2663]: I0513 23:37:18.577318 2663 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:37:18.583812 kubelet[2663]: I0513 23:37:18.581152 2663 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:37:18.583812 kubelet[2663]: I0513 23:37:18.581396 2663 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:37:18.583812 kubelet[2663]: I0513 23:37:18.582907 2663 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:37:18.588598 kubelet[2663]: I0513 23:37:18.588563 2663 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 23:37:18.589035 kubelet[2663]: I0513 23:37:18.589014 2663 
desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 23:37:18.589190 kubelet[2663]: I0513 23:37:18.589175 2663 reconciler.go:26] "Reconciler: start to sync state" May 13 23:37:18.594843 kubelet[2663]: I0513 23:37:18.594771 2663 server.go:455] "Adding debug handlers to kubelet server" May 13 23:37:18.596336 kubelet[2663]: I0513 23:37:18.596299 2663 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:37:18.600415 kubelet[2663]: E0513 23:37:18.600252 2663 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:37:18.601754 kubelet[2663]: I0513 23:37:18.601010 2663 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 23:37:18.601754 kubelet[2663]: I0513 23:37:18.601048 2663 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 23:37:18.601754 kubelet[2663]: I0513 23:37:18.601066 2663 kubelet.go:2337] "Starting kubelet main sync loop" May 13 23:37:18.601754 kubelet[2663]: E0513 23:37:18.601112 2663 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:37:18.602772 kubelet[2663]: I0513 23:37:18.602205 2663 factory.go:221] Registration of the containerd container factory successfully May 13 23:37:18.602772 kubelet[2663]: I0513 23:37:18.602228 2663 factory.go:221] Registration of the systemd container factory successfully May 13 23:37:18.602772 kubelet[2663]: I0513 23:37:18.602316 2663 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:37:18.632869 kubelet[2663]: I0513 23:37:18.632751 2663 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 23:37:18.632869 kubelet[2663]: I0513 
23:37:18.632779 2663 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 23:37:18.632869 kubelet[2663]: I0513 23:37:18.632816 2663 state_mem.go:36] "Initialized new in-memory state store" May 13 23:37:18.633016 kubelet[2663]: I0513 23:37:18.632971 2663 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 23:37:18.633016 kubelet[2663]: I0513 23:37:18.632982 2663 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 23:37:18.633192 kubelet[2663]: I0513 23:37:18.633001 2663 policy_none.go:49] "None policy: Start" May 13 23:37:18.634721 kubelet[2663]: I0513 23:37:18.634672 2663 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 23:37:18.634721 kubelet[2663]: I0513 23:37:18.634705 2663 state_mem.go:35] "Initializing new in-memory state store" May 13 23:37:18.634942 kubelet[2663]: I0513 23:37:18.634920 2663 state_mem.go:75] "Updated machine memory state" May 13 23:37:18.639218 kubelet[2663]: I0513 23:37:18.639108 2663 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:37:18.639464 kubelet[2663]: I0513 23:37:18.639324 2663 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:37:18.639464 kubelet[2663]: I0513 23:37:18.639446 2663 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:37:18.691069 kubelet[2663]: I0513 23:37:18.691037 2663 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 23:37:18.697619 kubelet[2663]: I0513 23:37:18.697437 2663 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 13 23:37:18.697619 kubelet[2663]: I0513 23:37:18.697538 2663 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 23:37:18.701675 kubelet[2663]: I0513 23:37:18.701609 2663 topology_manager.go:215] "Topology Admit Handler" podUID="801ed48cb0a6ff3eb80dcc10ad591471" podNamespace="kube-system" 
podName="kube-apiserver-localhost" May 13 23:37:18.701851 kubelet[2663]: I0513 23:37:18.701722 2663 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 23:37:18.701851 kubelet[2663]: I0513 23:37:18.701761 2663 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 23:37:18.707114 kubelet[2663]: E0513 23:37:18.707065 2663 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 13 23:37:18.707999 kubelet[2663]: E0513 23:37:18.707963 2663 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 13 23:37:18.891393 kubelet[2663]: I0513 23:37:18.891255 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:37:18.891393 kubelet[2663]: I0513 23:37:18.891321 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:37:18.891535 kubelet[2663]: I0513 23:37:18.891387 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/801ed48cb0a6ff3eb80dcc10ad591471-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"801ed48cb0a6ff3eb80dcc10ad591471\") " pod="kube-system/kube-apiserver-localhost" May 13 23:37:18.891535 kubelet[2663]: I0513 23:37:18.891448 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:37:18.891535 kubelet[2663]: I0513 23:37:18.891487 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:37:18.891535 kubelet[2663]: I0513 23:37:18.891509 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/801ed48cb0a6ff3eb80dcc10ad591471-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"801ed48cb0a6ff3eb80dcc10ad591471\") " pod="kube-system/kube-apiserver-localhost" May 13 23:37:18.891535 kubelet[2663]: I0513 23:37:18.891530 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/801ed48cb0a6ff3eb80dcc10ad591471-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"801ed48cb0a6ff3eb80dcc10ad591471\") " pod="kube-system/kube-apiserver-localhost" May 13 23:37:18.891639 kubelet[2663]: I0513 23:37:18.891547 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 23:37:18.891639 kubelet[2663]: I0513 23:37:18.891586 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 23:37:19.105142 sudo[2695]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 13 23:37:19.108609 sudo[2695]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 13 23:37:19.558032 sudo[2695]: pam_unix(sudo:session): session closed for user root May 13 23:37:19.575611 kubelet[2663]: I0513 23:37:19.575328 2663 apiserver.go:52] "Watching apiserver" May 13 23:37:19.589327 kubelet[2663]: I0513 23:37:19.589286 2663 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 23:37:19.650006 kubelet[2663]: I0513 23:37:19.649404 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.649385755 podStartE2EDuration="2.649385755s" podCreationTimestamp="2025-05-13 23:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:37:19.640653044 +0000 UTC m=+1.124230203" watchObservedRunningTime="2025-05-13 23:37:19.649385755 +0000 UTC m=+1.132962914" May 13 23:37:19.664428 kubelet[2663]: I0513 23:37:19.663779 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.663761859 podStartE2EDuration="1.663761859s" 
podCreationTimestamp="2025-05-13 23:37:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:37:19.650054577 +0000 UTC m=+1.133631737" watchObservedRunningTime="2025-05-13 23:37:19.663761859 +0000 UTC m=+1.147339018" May 13 23:37:19.664428 kubelet[2663]: I0513 23:37:19.663907 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.663901882 podStartE2EDuration="2.663901882s" podCreationTimestamp="2025-05-13 23:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:37:19.663394253 +0000 UTC m=+1.146971892" watchObservedRunningTime="2025-05-13 23:37:19.663901882 +0000 UTC m=+1.147479041" May 13 23:37:21.329020 sudo[1669]: pam_unix(sudo:session): session closed for user root May 13 23:37:21.330900 sshd[1668]: Connection closed by 10.0.0.1 port 35478 May 13 23:37:21.330919 sshd-session[1664]: pam_unix(sshd:session): session closed for user core May 13 23:37:21.334180 systemd[1]: sshd@6-10.0.0.79:22-10.0.0.1:35478.service: Deactivated successfully. May 13 23:37:21.336215 systemd[1]: session-7.scope: Deactivated successfully. May 13 23:37:21.336393 systemd[1]: session-7.scope: Consumed 7.713s CPU time, 289.9M memory peak. May 13 23:37:21.341091 systemd-logind[1466]: Session 7 logged out. Waiting for processes to exit. May 13 23:37:21.342416 systemd-logind[1466]: Removed session 7. May 13 23:37:32.069289 update_engine[1468]: I20250513 23:37:32.069213 1468 update_attempter.cc:509] Updating boot flags... 
May 13 23:37:32.104852 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2745) May 13 23:37:32.145833 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2748) May 13 23:37:33.653289 kubelet[2663]: I0513 23:37:33.653249 2663 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 23:37:33.661754 containerd[1477]: time="2025-05-13T23:37:33.661713683Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 23:37:33.662045 kubelet[2663]: I0513 23:37:33.661896 2663 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 23:37:34.664361 kubelet[2663]: I0513 23:37:34.664302 2663 topology_manager.go:215] "Topology Admit Handler" podUID="e6b9e362-67bc-4b3a-a844-6714e8f7a42f" podNamespace="kube-system" podName="kube-proxy-9vklp" May 13 23:37:34.681499 systemd[1]: Created slice kubepods-besteffort-pode6b9e362_67bc_4b3a_a844_6714e8f7a42f.slice - libcontainer container kubepods-besteffort-pode6b9e362_67bc_4b3a_a844_6714e8f7a42f.slice. 
May 13 23:37:34.683959 kubelet[2663]: I0513 23:37:34.683909 2663 topology_manager.go:215] "Topology Admit Handler" podUID="71801b2a-dfcb-4f3f-b674-faa86afb2f51" podNamespace="kube-system" podName="cilium-djxsf" May 13 23:37:34.696618 kubelet[2663]: I0513 23:37:34.695983 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-cni-path\") pod \"cilium-djxsf\" (UID: \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\") " pod="kube-system/cilium-djxsf" May 13 23:37:34.696618 kubelet[2663]: I0513 23:37:34.696022 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6b9e362-67bc-4b3a-a844-6714e8f7a42f-lib-modules\") pod \"kube-proxy-9vklp\" (UID: \"e6b9e362-67bc-4b3a-a844-6714e8f7a42f\") " pod="kube-system/kube-proxy-9vklp" May 13 23:37:34.696618 kubelet[2663]: I0513 23:37:34.696040 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-bpf-maps\") pod \"cilium-djxsf\" (UID: \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\") " pod="kube-system/cilium-djxsf" May 13 23:37:34.696618 kubelet[2663]: I0513 23:37:34.696057 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-host-proc-sys-kernel\") pod \"cilium-djxsf\" (UID: \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\") " pod="kube-system/cilium-djxsf" May 13 23:37:34.696618 kubelet[2663]: I0513 23:37:34.696074 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/71801b2a-dfcb-4f3f-b674-faa86afb2f51-hubble-tls\") pod \"cilium-djxsf\" (UID: 
\"71801b2a-dfcb-4f3f-b674-faa86afb2f51\") " pod="kube-system/cilium-djxsf" May 13 23:37:34.696618 kubelet[2663]: I0513 23:37:34.696099 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-etc-cni-netd\") pod \"cilium-djxsf\" (UID: \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\") " pod="kube-system/cilium-djxsf" May 13 23:37:34.697096 kubelet[2663]: I0513 23:37:34.696115 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/71801b2a-dfcb-4f3f-b674-faa86afb2f51-clustermesh-secrets\") pod \"cilium-djxsf\" (UID: \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\") " pod="kube-system/cilium-djxsf" May 13 23:37:34.697096 kubelet[2663]: I0513 23:37:34.696132 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e6b9e362-67bc-4b3a-a844-6714e8f7a42f-kube-proxy\") pod \"kube-proxy-9vklp\" (UID: \"e6b9e362-67bc-4b3a-a844-6714e8f7a42f\") " pod="kube-system/kube-proxy-9vklp" May 13 23:37:34.697096 kubelet[2663]: I0513 23:37:34.696148 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-cilium-run\") pod \"cilium-djxsf\" (UID: \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\") " pod="kube-system/cilium-djxsf" May 13 23:37:34.697096 kubelet[2663]: I0513 23:37:34.696163 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-host-proc-sys-net\") pod \"cilium-djxsf\" (UID: \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\") " pod="kube-system/cilium-djxsf" May 13 23:37:34.697096 kubelet[2663]: I0513 
23:37:34.696181 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gktsf\" (UniqueName: \"kubernetes.io/projected/e6b9e362-67bc-4b3a-a844-6714e8f7a42f-kube-api-access-gktsf\") pod \"kube-proxy-9vklp\" (UID: \"e6b9e362-67bc-4b3a-a844-6714e8f7a42f\") " pod="kube-system/kube-proxy-9vklp" May 13 23:37:34.697096 kubelet[2663]: I0513 23:37:34.696196 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-hostproc\") pod \"cilium-djxsf\" (UID: \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\") " pod="kube-system/cilium-djxsf" May 13 23:37:34.697223 kubelet[2663]: I0513 23:37:34.696213 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-cilium-cgroup\") pod \"cilium-djxsf\" (UID: \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\") " pod="kube-system/cilium-djxsf" May 13 23:37:34.697223 kubelet[2663]: I0513 23:37:34.696229 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-lib-modules\") pod \"cilium-djxsf\" (UID: \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\") " pod="kube-system/cilium-djxsf" May 13 23:37:34.697705 kubelet[2663]: I0513 23:37:34.696243 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-xtables-lock\") pod \"cilium-djxsf\" (UID: \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\") " pod="kube-system/cilium-djxsf" May 13 23:37:34.697705 kubelet[2663]: I0513 23:37:34.697597 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/71801b2a-dfcb-4f3f-b674-faa86afb2f51-cilium-config-path\") pod \"cilium-djxsf\" (UID: \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\") " pod="kube-system/cilium-djxsf" May 13 23:37:34.698091 kubelet[2663]: I0513 23:37:34.697979 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xlxj\" (UniqueName: \"kubernetes.io/projected/71801b2a-dfcb-4f3f-b674-faa86afb2f51-kube-api-access-5xlxj\") pod \"cilium-djxsf\" (UID: \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\") " pod="kube-system/cilium-djxsf" May 13 23:37:34.698917 kubelet[2663]: I0513 23:37:34.698876 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6b9e362-67bc-4b3a-a844-6714e8f7a42f-xtables-lock\") pod \"kube-proxy-9vklp\" (UID: \"e6b9e362-67bc-4b3a-a844-6714e8f7a42f\") " pod="kube-system/kube-proxy-9vklp" May 13 23:37:34.704126 systemd[1]: Created slice kubepods-burstable-pod71801b2a_dfcb_4f3f_b674_faa86afb2f51.slice - libcontainer container kubepods-burstable-pod71801b2a_dfcb_4f3f_b674_faa86afb2f51.slice. May 13 23:37:34.869326 kubelet[2663]: I0513 23:37:34.869161 2663 topology_manager.go:215] "Topology Admit Handler" podUID="b2c3947a-49a2-445d-8ed7-8eafae13a043" podNamespace="kube-system" podName="cilium-operator-599987898-fh8v4" May 13 23:37:34.874223 systemd[1]: Created slice kubepods-besteffort-podb2c3947a_49a2_445d_8ed7_8eafae13a043.slice - libcontainer container kubepods-besteffort-podb2c3947a_49a2_445d_8ed7_8eafae13a043.slice. 
May 13 23:37:34.901014 kubelet[2663]: I0513 23:37:34.900976 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2c3947a-49a2-445d-8ed7-8eafae13a043-cilium-config-path\") pod \"cilium-operator-599987898-fh8v4\" (UID: \"b2c3947a-49a2-445d-8ed7-8eafae13a043\") " pod="kube-system/cilium-operator-599987898-fh8v4" May 13 23:37:34.901150 kubelet[2663]: I0513 23:37:34.901032 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55bl4\" (UniqueName: \"kubernetes.io/projected/b2c3947a-49a2-445d-8ed7-8eafae13a043-kube-api-access-55bl4\") pod \"cilium-operator-599987898-fh8v4\" (UID: \"b2c3947a-49a2-445d-8ed7-8eafae13a043\") " pod="kube-system/cilium-operator-599987898-fh8v4" May 13 23:37:35.005834 containerd[1477]: time="2025-05-13T23:37:35.005090571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9vklp,Uid:e6b9e362-67bc-4b3a-a844-6714e8f7a42f,Namespace:kube-system,Attempt:0,}" May 13 23:37:35.008035 containerd[1477]: time="2025-05-13T23:37:35.007848065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-djxsf,Uid:71801b2a-dfcb-4f3f-b674-faa86afb2f51,Namespace:kube-system,Attempt:0,}" May 13 23:37:35.111936 containerd[1477]: time="2025-05-13T23:37:35.110764661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 23:37:35.111936 containerd[1477]: time="2025-05-13T23:37:35.110894246Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 23:37:35.111936 containerd[1477]: time="2025-05-13T23:37:35.110920931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:37:35.111936 containerd[1477]: time="2025-05-13T23:37:35.111371458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:37:35.114146 containerd[1477]: time="2025-05-13T23:37:35.114036814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 23:37:35.114146 containerd[1477]: time="2025-05-13T23:37:35.114092345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 23:37:35.114146 containerd[1477]: time="2025-05-13T23:37:35.114104267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:37:35.114399 containerd[1477]: time="2025-05-13T23:37:35.114180362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:37:35.141032 systemd[1]: Started cri-containerd-89e81eebbafa7bad5142e01017babd81b938c7be428b596b495f36722c0a10cb.scope - libcontainer container 89e81eebbafa7bad5142e01017babd81b938c7be428b596b495f36722c0a10cb. May 13 23:37:35.144475 systemd[1]: Started cri-containerd-0b98910af45450a110aa2e12c148c3c98e84915cd950d6c6d46e3fc8e81ab6cb.scope - libcontainer container 0b98910af45450a110aa2e12c148c3c98e84915cd950d6c6d46e3fc8e81ab6cb. 
May 13 23:37:35.173363 containerd[1477]: time="2025-05-13T23:37:35.173313125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9vklp,Uid:e6b9e362-67bc-4b3a-a844-6714e8f7a42f,Namespace:kube-system,Attempt:0,} returns sandbox id \"89e81eebbafa7bad5142e01017babd81b938c7be428b596b495f36722c0a10cb\"" May 13 23:37:35.177400 containerd[1477]: time="2025-05-13T23:37:35.177088895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-fh8v4,Uid:b2c3947a-49a2-445d-8ed7-8eafae13a043,Namespace:kube-system,Attempt:0,}" May 13 23:37:35.178485 containerd[1477]: time="2025-05-13T23:37:35.178445798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-djxsf,Uid:71801b2a-dfcb-4f3f-b674-faa86afb2f51,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b98910af45450a110aa2e12c148c3c98e84915cd950d6c6d46e3fc8e81ab6cb\"" May 13 23:37:35.181332 containerd[1477]: time="2025-05-13T23:37:35.181196850Z" level=info msg="CreateContainer within sandbox \"89e81eebbafa7bad5142e01017babd81b938c7be428b596b495f36722c0a10cb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 23:37:35.181884 containerd[1477]: time="2025-05-13T23:37:35.181778523Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 23:37:35.209404 containerd[1477]: time="2025-05-13T23:37:35.209105811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 23:37:35.209404 containerd[1477]: time="2025-05-13T23:37:35.209167463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 23:37:35.209404 containerd[1477]: time="2025-05-13T23:37:35.209183066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:37:35.209404 containerd[1477]: time="2025-05-13T23:37:35.209276324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:37:35.220250 containerd[1477]: time="2025-05-13T23:37:35.220130345Z" level=info msg="CreateContainer within sandbox \"89e81eebbafa7bad5142e01017babd81b938c7be428b596b495f36722c0a10cb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"91dbe76130cf2def85d01373953bac645593334ad1d0ae013a5c6c2a2488f9da\"" May 13 23:37:35.225259 containerd[1477]: time="2025-05-13T23:37:35.225211328Z" level=info msg="StartContainer for \"91dbe76130cf2def85d01373953bac645593334ad1d0ae013a5c6c2a2488f9da\"" May 13 23:37:35.227210 systemd[1]: Started cri-containerd-e62d9b91f44482180ee66de3b435c61d805654c128444ea0e9d35e2b95a12741.scope - libcontainer container e62d9b91f44482180ee66de3b435c61d805654c128444ea0e9d35e2b95a12741. May 13 23:37:35.257053 systemd[1]: Started cri-containerd-91dbe76130cf2def85d01373953bac645593334ad1d0ae013a5c6c2a2488f9da.scope - libcontainer container 91dbe76130cf2def85d01373953bac645593334ad1d0ae013a5c6c2a2488f9da. May 13 23:37:35.261290 containerd[1477]: time="2025-05-13T23:37:35.261232979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-fh8v4,Uid:b2c3947a-49a2-445d-8ed7-8eafae13a043,Namespace:kube-system,Attempt:0,} returns sandbox id \"e62d9b91f44482180ee66de3b435c61d805654c128444ea0e9d35e2b95a12741\"" May 13 23:37:35.288566 containerd[1477]: time="2025-05-13T23:37:35.288515378Z" level=info msg="StartContainer for \"91dbe76130cf2def85d01373953bac645593334ad1d0ae013a5c6c2a2488f9da\" returns successfully" May 13 23:37:38.615091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount605760558.mount: Deactivated successfully. 
May 13 23:37:41.063572 containerd[1477]: time="2025-05-13T23:37:41.063517449Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:37:41.063989 containerd[1477]: time="2025-05-13T23:37:41.063926389Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
May 13 23:37:41.064746 containerd[1477]: time="2025-05-13T23:37:41.064714785Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:37:41.066305 containerd[1477]: time="2025-05-13T23:37:41.066279576Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.884427119s"
May 13 23:37:41.066362 containerd[1477]: time="2025-05-13T23:37:41.066311101Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
May 13 23:37:41.069709 containerd[1477]: time="2025-05-13T23:37:41.068855116Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 13 23:37:41.070607 containerd[1477]: time="2025-05-13T23:37:41.070578851Z" level=info msg="CreateContainer within sandbox \"0b98910af45450a110aa2e12c148c3c98e84915cd950d6c6d46e3fc8e81ab6cb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 13 23:37:41.100085 containerd[1477]: time="2025-05-13T23:37:41.099232478Z" level=info msg="CreateContainer within sandbox \"0b98910af45450a110aa2e12c148c3c98e84915cd950d6c6d46e3fc8e81ab6cb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"19525099a8b250ec0eba71d46d1c5919f06d5cd2a227ecc8c38cb04c51152c82\""
May 13 23:37:41.099506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3129109290.mount: Deactivated successfully.
May 13 23:37:41.100844 containerd[1477]: time="2025-05-13T23:37:41.100814991Z" level=info msg="StartContainer for \"19525099a8b250ec0eba71d46d1c5919f06d5cd2a227ecc8c38cb04c51152c82\""
May 13 23:37:41.132011 systemd[1]: Started cri-containerd-19525099a8b250ec0eba71d46d1c5919f06d5cd2a227ecc8c38cb04c51152c82.scope - libcontainer container 19525099a8b250ec0eba71d46d1c5919f06d5cd2a227ecc8c38cb04c51152c82.
May 13 23:37:41.160225 containerd[1477]: time="2025-05-13T23:37:41.160067773Z" level=info msg="StartContainer for \"19525099a8b250ec0eba71d46d1c5919f06d5cd2a227ecc8c38cb04c51152c82\" returns successfully"
May 13 23:37:41.191797 systemd[1]: cri-containerd-19525099a8b250ec0eba71d46d1c5919f06d5cd2a227ecc8c38cb04c51152c82.scope: Deactivated successfully.
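The log reports the cilium image pull completing "in 5.884427119s". As a rough cross-check (not containerd's own internal accounting), a similar interval can be recovered from the `time=` fields of the PullImage and Pulled records above; this is a minimal sketch, with the nanosecond timestamps copied from the log and truncated to microseconds because Python's `datetime` carries no finer resolution.

```python
from datetime import datetime

def parse_ts(ts: str) -> datetime:
    # Containerd logs RFC 3339 timestamps with nanoseconds, e.g.
    # '2025-05-13T23:37:35.181778523Z'; keep only six fractional digits.
    head, frac = ts.rstrip("Z").split(".")
    return datetime.fromisoformat(f"{head}.{frac[:6]}")

start = parse_ts("2025-05-13T23:37:35.181778523Z")  # PullImage record above
done = parse_ts("2025-05-13T23:37:41.066279576Z")   # Pulled record above
elapsed = (done - start).total_seconds()
print(f"{elapsed:.6f}s")
```

The value differs from containerd's reported 5.884427119s by well under a millisecond, since containerd times the pull internally rather than from these two log lines.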
May 13 23:37:41.249644 containerd[1477]: time="2025-05-13T23:37:41.240024569Z" level=info msg="shim disconnected" id=19525099a8b250ec0eba71d46d1c5919f06d5cd2a227ecc8c38cb04c51152c82 namespace=k8s.io
May 13 23:37:41.249644 containerd[1477]: time="2025-05-13T23:37:41.249533332Z" level=warning msg="cleaning up after shim disconnected" id=19525099a8b250ec0eba71d46d1c5919f06d5cd2a227ecc8c38cb04c51152c82 namespace=k8s.io
May 13 23:37:41.249644 containerd[1477]: time="2025-05-13T23:37:41.249550615Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 23:37:41.669488 containerd[1477]: time="2025-05-13T23:37:41.669433002Z" level=info msg="CreateContainer within sandbox \"0b98910af45450a110aa2e12c148c3c98e84915cd950d6c6d46e3fc8e81ab6cb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 13 23:37:41.693549 containerd[1477]: time="2025-05-13T23:37:41.693501153Z" level=info msg="CreateContainer within sandbox \"0b98910af45450a110aa2e12c148c3c98e84915cd950d6c6d46e3fc8e81ab6cb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5f0c19be011b51ce1b983a0e9f2e6c5489a31c00223f28699977a1090f41c91f\""
May 13 23:37:41.694225 containerd[1477]: time="2025-05-13T23:37:41.694199055Z" level=info msg="StartContainer for \"5f0c19be011b51ce1b983a0e9f2e6c5489a31c00223f28699977a1090f41c91f\""
May 13 23:37:41.702293 kubelet[2663]: I0513 23:37:41.702218 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9vklp" podStartSLOduration=7.702203596 podStartE2EDuration="7.702203596s" podCreationTimestamp="2025-05-13 23:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:37:35.663494942 +0000 UTC m=+17.147072101" watchObservedRunningTime="2025-05-13 23:37:41.702203596 +0000 UTC m=+23.185780755"
May 13 23:37:41.736042 systemd[1]: Started cri-containerd-5f0c19be011b51ce1b983a0e9f2e6c5489a31c00223f28699977a1090f41c91f.scope - libcontainer container 5f0c19be011b51ce1b983a0e9f2e6c5489a31c00223f28699977a1090f41c91f.
May 13 23:37:41.758777 containerd[1477]: time="2025-05-13T23:37:41.758610558Z" level=info msg="StartContainer for \"5f0c19be011b51ce1b983a0e9f2e6c5489a31c00223f28699977a1090f41c91f\" returns successfully"
May 13 23:37:41.778539 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 23:37:41.779028 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 23:37:41.779193 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 13 23:37:41.786218 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:37:41.786420 systemd[1]: cri-containerd-5f0c19be011b51ce1b983a0e9f2e6c5489a31c00223f28699977a1090f41c91f.scope: Deactivated successfully.
May 13 23:37:41.810630 containerd[1477]: time="2025-05-13T23:37:41.810550941Z" level=info msg="shim disconnected" id=5f0c19be011b51ce1b983a0e9f2e6c5489a31c00223f28699977a1090f41c91f namespace=k8s.io
May 13 23:37:41.810630 containerd[1477]: time="2025-05-13T23:37:41.810623712Z" level=warning msg="cleaning up after shim disconnected" id=5f0c19be011b51ce1b983a0e9f2e6c5489a31c00223f28699977a1090f41c91f namespace=k8s.io
May 13 23:37:41.810630 containerd[1477]: time="2025-05-13T23:37:41.810632353Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 23:37:41.827019 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:37:42.097568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19525099a8b250ec0eba71d46d1c5919f06d5cd2a227ecc8c38cb04c51152c82-rootfs.mount: Deactivated successfully.
May 13 23:37:42.325503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4161061634.mount: Deactivated successfully.
May 13 23:37:42.591246 containerd[1477]: time="2025-05-13T23:37:42.591201766Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:37:42.592451 containerd[1477]: time="2025-05-13T23:37:42.592380572Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
May 13 23:37:42.593332 containerd[1477]: time="2025-05-13T23:37:42.593302743Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:37:42.594681 containerd[1477]: time="2025-05-13T23:37:42.594643692Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.52506727s"
May 13 23:37:42.594681 containerd[1477]: time="2025-05-13T23:37:42.594681458Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
May 13 23:37:42.598008 containerd[1477]: time="2025-05-13T23:37:42.597969243Z" level=info msg="CreateContainer within sandbox \"e62d9b91f44482180ee66de3b435c61d805654c128444ea0e9d35e2b95a12741\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 13 23:37:42.611700 containerd[1477]: time="2025-05-13T23:37:42.611648458Z" level=info msg="CreateContainer within sandbox \"e62d9b91f44482180ee66de3b435c61d805654c128444ea0e9d35e2b95a12741\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c212a5820a96d18a8830866dd2e7914c29ebf2f99e2298c05ffbee40ef81f5c3\""
May 13 23:37:42.612387 containerd[1477]: time="2025-05-13T23:37:42.612355398Z" level=info msg="StartContainer for \"c212a5820a96d18a8830866dd2e7914c29ebf2f99e2298c05ffbee40ef81f5c3\""
May 13 23:37:42.637997 systemd[1]: Started cri-containerd-c212a5820a96d18a8830866dd2e7914c29ebf2f99e2298c05ffbee40ef81f5c3.scope - libcontainer container c212a5820a96d18a8830866dd2e7914c29ebf2f99e2298c05ffbee40ef81f5c3.
May 13 23:37:42.704079 containerd[1477]: time="2025-05-13T23:37:42.704024765Z" level=info msg="StartContainer for \"c212a5820a96d18a8830866dd2e7914c29ebf2f99e2298c05ffbee40ef81f5c3\" returns successfully"
May 13 23:37:42.721743 containerd[1477]: time="2025-05-13T23:37:42.721692824Z" level=info msg="CreateContainer within sandbox \"0b98910af45450a110aa2e12c148c3c98e84915cd950d6c6d46e3fc8e81ab6cb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 23:37:42.735045 kubelet[2663]: I0513 23:37:42.734989 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-fh8v4" podStartSLOduration=1.402703772 podStartE2EDuration="8.734974623s" podCreationTimestamp="2025-05-13 23:37:34 +0000 UTC" firstStartedPulling="2025-05-13 23:37:35.263194758 +0000 UTC m=+16.746771917" lastFinishedPulling="2025-05-13 23:37:42.595465649 +0000 UTC m=+24.079042768" observedRunningTime="2025-05-13 23:37:42.71692515 +0000 UTC m=+24.200502309" watchObservedRunningTime="2025-05-13 23:37:42.734974623 +0000 UTC m=+24.218551782"
May 13 23:37:42.737576 containerd[1477]: time="2025-05-13T23:37:42.737537906Z" level=info msg="CreateContainer within sandbox \"0b98910af45450a110aa2e12c148c3c98e84915cd950d6c6d46e3fc8e81ab6cb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9ccb14724afd3dcc6225f28a3b379951c8cfbdb554137ce1ffc7058ec841dde5\""
May 13 23:37:42.738965 containerd[1477]: time="2025-05-13T23:37:42.738933343Z" level=info msg="StartContainer for \"9ccb14724afd3dcc6225f28a3b379951c8cfbdb554137ce1ffc7058ec841dde5\""
May 13 23:37:42.776014 systemd[1]: Started cri-containerd-9ccb14724afd3dcc6225f28a3b379951c8cfbdb554137ce1ffc7058ec841dde5.scope - libcontainer container 9ccb14724afd3dcc6225f28a3b379951c8cfbdb554137ce1ffc7058ec841dde5.
May 13 23:37:42.834302 containerd[1477]: time="2025-05-13T23:37:42.834255987Z" level=info msg="StartContainer for \"9ccb14724afd3dcc6225f28a3b379951c8cfbdb554137ce1ffc7058ec841dde5\" returns successfully"
May 13 23:37:42.849679 systemd[1]: cri-containerd-9ccb14724afd3dcc6225f28a3b379951c8cfbdb554137ce1ffc7058ec841dde5.scope: Deactivated successfully.
May 13 23:37:42.892849 containerd[1477]: time="2025-05-13T23:37:42.892768304Z" level=info msg="shim disconnected" id=9ccb14724afd3dcc6225f28a3b379951c8cfbdb554137ce1ffc7058ec841dde5 namespace=k8s.io
May 13 23:37:42.892849 containerd[1477]: time="2025-05-13T23:37:42.892842835Z" level=warning msg="cleaning up after shim disconnected" id=9ccb14724afd3dcc6225f28a3b379951c8cfbdb554137ce1ffc7058ec841dde5 namespace=k8s.io
May 13 23:37:42.892849 containerd[1477]: time="2025-05-13T23:37:42.892854476Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 23:37:43.713746 containerd[1477]: time="2025-05-13T23:37:43.713571788Z" level=info msg="CreateContainer within sandbox \"0b98910af45450a110aa2e12c148c3c98e84915cd950d6c6d46e3fc8e81ab6cb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 23:37:43.774570 containerd[1477]: time="2025-05-13T23:37:43.774523543Z" level=info msg="CreateContainer within sandbox \"0b98910af45450a110aa2e12c148c3c98e84915cd950d6c6d46e3fc8e81ab6cb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9d833f6dfa4b241f4d1749ec182d1953c1c42a8883ebb0cbadb0fb1dd93b9f15\""
May 13 23:37:43.775010 containerd[1477]: time="2025-05-13T23:37:43.774991086Z" level=info msg="StartContainer for \"9d833f6dfa4b241f4d1749ec182d1953c1c42a8883ebb0cbadb0fb1dd93b9f15\""
May 13 23:37:43.804958 systemd[1]: Started cri-containerd-9d833f6dfa4b241f4d1749ec182d1953c1c42a8883ebb0cbadb0fb1dd93b9f15.scope - libcontainer container 9d833f6dfa4b241f4d1749ec182d1953c1c42a8883ebb0cbadb0fb1dd93b9f15.
May 13 23:37:43.824124 systemd[1]: cri-containerd-9d833f6dfa4b241f4d1749ec182d1953c1c42a8883ebb0cbadb0fb1dd93b9f15.scope: Deactivated successfully.
May 13 23:37:43.827333 containerd[1477]: time="2025-05-13T23:37:43.827294667Z" level=info msg="StartContainer for \"9d833f6dfa4b241f4d1749ec182d1953c1c42a8883ebb0cbadb0fb1dd93b9f15\" returns successfully"
May 13 23:37:43.847407 containerd[1477]: time="2025-05-13T23:37:43.847349910Z" level=info msg="shim disconnected" id=9d833f6dfa4b241f4d1749ec182d1953c1c42a8883ebb0cbadb0fb1dd93b9f15 namespace=k8s.io
May 13 23:37:43.847832 containerd[1477]: time="2025-05-13T23:37:43.847660672Z" level=warning msg="cleaning up after shim disconnected" id=9d833f6dfa4b241f4d1749ec182d1953c1c42a8883ebb0cbadb0fb1dd93b9f15 namespace=k8s.io
May 13 23:37:43.847832 containerd[1477]: time="2025-05-13T23:37:43.847679554Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 23:37:44.097608 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d833f6dfa4b241f4d1749ec182d1953c1c42a8883ebb0cbadb0fb1dd93b9f15-rootfs.mount: Deactivated successfully.
May 13 23:37:44.722306 containerd[1477]: time="2025-05-13T23:37:44.722186223Z" level=info msg="CreateContainer within sandbox \"0b98910af45450a110aa2e12c148c3c98e84915cd950d6c6d46e3fc8e81ab6cb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 23:37:44.740670 containerd[1477]: time="2025-05-13T23:37:44.740598824Z" level=info msg="CreateContainer within sandbox \"0b98910af45450a110aa2e12c148c3c98e84915cd950d6c6d46e3fc8e81ab6cb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bbb46da4fadb8b89fd42dda70b6fcf6a987c29146d07f7f84cb94e1186d0686b\""
May 13 23:37:44.742180 containerd[1477]: time="2025-05-13T23:37:44.741307277Z" level=info msg="StartContainer for \"bbb46da4fadb8b89fd42dda70b6fcf6a987c29146d07f7f84cb94e1186d0686b\""
May 13 23:37:44.778358 systemd[1]: Started cri-containerd-bbb46da4fadb8b89fd42dda70b6fcf6a987c29146d07f7f84cb94e1186d0686b.scope - libcontainer container bbb46da4fadb8b89fd42dda70b6fcf6a987c29146d07f7f84cb94e1186d0686b.
May 13 23:37:44.807869 containerd[1477]: time="2025-05-13T23:37:44.806003434Z" level=info msg="StartContainer for \"bbb46da4fadb8b89fd42dda70b6fcf6a987c29146d07f7f84cb94e1186d0686b\" returns successfully"
May 13 23:37:45.000997 kubelet[2663]: I0513 23:37:45.000915 2663 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 13 23:37:45.025076 kubelet[2663]: I0513 23:37:45.025024 2663 topology_manager.go:215] "Topology Admit Handler" podUID="d05d2301-6689-4f9a-a424-5d7992fbe3dc" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hgpcc"
May 13 23:37:45.033710 kubelet[2663]: I0513 23:37:45.033129 2663 topology_manager.go:215] "Topology Admit Handler" podUID="378779b0-9a81-4e11-8cf3-b243afedec34" podNamespace="kube-system" podName="coredns-7db6d8ff4d-mn8n6"
May 13 23:37:45.038678 systemd[1]: Created slice kubepods-burstable-podd05d2301_6689_4f9a_a424_5d7992fbe3dc.slice - libcontainer container kubepods-burstable-podd05d2301_6689_4f9a_a424_5d7992fbe3dc.slice.
May 13 23:37:45.046220 systemd[1]: Created slice kubepods-burstable-pod378779b0_9a81_4e11_8cf3_b243afedec34.slice - libcontainer container kubepods-burstable-pod378779b0_9a81_4e11_8cf3_b243afedec34.slice.
May 13 23:37:45.097665 systemd[1]: run-containerd-runc-k8s.io-bbb46da4fadb8b89fd42dda70b6fcf6a987c29146d07f7f84cb94e1186d0686b-runc.1WMNhe.mount: Deactivated successfully.
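The CreateContainer records around this point show cilium's containers being created in sandbox 0b98910a… in a fixed order: the init containers mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, and clean-cilium-state, then the long-running cilium-agent, after which the node reports ready. A minimal sketch of recovering that order from the log (the excerpt below copies just the `&ContainerMetadata` fragments from the records above, in log order):

```python
import re

# Excerpted ContainerMetadata fragments from the CreateContainer records
# above, in the order they appear in the log.
log_excerpt = """
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}
&ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}
&ContainerMetadata{Name:clean-cilium-state,Attempt:0,}
&ContainerMetadata{Name:cilium-agent,Attempt:0,}
"""

# Capture each container name; order in the log reflects execution order,
# since each init container must exit before the next is created.
names = re.findall(r"ContainerMetadata\{Name:([^,]+),", log_excerpt)
print(names)
```

The shim-disconnected messages after each of the first four are expected: init containers run to completion and their shims exit before the next container starts.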
May 13 23:37:45.176617 kubelet[2663]: I0513 23:37:45.176574 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d05d2301-6689-4f9a-a424-5d7992fbe3dc-config-volume\") pod \"coredns-7db6d8ff4d-hgpcc\" (UID: \"d05d2301-6689-4f9a-a424-5d7992fbe3dc\") " pod="kube-system/coredns-7db6d8ff4d-hgpcc"
May 13 23:37:45.176763 kubelet[2663]: I0513 23:37:45.176641 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kqjf\" (UniqueName: \"kubernetes.io/projected/378779b0-9a81-4e11-8cf3-b243afedec34-kube-api-access-9kqjf\") pod \"coredns-7db6d8ff4d-mn8n6\" (UID: \"378779b0-9a81-4e11-8cf3-b243afedec34\") " pod="kube-system/coredns-7db6d8ff4d-mn8n6"
May 13 23:37:45.176763 kubelet[2663]: I0513 23:37:45.176665 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zch9h\" (UniqueName: \"kubernetes.io/projected/d05d2301-6689-4f9a-a424-5d7992fbe3dc-kube-api-access-zch9h\") pod \"coredns-7db6d8ff4d-hgpcc\" (UID: \"d05d2301-6689-4f9a-a424-5d7992fbe3dc\") " pod="kube-system/coredns-7db6d8ff4d-hgpcc"
May 13 23:37:45.176763 kubelet[2663]: I0513 23:37:45.176683 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/378779b0-9a81-4e11-8cf3-b243afedec34-config-volume\") pod \"coredns-7db6d8ff4d-mn8n6\" (UID: \"378779b0-9a81-4e11-8cf3-b243afedec34\") " pod="kube-system/coredns-7db6d8ff4d-mn8n6"
May 13 23:37:45.345903 containerd[1477]: time="2025-05-13T23:37:45.345406577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hgpcc,Uid:d05d2301-6689-4f9a-a424-5d7992fbe3dc,Namespace:kube-system,Attempt:0,}"
May 13 23:37:45.352568 containerd[1477]: time="2025-05-13T23:37:45.352520229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mn8n6,Uid:378779b0-9a81-4e11-8cf3-b243afedec34,Namespace:kube-system,Attempt:0,}"
May 13 23:37:45.755199 kubelet[2663]: I0513 23:37:45.755139 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-djxsf" podStartSLOduration=5.867107051 podStartE2EDuration="11.755124039s" podCreationTimestamp="2025-05-13 23:37:34 +0000 UTC" firstStartedPulling="2025-05-13 23:37:35.180565688 +0000 UTC m=+16.664142847" lastFinishedPulling="2025-05-13 23:37:41.068582676 +0000 UTC m=+22.552159835" observedRunningTime="2025-05-13 23:37:45.749727603 +0000 UTC m=+27.233304762" watchObservedRunningTime="2025-05-13 23:37:45.755124039 +0000 UTC m=+27.238701158"
May 13 23:37:46.081844 systemd[1]: Started sshd@7-10.0.0.79:22-10.0.0.1:60860.service - OpenSSH per-connection server daemon (10.0.0.1:60860).
May 13 23:37:46.140023 sshd[3515]: Accepted publickey for core from 10.0.0.1 port 60860 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw
May 13 23:37:46.141436 sshd-session[3515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:37:46.145712 systemd-logind[1466]: New session 8 of user core.
May 13 23:37:46.162000 systemd[1]: Started session-8.scope - Session 8 of User core.
May 13 23:37:46.296938 sshd[3517]: Connection closed by 10.0.0.1 port 60860
May 13 23:37:46.297930 sshd-session[3515]: pam_unix(sshd:session): session closed for user core
May 13 23:37:46.310130 systemd[1]: sshd@7-10.0.0.79:22-10.0.0.1:60860.service: Deactivated successfully.
May 13 23:37:46.313302 systemd[1]: session-8.scope: Deactivated successfully.
May 13 23:37:46.314014 systemd-logind[1466]: Session 8 logged out. Waiting for processes to exit.
May 13 23:37:46.314984 systemd-logind[1466]: Removed session 8.
May 13 23:37:47.149969 systemd-networkd[1400]: cilium_host: Link UP
May 13 23:37:47.150151 systemd-networkd[1400]: cilium_net: Link UP
May 13 23:37:47.150154 systemd-networkd[1400]: cilium_net: Gained carrier
May 13 23:37:47.150425 systemd-networkd[1400]: cilium_host: Gained carrier
May 13 23:37:47.261138 systemd-networkd[1400]: cilium_vxlan: Link UP
May 13 23:37:47.261146 systemd-networkd[1400]: cilium_vxlan: Gained carrier
May 13 23:37:47.617845 kernel: NET: Registered PF_ALG protocol family
May 13 23:37:47.625952 systemd-networkd[1400]: cilium_host: Gained IPv6LL
May 13 23:37:48.144943 systemd-networkd[1400]: cilium_net: Gained IPv6LL
May 13 23:37:48.282919 systemd-networkd[1400]: lxc_health: Link UP
May 13 23:37:48.284614 systemd-networkd[1400]: cilium_vxlan: Gained IPv6LL
May 13 23:37:48.287609 systemd-networkd[1400]: lxc_health: Gained carrier
May 13 23:37:48.526190 systemd-networkd[1400]: lxc1f33ebae80a5: Link UP
May 13 23:37:48.533847 kernel: eth0: renamed from tmpb05e6
May 13 23:37:48.555401 kernel: eth0: renamed from tmp4ac54
May 13 23:37:48.560792 systemd-networkd[1400]: lxc1f33ebae80a5: Gained carrier
May 13 23:37:48.561283 systemd-networkd[1400]: lxc313b1ffe3943: Link UP
May 13 23:37:48.562590 systemd-networkd[1400]: lxc313b1ffe3943: Gained carrier
May 13 23:37:49.936941 systemd-networkd[1400]: lxc313b1ffe3943: Gained IPv6LL
May 13 23:37:50.064946 systemd-networkd[1400]: lxc_health: Gained IPv6LL
May 13 23:37:50.256960 systemd-networkd[1400]: lxc1f33ebae80a5: Gained IPv6LL
May 13 23:37:51.314546 systemd[1]: Started sshd@8-10.0.0.79:22-10.0.0.1:60862.service - OpenSSH per-connection server daemon (10.0.0.1:60862).
May 13 23:37:51.363960 sshd[3911]: Accepted publickey for core from 10.0.0.1 port 60862 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw
May 13 23:37:51.364838 sshd-session[3911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:37:51.369789 systemd-logind[1466]: New session 9 of user core.
May 13 23:37:51.375981 systemd[1]: Started session-9.scope - Session 9 of User core.
May 13 23:37:51.511909 sshd[3913]: Connection closed by 10.0.0.1 port 60862
May 13 23:37:51.513159 sshd-session[3911]: pam_unix(sshd:session): session closed for user core
May 13 23:37:51.517055 systemd[1]: sshd@8-10.0.0.79:22-10.0.0.1:60862.service: Deactivated successfully.
May 13 23:37:51.520717 systemd[1]: session-9.scope: Deactivated successfully.
May 13 23:37:51.521855 systemd-logind[1466]: Session 9 logged out. Waiting for processes to exit.
May 13 23:37:51.522686 systemd-logind[1466]: Removed session 9.
May 13 23:37:52.312547 containerd[1477]: time="2025-05-13T23:37:52.311904260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 23:37:52.312547 containerd[1477]: time="2025-05-13T23:37:52.311982067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 23:37:52.312547 containerd[1477]: time="2025-05-13T23:37:52.311998269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 23:37:52.314043 containerd[1477]: time="2025-05-13T23:37:52.312098479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 23:37:52.337660 containerd[1477]: time="2025-05-13T23:37:52.337234865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 23:37:52.337660 containerd[1477]: time="2025-05-13T23:37:52.337630703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 23:37:52.337660 containerd[1477]: time="2025-05-13T23:37:52.337642985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 23:37:52.337908 containerd[1477]: time="2025-05-13T23:37:52.337733754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 23:37:52.340002 systemd[1]: Started cri-containerd-4ac5408d6e54a0d88f558d5288975ff4644c54e4a3f2e536ca5a1b5e0f1e583b.scope - libcontainer container 4ac5408d6e54a0d88f558d5288975ff4644c54e4a3f2e536ca5a1b5e0f1e583b.
May 13 23:37:52.357824 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 23:37:52.362969 systemd[1]: Started cri-containerd-b05e6facdc9d5e2d49c9492210426cec28b5a5422ec30149f39b8b5e5db19a13.scope - libcontainer container b05e6facdc9d5e2d49c9492210426cec28b5a5422ec30149f39b8b5e5db19a13.
May 13 23:37:52.373449 systemd-resolved[1318]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 23:37:52.375419 containerd[1477]: time="2025-05-13T23:37:52.375381127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hgpcc,Uid:d05d2301-6689-4f9a-a424-5d7992fbe3dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ac5408d6e54a0d88f558d5288975ff4644c54e4a3f2e536ca5a1b5e0f1e583b\""
May 13 23:37:52.379343 containerd[1477]: time="2025-05-13T23:37:52.379314233Z" level=info msg="CreateContainer within sandbox \"4ac5408d6e54a0d88f558d5288975ff4644c54e4a3f2e536ca5a1b5e0f1e583b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 23:37:52.393166 containerd[1477]: time="2025-05-13T23:37:52.393071302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mn8n6,Uid:378779b0-9a81-4e11-8cf3-b243afedec34,Namespace:kube-system,Attempt:0,} returns sandbox id \"b05e6facdc9d5e2d49c9492210426cec28b5a5422ec30149f39b8b5e5db19a13\""
May 13 23:37:52.397378 containerd[1477]: time="2025-05-13T23:37:52.397348002Z" level=info msg="CreateContainer within sandbox \"b05e6facdc9d5e2d49c9492210426cec28b5a5422ec30149f39b8b5e5db19a13\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 23:37:52.398123 containerd[1477]: time="2025-05-13T23:37:52.398076673Z" level=info msg="CreateContainer within sandbox \"4ac5408d6e54a0d88f558d5288975ff4644c54e4a3f2e536ca5a1b5e0f1e583b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"11fa8f9291595b5891ac1ae21e360bfef64170fc19f2ef5d91e05a109a75ac0e\""
May 13 23:37:52.399086 containerd[1477]: time="2025-05-13T23:37:52.399050729Z" level=info msg="StartContainer for \"11fa8f9291595b5891ac1ae21e360bfef64170fc19f2ef5d91e05a109a75ac0e\""
May 13 23:37:52.407441 containerd[1477]: time="2025-05-13T23:37:52.407389427Z" level=info msg="CreateContainer within sandbox \"b05e6facdc9d5e2d49c9492210426cec28b5a5422ec30149f39b8b5e5db19a13\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"da99ed1b295f667f0e97fa20be6d1b63a7b49d4b79c81b6f6e1a393912d02e19\""
May 13 23:37:52.408651 containerd[1477]: time="2025-05-13T23:37:52.407900957Z" level=info msg="StartContainer for \"da99ed1b295f667f0e97fa20be6d1b63a7b49d4b79c81b6f6e1a393912d02e19\""
May 13 23:37:52.425961 systemd[1]: Started cri-containerd-11fa8f9291595b5891ac1ae21e360bfef64170fc19f2ef5d91e05a109a75ac0e.scope - libcontainer container 11fa8f9291595b5891ac1ae21e360bfef64170fc19f2ef5d91e05a109a75ac0e.
May 13 23:37:52.428264 systemd[1]: Started cri-containerd-da99ed1b295f667f0e97fa20be6d1b63a7b49d4b79c81b6f6e1a393912d02e19.scope - libcontainer container da99ed1b295f667f0e97fa20be6d1b63a7b49d4b79c81b6f6e1a393912d02e19.
May 13 23:37:52.456396 containerd[1477]: time="2025-05-13T23:37:52.456279063Z" level=info msg="StartContainer for \"11fa8f9291595b5891ac1ae21e360bfef64170fc19f2ef5d91e05a109a75ac0e\" returns successfully"
May 13 23:37:52.472008 containerd[1477]: time="2025-05-13T23:37:52.471967042Z" level=info msg="StartContainer for \"da99ed1b295f667f0e97fa20be6d1b63a7b49d4b79c81b6f6e1a393912d02e19\" returns successfully"
May 13 23:37:52.789832 kubelet[2663]: I0513 23:37:52.789729 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-mn8n6" podStartSLOduration=18.789713935 podStartE2EDuration="18.789713935s" podCreationTimestamp="2025-05-13 23:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:37:52.788477334 +0000 UTC m=+34.272054493" watchObservedRunningTime="2025-05-13 23:37:52.789713935 +0000 UTC m=+34.273291094"
May 13 23:37:52.818979 kubelet[2663]: I0513 23:37:52.818909 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-hgpcc" podStartSLOduration=18.818893558 podStartE2EDuration="18.818893558s" podCreationTimestamp="2025-05-13 23:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:37:52.79965403 +0000 UTC m=+34.283231189" watchObservedRunningTime="2025-05-13 23:37:52.818893558 +0000 UTC m=+34.302470717"
May 13 23:37:56.527740 systemd[1]: Started sshd@9-10.0.0.79:22-10.0.0.1:44380.service - OpenSSH per-connection server daemon (10.0.0.1:44380).
May 13 23:37:56.608244 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 44380 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw
May 13 23:37:56.609928 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:37:56.615142 systemd-logind[1466]: New session 10 of user core.
May 13 23:37:56.623015 systemd[1]: Started session-10.scope - Session 10 of User core.
May 13 23:37:56.771726 sshd[4103]: Connection closed by 10.0.0.1 port 44380
May 13 23:37:56.772681 sshd-session[4101]: pam_unix(sshd:session): session closed for user core
May 13 23:37:56.778140 systemd-logind[1466]: Session 10 logged out. Waiting for processes to exit.
May 13 23:37:56.778452 systemd[1]: sshd@9-10.0.0.79:22-10.0.0.1:44380.service: Deactivated successfully.
May 13 23:37:56.782008 systemd[1]: session-10.scope: Deactivated successfully.
May 13 23:37:56.783220 systemd-logind[1466]: Removed session 10.
May 13 23:38:01.789509 systemd[1]: Started sshd@10-10.0.0.79:22-10.0.0.1:44392.service - OpenSSH per-connection server daemon (10.0.0.1:44392).
May 13 23:38:01.836649 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 44392 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw
May 13 23:38:01.838472 sshd-session[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:38:01.844234 systemd-logind[1466]: New session 11 of user core.
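The kubelet's pod_startup_latency_tracker records above carry two durations: podStartSLOduration (which, per the startup SLI, leaves out time spent pulling images) and podStartE2EDuration (wall clock from creation to observed running). For cilium-djxsf the gap between the two (11.755s vs 5.867s) roughly matches the image pull window logged earlier. A minimal sketch of extracting both fields from such a record (the line below abbreviates the real record to the relevant fields):

```python
import re

# A kubelet pod-startup record, abbreviated from the log above.
line = ('"Observed pod startup duration" pod="kube-system/cilium-djxsf" '
        'podStartSLOduration=5.867107051 podStartE2EDuration="11.755124039s"')

# SLO duration is a bare float; E2E duration is a quoted Go duration string.
slo = float(re.search(r"podStartSLOduration=([\d.]+)", line).group(1))
e2e = float(re.search(r'podStartE2EDuration="([\d.]+)s"', line).group(1))
print(slo, e2e, round(e2e - slo, 3))
```

For the coredns pods the two values are equal and firstStartedPulling is the zero time, consistent with no pull having been needed.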
May 13 23:38:01.859036 systemd[1]: Started session-11.scope - Session 11 of User core.
May 13 23:38:02.007764 sshd[4120]: Connection closed by 10.0.0.1 port 44392
May 13 23:38:02.008736 sshd-session[4118]: pam_unix(sshd:session): session closed for user core
May 13 23:38:02.026592 systemd[1]: sshd@10-10.0.0.79:22-10.0.0.1:44392.service: Deactivated successfully.
May 13 23:38:02.028341 systemd[1]: session-11.scope: Deactivated successfully.
May 13 23:38:02.029167 systemd-logind[1466]: Session 11 logged out. Waiting for processes to exit.
May 13 23:38:02.040122 systemd[1]: Started sshd@11-10.0.0.79:22-10.0.0.1:44406.service - OpenSSH per-connection server daemon (10.0.0.1:44406).
May 13 23:38:02.040771 systemd-logind[1466]: Removed session 11.
May 13 23:38:02.084883 sshd[4133]: Accepted publickey for core from 10.0.0.1 port 44406 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw
May 13 23:38:02.086184 sshd-session[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:38:02.090874 systemd-logind[1466]: New session 12 of user core.
May 13 23:38:02.097989 systemd[1]: Started session-12.scope - Session 12 of User core.
May 13 23:38:02.246832 sshd[4136]: Connection closed by 10.0.0.1 port 44406
May 13 23:38:02.247360 sshd-session[4133]: pam_unix(sshd:session): session closed for user core
May 13 23:38:02.260476 systemd[1]: sshd@11-10.0.0.79:22-10.0.0.1:44406.service: Deactivated successfully.
May 13 23:38:02.262827 systemd[1]: session-12.scope: Deactivated successfully.
May 13 23:38:02.263764 systemd-logind[1466]: Session 12 logged out. Waiting for processes to exit.
May 13 23:38:02.276103 systemd[1]: Started sshd@12-10.0.0.79:22-10.0.0.1:44412.service - OpenSSH per-connection server daemon (10.0.0.1:44412).
May 13 23:38:02.277319 systemd-logind[1466]: Removed session 12.
May 13 23:38:02.318027 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 44412 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw
May 13 23:38:02.319071 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:38:02.323454 systemd-logind[1466]: New session 13 of user core.
May 13 23:38:02.332985 systemd[1]: Started session-13.scope - Session 13 of User core.
May 13 23:38:02.444020 sshd[4149]: Connection closed by 10.0.0.1 port 44412
May 13 23:38:02.444408 sshd-session[4146]: pam_unix(sshd:session): session closed for user core
May 13 23:38:02.448694 systemd[1]: sshd@12-10.0.0.79:22-10.0.0.1:44412.service: Deactivated successfully.
May 13 23:38:02.450584 systemd[1]: session-13.scope: Deactivated successfully.
May 13 23:38:02.452123 systemd-logind[1466]: Session 13 logged out. Waiting for processes to exit.
May 13 23:38:02.453889 systemd-logind[1466]: Removed session 13.
May 13 23:38:07.457070 systemd[1]: Started sshd@13-10.0.0.79:22-10.0.0.1:46028.service - OpenSSH per-connection server daemon (10.0.0.1:46028).
May 13 23:38:07.501299 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 46028 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw
May 13 23:38:07.502652 sshd-session[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:38:07.506156 systemd-logind[1466]: New session 14 of user core.
May 13 23:38:07.515969 systemd[1]: Started session-14.scope - Session 14 of User core.
May 13 23:38:07.633538 sshd[4167]: Connection closed by 10.0.0.1 port 46028
May 13 23:38:07.633924 sshd-session[4165]: pam_unix(sshd:session): session closed for user core
May 13 23:38:07.642607 systemd-logind[1466]: Session 14 logged out. Waiting for processes to exit.
May 13 23:38:07.643085 systemd[1]: sshd@13-10.0.0.79:22-10.0.0.1:46028.service: Deactivated successfully.
May 13 23:38:07.648329 systemd[1]: session-14.scope: Deactivated successfully.
May 13 23:38:07.652938 systemd-logind[1466]: Removed session 14.
May 13 23:38:12.650242 systemd[1]: Started sshd@14-10.0.0.79:22-10.0.0.1:57208.service - OpenSSH per-connection server daemon (10.0.0.1:57208).
May 13 23:38:12.694581 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 57208 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw
May 13 23:38:12.695840 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:38:12.700669 systemd-logind[1466]: New session 15 of user core.
May 13 23:38:12.708011 systemd[1]: Started session-15.scope - Session 15 of User core.
May 13 23:38:12.821323 sshd[4182]: Connection closed by 10.0.0.1 port 57208
May 13 23:38:12.821715 sshd-session[4180]: pam_unix(sshd:session): session closed for user core
May 13 23:38:12.833052 systemd[1]: sshd@14-10.0.0.79:22-10.0.0.1:57208.service: Deactivated successfully.
May 13 23:38:12.834840 systemd[1]: session-15.scope: Deactivated successfully.
May 13 23:38:12.838530 systemd-logind[1466]: Session 15 logged out. Waiting for processes to exit.
May 13 23:38:12.845071 systemd[1]: Started sshd@15-10.0.0.79:22-10.0.0.1:57216.service - OpenSSH per-connection server daemon (10.0.0.1:57216).
May 13 23:38:12.846250 systemd-logind[1466]: Removed session 15.
May 13 23:38:12.886655 sshd[4195]: Accepted publickey for core from 10.0.0.1 port 57216 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw
May 13 23:38:12.888080 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:38:12.892997 systemd-logind[1466]: New session 16 of user core.
May 13 23:38:12.902992 systemd[1]: Started session-16.scope - Session 16 of User core.
May 13 23:38:13.158639 sshd[4198]: Connection closed by 10.0.0.1 port 57216
May 13 23:38:13.159179 sshd-session[4195]: pam_unix(sshd:session): session closed for user core
May 13 23:38:13.176208 systemd[1]: sshd@15-10.0.0.79:22-10.0.0.1:57216.service: Deactivated successfully.
May 13 23:38:13.179687 systemd[1]: session-16.scope: Deactivated successfully.
May 13 23:38:13.180687 systemd-logind[1466]: Session 16 logged out. Waiting for processes to exit.
May 13 23:38:13.190092 systemd[1]: Started sshd@16-10.0.0.79:22-10.0.0.1:57222.service - OpenSSH per-connection server daemon (10.0.0.1:57222).
May 13 23:38:13.191066 systemd-logind[1466]: Removed session 16.
May 13 23:38:13.239032 sshd[4208]: Accepted publickey for core from 10.0.0.1 port 57222 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw
May 13 23:38:13.240467 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:38:13.244756 systemd-logind[1466]: New session 17 of user core.
May 13 23:38:13.250990 systemd[1]: Started session-17.scope - Session 17 of User core.
May 13 23:38:14.574337 sshd[4211]: Connection closed by 10.0.0.1 port 57222
May 13 23:38:14.576612 sshd-session[4208]: pam_unix(sshd:session): session closed for user core
May 13 23:38:14.586262 systemd[1]: sshd@16-10.0.0.79:22-10.0.0.1:57222.service: Deactivated successfully.
May 13 23:38:14.589626 systemd[1]: session-17.scope: Deactivated successfully.
May 13 23:38:14.593155 systemd-logind[1466]: Session 17 logged out. Waiting for processes to exit.
May 13 23:38:14.603183 systemd[1]: Started sshd@17-10.0.0.79:22-10.0.0.1:57224.service - OpenSSH per-connection server daemon (10.0.0.1:57224).
May 13 23:38:14.604342 systemd-logind[1466]: Removed session 17.
May 13 23:38:14.646064 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 57224 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw
May 13 23:38:14.647720 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:38:14.655293 systemd-logind[1466]: New session 18 of user core.
May 13 23:38:14.663057 systemd[1]: Started session-18.scope - Session 18 of User core.
May 13 23:38:14.896480 sshd[4235]: Connection closed by 10.0.0.1 port 57224
May 13 23:38:14.895814 sshd-session[4232]: pam_unix(sshd:session): session closed for user core
May 13 23:38:14.910761 systemd[1]: sshd@17-10.0.0.79:22-10.0.0.1:57224.service: Deactivated successfully.
May 13 23:38:14.913348 systemd[1]: session-18.scope: Deactivated successfully.
May 13 23:38:14.915433 systemd-logind[1466]: Session 18 logged out. Waiting for processes to exit.
May 13 23:38:14.917226 systemd-logind[1466]: Removed session 18.
May 13 23:38:14.929114 systemd[1]: Started sshd@18-10.0.0.79:22-10.0.0.1:57228.service - OpenSSH per-connection server daemon (10.0.0.1:57228).
May 13 23:38:14.973349 sshd[4247]: Accepted publickey for core from 10.0.0.1 port 57228 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw
May 13 23:38:14.974617 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:38:14.979150 systemd-logind[1466]: New session 19 of user core.
May 13 23:38:14.984048 systemd[1]: Started session-19.scope - Session 19 of User core.
May 13 23:38:15.091441 sshd[4249]: Connection closed by 10.0.0.1 port 57228
May 13 23:38:15.091843 sshd-session[4247]: pam_unix(sshd:session): session closed for user core
May 13 23:38:15.095291 systemd[1]: sshd@18-10.0.0.79:22-10.0.0.1:57228.service: Deactivated successfully.
May 13 23:38:15.097196 systemd[1]: session-19.scope: Deactivated successfully.
May 13 23:38:15.097896 systemd-logind[1466]: Session 19 logged out. Waiting for processes to exit.
May 13 23:38:15.098675 systemd-logind[1466]: Removed session 19.
May 13 23:38:20.103172 systemd[1]: Started sshd@19-10.0.0.79:22-10.0.0.1:57242.service - OpenSSH per-connection server daemon (10.0.0.1:57242).
May 13 23:38:20.147088 sshd[4267]: Accepted publickey for core from 10.0.0.1 port 57242 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw
May 13 23:38:20.148347 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:38:20.152352 systemd-logind[1466]: New session 20 of user core.
May 13 23:38:20.170042 systemd[1]: Started session-20.scope - Session 20 of User core.
May 13 23:38:20.276329 sshd[4269]: Connection closed by 10.0.0.1 port 57242
May 13 23:38:20.276684 sshd-session[4267]: pam_unix(sshd:session): session closed for user core
May 13 23:38:20.279908 systemd[1]: sshd@19-10.0.0.79:22-10.0.0.1:57242.service: Deactivated successfully.
May 13 23:38:20.282025 systemd[1]: session-20.scope: Deactivated successfully.
May 13 23:38:20.282776 systemd-logind[1466]: Session 20 logged out. Waiting for processes to exit.
May 13 23:38:20.283739 systemd-logind[1466]: Removed session 20.
May 13 23:38:25.289152 systemd[1]: Started sshd@20-10.0.0.79:22-10.0.0.1:58998.service - OpenSSH per-connection server daemon (10.0.0.1:58998).
May 13 23:38:25.344563 sshd[4283]: Accepted publickey for core from 10.0.0.1 port 58998 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw
May 13 23:38:25.347773 sshd-session[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:38:25.355582 systemd-logind[1466]: New session 21 of user core.
May 13 23:38:25.366973 systemd[1]: Started session-21.scope - Session 21 of User core.
May 13 23:38:25.502184 sshd[4285]: Connection closed by 10.0.0.1 port 58998
May 13 23:38:25.502533 sshd-session[4283]: pam_unix(sshd:session): session closed for user core
May 13 23:38:25.506152 systemd-logind[1466]: Session 21 logged out. Waiting for processes to exit.
May 13 23:38:25.506452 systemd[1]: sshd@20-10.0.0.79:22-10.0.0.1:58998.service: Deactivated successfully.
May 13 23:38:25.509240 systemd[1]: session-21.scope: Deactivated successfully.
May 13 23:38:25.510180 systemd-logind[1466]: Removed session 21.
May 13 23:38:30.514710 systemd[1]: Started sshd@21-10.0.0.79:22-10.0.0.1:59002.service - OpenSSH per-connection server daemon (10.0.0.1:59002).
May 13 23:38:30.559521 sshd[4298]: Accepted publickey for core from 10.0.0.1 port 59002 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw
May 13 23:38:30.560853 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:38:30.565388 systemd-logind[1466]: New session 22 of user core.
May 13 23:38:30.576962 systemd[1]: Started session-22.scope - Session 22 of User core.
May 13 23:38:30.692746 sshd[4300]: Connection closed by 10.0.0.1 port 59002
May 13 23:38:30.693455 sshd-session[4298]: pam_unix(sshd:session): session closed for user core
May 13 23:38:30.705201 systemd[1]: sshd@21-10.0.0.79:22-10.0.0.1:59002.service: Deactivated successfully.
May 13 23:38:30.706926 systemd[1]: session-22.scope: Deactivated successfully.
May 13 23:38:30.708847 systemd-logind[1466]: Session 22 logged out. Waiting for processes to exit.
May 13 23:38:30.716457 systemd[1]: Started sshd@22-10.0.0.79:22-10.0.0.1:59004.service - OpenSSH per-connection server daemon (10.0.0.1:59004).
May 13 23:38:30.717434 systemd-logind[1466]: Removed session 22.
May 13 23:38:30.758106 sshd[4312]: Accepted publickey for core from 10.0.0.1 port 59004 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw
May 13 23:38:30.759497 sshd-session[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:38:30.766438 systemd-logind[1466]: New session 23 of user core.
May 13 23:38:30.778852 systemd[1]: Started session-23.scope - Session 23 of User core.
May 13 23:38:32.708420 containerd[1477]: time="2025-05-13T23:38:32.707243187Z" level=info msg="StopContainer for \"c212a5820a96d18a8830866dd2e7914c29ebf2f99e2298c05ffbee40ef81f5c3\" with timeout 30 (s)"
May 13 23:38:32.709316 containerd[1477]: time="2025-05-13T23:38:32.709220081Z" level=info msg="Stop container \"c212a5820a96d18a8830866dd2e7914c29ebf2f99e2298c05ffbee40ef81f5c3\" with signal terminated"
May 13 23:38:32.723617 systemd[1]: cri-containerd-c212a5820a96d18a8830866dd2e7914c29ebf2f99e2298c05ffbee40ef81f5c3.scope: Deactivated successfully.
May 13 23:38:32.738863 systemd[1]: run-containerd-runc-k8s.io-bbb46da4fadb8b89fd42dda70b6fcf6a987c29146d07f7f84cb94e1186d0686b-runc.bNIvfB.mount: Deactivated successfully.
May 13 23:38:32.754828 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c212a5820a96d18a8830866dd2e7914c29ebf2f99e2298c05ffbee40ef81f5c3-rootfs.mount: Deactivated successfully.
May 13 23:38:32.764109 containerd[1477]: time="2025-05-13T23:38:32.764071029Z" level=info msg="StopContainer for \"bbb46da4fadb8b89fd42dda70b6fcf6a987c29146d07f7f84cb94e1186d0686b\" with timeout 2 (s)"
May 13 23:38:32.764703 containerd[1477]: time="2025-05-13T23:38:32.764487200Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 23:38:32.764703 containerd[1477]: time="2025-05-13T23:38:32.764559842Z" level=info msg="Stop container \"bbb46da4fadb8b89fd42dda70b6fcf6a987c29146d07f7f84cb94e1186d0686b\" with signal terminated"
May 13 23:38:32.766193 containerd[1477]: time="2025-05-13T23:38:32.766142926Z" level=info msg="shim disconnected" id=c212a5820a96d18a8830866dd2e7914c29ebf2f99e2298c05ffbee40ef81f5c3 namespace=k8s.io
May 13 23:38:32.766193 containerd[1477]: time="2025-05-13T23:38:32.766190447Z" level=warning msg="cleaning up after shim disconnected" id=c212a5820a96d18a8830866dd2e7914c29ebf2f99e2298c05ffbee40ef81f5c3 namespace=k8s.io
May 13 23:38:32.766193 containerd[1477]: time="2025-05-13T23:38:32.766199367Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 23:38:32.771069 systemd-networkd[1400]: lxc_health: Link DOWN
May 13 23:38:32.771074 systemd-networkd[1400]: lxc_health: Lost carrier
May 13 23:38:32.791638 systemd[1]: cri-containerd-bbb46da4fadb8b89fd42dda70b6fcf6a987c29146d07f7f84cb94e1186d0686b.scope: Deactivated successfully.
May 13 23:38:32.791962 systemd[1]: cri-containerd-bbb46da4fadb8b89fd42dda70b6fcf6a987c29146d07f7f84cb94e1186d0686b.scope: Consumed 6.957s CPU time, 126.4M memory peak, 212K read from disk, 12.9M written to disk.
May 13 23:38:32.810451 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbb46da4fadb8b89fd42dda70b6fcf6a987c29146d07f7f84cb94e1186d0686b-rootfs.mount: Deactivated successfully.
May 13 23:38:32.814664 containerd[1477]: time="2025-05-13T23:38:32.814602617Z" level=info msg="StopContainer for \"c212a5820a96d18a8830866dd2e7914c29ebf2f99e2298c05ffbee40ef81f5c3\" returns successfully"
May 13 23:38:32.815524 containerd[1477]: time="2025-05-13T23:38:32.815335157Z" level=info msg="shim disconnected" id=bbb46da4fadb8b89fd42dda70b6fcf6a987c29146d07f7f84cb94e1186d0686b namespace=k8s.io
May 13 23:38:32.815620 containerd[1477]: time="2025-05-13T23:38:32.815567764Z" level=warning msg="cleaning up after shim disconnected" id=bbb46da4fadb8b89fd42dda70b6fcf6a987c29146d07f7f84cb94e1186d0686b namespace=k8s.io
May 13 23:38:32.815620 containerd[1477]: time="2025-05-13T23:38:32.815582724Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 23:38:32.815949 containerd[1477]: time="2025-05-13T23:38:32.815876172Z" level=info msg="StopPodSandbox for \"e62d9b91f44482180ee66de3b435c61d805654c128444ea0e9d35e2b95a12741\""
May 13 23:38:32.816011 containerd[1477]: time="2025-05-13T23:38:32.815960415Z" level=info msg="Container to stop \"c212a5820a96d18a8830866dd2e7914c29ebf2f99e2298c05ffbee40ef81f5c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:38:32.817776 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e62d9b91f44482180ee66de3b435c61d805654c128444ea0e9d35e2b95a12741-shm.mount: Deactivated successfully.
May 13 23:38:32.823490 systemd[1]: cri-containerd-e62d9b91f44482180ee66de3b435c61d805654c128444ea0e9d35e2b95a12741.scope: Deactivated successfully.
May 13 23:38:32.838239 containerd[1477]: time="2025-05-13T23:38:32.838188066Z" level=info msg="StopContainer for \"bbb46da4fadb8b89fd42dda70b6fcf6a987c29146d07f7f84cb94e1186d0686b\" returns successfully"
May 13 23:38:32.838813 containerd[1477]: time="2025-05-13T23:38:32.838769521Z" level=info msg="StopPodSandbox for \"0b98910af45450a110aa2e12c148c3c98e84915cd950d6c6d46e3fc8e81ab6cb\""
May 13 23:38:32.839023 containerd[1477]: time="2025-05-13T23:38:32.838945246Z" level=info msg="Container to stop \"5f0c19be011b51ce1b983a0e9f2e6c5489a31c00223f28699977a1090f41c91f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:38:32.839023 containerd[1477]: time="2025-05-13T23:38:32.838965687Z" level=info msg="Container to stop \"bbb46da4fadb8b89fd42dda70b6fcf6a987c29146d07f7f84cb94e1186d0686b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:38:32.839023 containerd[1477]: time="2025-05-13T23:38:32.838975847Z" level=info msg="Container to stop \"9ccb14724afd3dcc6225f28a3b379951c8cfbdb554137ce1ffc7058ec841dde5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:38:32.839023 containerd[1477]: time="2025-05-13T23:38:32.838985087Z" level=info msg="Container to stop \"9d833f6dfa4b241f4d1749ec182d1953c1c42a8883ebb0cbadb0fb1dd93b9f15\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:38:32.839023 containerd[1477]: time="2025-05-13T23:38:32.838993688Z" level=info msg="Container to stop \"19525099a8b250ec0eba71d46d1c5919f06d5cd2a227ecc8c38cb04c51152c82\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 23:38:32.844636 systemd[1]: cri-containerd-0b98910af45450a110aa2e12c148c3c98e84915cd950d6c6d46e3fc8e81ab6cb.scope: Deactivated successfully.
May 13 23:38:32.869215 containerd[1477]: time="2025-05-13T23:38:32.869131836Z" level=info msg="shim disconnected" id=0b98910af45450a110aa2e12c148c3c98e84915cd950d6c6d46e3fc8e81ab6cb namespace=k8s.io
May 13 23:38:32.869215 containerd[1477]: time="2025-05-13T23:38:32.869195718Z" level=warning msg="cleaning up after shim disconnected" id=0b98910af45450a110aa2e12c148c3c98e84915cd950d6c6d46e3fc8e81ab6cb namespace=k8s.io
May 13 23:38:32.869215 containerd[1477]: time="2025-05-13T23:38:32.869204998Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 23:38:32.870473 containerd[1477]: time="2025-05-13T23:38:32.869154677Z" level=info msg="shim disconnected" id=e62d9b91f44482180ee66de3b435c61d805654c128444ea0e9d35e2b95a12741 namespace=k8s.io
May 13 23:38:32.870473 containerd[1477]: time="2025-05-13T23:38:32.870011860Z" level=warning msg="cleaning up after shim disconnected" id=e62d9b91f44482180ee66de3b435c61d805654c128444ea0e9d35e2b95a12741 namespace=k8s.io
May 13 23:38:32.870473 containerd[1477]: time="2025-05-13T23:38:32.870020300Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 23:38:32.883740 containerd[1477]: time="2025-05-13T23:38:32.883687676Z" level=info msg="TearDown network for sandbox \"0b98910af45450a110aa2e12c148c3c98e84915cd950d6c6d46e3fc8e81ab6cb\" successfully"
May 13 23:38:32.883740 containerd[1477]: time="2025-05-13T23:38:32.883721277Z" level=info msg="StopPodSandbox for \"0b98910af45450a110aa2e12c148c3c98e84915cd950d6c6d46e3fc8e81ab6cb\" returns successfully"
May 13 23:38:32.893624 containerd[1477]: time="2025-05-13T23:38:32.893559427Z" level=info msg="TearDown network for sandbox \"e62d9b91f44482180ee66de3b435c61d805654c128444ea0e9d35e2b95a12741\" successfully"
May 13 23:38:32.893624 containerd[1477]: time="2025-05-13T23:38:32.893616229Z" level=info msg="StopPodSandbox for \"e62d9b91f44482180ee66de3b435c61d805654c128444ea0e9d35e2b95a12741\" returns successfully"
May 13 23:38:33.073991 kubelet[2663]: I0513 23:38:33.073920 2663 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/71801b2a-dfcb-4f3f-b674-faa86afb2f51-clustermesh-secrets\") pod \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\" (UID: \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\") "
May 13 23:38:33.073991 kubelet[2663]: I0513 23:38:33.073990 2663 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/71801b2a-dfcb-4f3f-b674-faa86afb2f51-cilium-config-path\") pod \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\" (UID: \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\") "
May 13 23:38:33.074405 kubelet[2663]: I0513 23:38:33.074009 2663 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-lib-modules\") pod \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\" (UID: \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\") "
May 13 23:38:33.074405 kubelet[2663]: I0513 23:38:33.074028 2663 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/71801b2a-dfcb-4f3f-b674-faa86afb2f51-hubble-tls\") pod \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\" (UID: \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\") "
May 13 23:38:33.074405 kubelet[2663]: I0513 23:38:33.074043 2663 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-host-proc-sys-net\") pod \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\" (UID: \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\") "
May 13 23:38:33.074405 kubelet[2663]: I0513 23:38:33.074057 2663 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-hostproc\") pod \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\" (UID: \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\") "
May 13 23:38:33.074405 kubelet[2663]: I0513 23:38:33.074073 2663 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-bpf-maps\") pod \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\" (UID: \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\") "
May 13 23:38:33.074405 kubelet[2663]: I0513 23:38:33.074088 2663 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xlxj\" (UniqueName: \"kubernetes.io/projected/71801b2a-dfcb-4f3f-b674-faa86afb2f51-kube-api-access-5xlxj\") pod \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\" (UID: \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\") "
May 13 23:38:33.074597 kubelet[2663]: I0513 23:38:33.074103 2663 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-cilium-run\") pod \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\" (UID: \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\") "
May 13 23:38:33.074597 kubelet[2663]: I0513 23:38:33.074118 2663 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-cni-path\") pod \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\" (UID: \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\") "
May 13 23:38:33.074597 kubelet[2663]: I0513 23:38:33.074134 2663 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-host-proc-sys-kernel\") pod \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\" (UID: \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\") "
May 13 23:38:33.074597 kubelet[2663]: I0513 23:38:33.074151 2663 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-xtables-lock\") pod \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\" (UID: \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\") "
May 13 23:38:33.074597 kubelet[2663]: I0513 23:38:33.074165 2663 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-etc-cni-netd\") pod \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\" (UID: \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\") "
May 13 23:38:33.074597 kubelet[2663]: I0513 23:38:33.074180 2663 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-cilium-cgroup\") pod \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\" (UID: \"71801b2a-dfcb-4f3f-b674-faa86afb2f51\") "
May 13 23:38:33.074720 kubelet[2663]: I0513 23:38:33.074199 2663 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55bl4\" (UniqueName: \"kubernetes.io/projected/b2c3947a-49a2-445d-8ed7-8eafae13a043-kube-api-access-55bl4\") pod \"b2c3947a-49a2-445d-8ed7-8eafae13a043\" (UID: \"b2c3947a-49a2-445d-8ed7-8eafae13a043\") "
May 13 23:38:33.074720 kubelet[2663]: I0513 23:38:33.074215 2663 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2c3947a-49a2-445d-8ed7-8eafae13a043-cilium-config-path\") pod \"b2c3947a-49a2-445d-8ed7-8eafae13a043\" (UID: \"b2c3947a-49a2-445d-8ed7-8eafae13a043\") "
May 13 23:38:33.080935 kubelet[2663]: I0513 23:38:33.080891 2663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2c3947a-49a2-445d-8ed7-8eafae13a043-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b2c3947a-49a2-445d-8ed7-8eafae13a043" (UID: "b2c3947a-49a2-445d-8ed7-8eafae13a043"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 13 23:38:33.082481 kubelet[2663]: I0513 23:38:33.081637 2663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71801b2a-dfcb-4f3f-b674-faa86afb2f51-kube-api-access-5xlxj" (OuterVolumeSpecName: "kube-api-access-5xlxj") pod "71801b2a-dfcb-4f3f-b674-faa86afb2f51" (UID: "71801b2a-dfcb-4f3f-b674-faa86afb2f51"). InnerVolumeSpecName "kube-api-access-5xlxj". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 23:38:33.082481 kubelet[2663]: I0513 23:38:33.081660 2663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71801b2a-dfcb-4f3f-b674-faa86afb2f51-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "71801b2a-dfcb-4f3f-b674-faa86afb2f51" (UID: "71801b2a-dfcb-4f3f-b674-faa86afb2f51"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 13 23:38:33.082481 kubelet[2663]: I0513 23:38:33.081703 2663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "71801b2a-dfcb-4f3f-b674-faa86afb2f51" (UID: "71801b2a-dfcb-4f3f-b674-faa86afb2f51"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:38:33.082481 kubelet[2663]: I0513 23:38:33.081705 2663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "71801b2a-dfcb-4f3f-b674-faa86afb2f51" (UID: "71801b2a-dfcb-4f3f-b674-faa86afb2f51"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:38:33.082481 kubelet[2663]: I0513 23:38:33.081725 2663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-hostproc" (OuterVolumeSpecName: "hostproc") pod "71801b2a-dfcb-4f3f-b674-faa86afb2f51" (UID: "71801b2a-dfcb-4f3f-b674-faa86afb2f51"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:38:33.082666 kubelet[2663]: I0513 23:38:33.081746 2663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "71801b2a-dfcb-4f3f-b674-faa86afb2f51" (UID: "71801b2a-dfcb-4f3f-b674-faa86afb2f51"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:38:33.082666 kubelet[2663]: I0513 23:38:33.081765 2663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-cni-path" (OuterVolumeSpecName: "cni-path") pod "71801b2a-dfcb-4f3f-b674-faa86afb2f51" (UID: "71801b2a-dfcb-4f3f-b674-faa86afb2f51"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:38:33.082666 kubelet[2663]: I0513 23:38:33.081768 2663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "71801b2a-dfcb-4f3f-b674-faa86afb2f51" (UID: "71801b2a-dfcb-4f3f-b674-faa86afb2f51"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:38:33.082666 kubelet[2663]: I0513 23:38:33.081782 2663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "71801b2a-dfcb-4f3f-b674-faa86afb2f51" (UID: "71801b2a-dfcb-4f3f-b674-faa86afb2f51"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:38:33.082666 kubelet[2663]: I0513 23:38:33.081791 2663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "71801b2a-dfcb-4f3f-b674-faa86afb2f51" (UID: "71801b2a-dfcb-4f3f-b674-faa86afb2f51"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:38:33.082783 kubelet[2663]: I0513 23:38:33.081825 2663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "71801b2a-dfcb-4f3f-b674-faa86afb2f51" (UID: "71801b2a-dfcb-4f3f-b674-faa86afb2f51"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:38:33.082783 kubelet[2663]: I0513 23:38:33.081830 2663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "71801b2a-dfcb-4f3f-b674-faa86afb2f51" (UID: "71801b2a-dfcb-4f3f-b674-faa86afb2f51"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 23:38:33.083437 kubelet[2663]: I0513 23:38:33.083411 2663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71801b2a-dfcb-4f3f-b674-faa86afb2f51-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "71801b2a-dfcb-4f3f-b674-faa86afb2f51" (UID: "71801b2a-dfcb-4f3f-b674-faa86afb2f51"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 23:38:33.083817 kubelet[2663]: I0513 23:38:33.083774 2663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71801b2a-dfcb-4f3f-b674-faa86afb2f51-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "71801b2a-dfcb-4f3f-b674-faa86afb2f51" (UID: "71801b2a-dfcb-4f3f-b674-faa86afb2f51"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 13 23:38:33.084176 kubelet[2663]: I0513 23:38:33.084131 2663 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2c3947a-49a2-445d-8ed7-8eafae13a043-kube-api-access-55bl4" (OuterVolumeSpecName: "kube-api-access-55bl4") pod "b2c3947a-49a2-445d-8ed7-8eafae13a043" (UID: "b2c3947a-49a2-445d-8ed7-8eafae13a043"). InnerVolumeSpecName "kube-api-access-55bl4". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 23:38:33.176619 kubelet[2663]: I0513 23:38:33.175316 2663 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 13 23:38:33.176619 kubelet[2663]: I0513 23:38:33.175356 2663 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-5xlxj\" (UniqueName: \"kubernetes.io/projected/71801b2a-dfcb-4f3f-b674-faa86afb2f51-kube-api-access-5xlxj\") on node \"localhost\" DevicePath \"\""
May 13 23:38:33.176619 kubelet[2663]: I0513 23:38:33.175368 2663 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-cilium-run\") on node \"localhost\" DevicePath \"\""
May 13 23:38:33.176619 kubelet[2663]: I0513 23:38:33.175377 2663 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-cni-path\") on node \"localhost\" DevicePath \"\""
May 13 23:38:33.176619 kubelet[2663]: I0513 23:38:33.175387 2663 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 13 23:38:33.176619 kubelet[2663]: I0513 23:38:33.175394 2663 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 13 23:38:33.176619 kubelet[2663]: I0513 23:38:33.175402 2663 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-55bl4\" (UniqueName: \"kubernetes.io/projected/b2c3947a-49a2-445d-8ed7-8eafae13a043-kube-api-access-55bl4\") on node \"localhost\" DevicePath \"\""
May 13 23:38:33.176619 kubelet[2663]: I0513 23:38:33.175411 2663 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2c3947a-49a2-445d-8ed7-8eafae13a043-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 13 23:38:33.176945 kubelet[2663]: I0513 23:38:33.175418 2663 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 13 23:38:33.176945 kubelet[2663]: I0513 23:38:33.175425 2663 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 13 23:38:33.176945 kubelet[2663]: I0513 23:38:33.175437 2663 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/71801b2a-dfcb-4f3f-b674-faa86afb2f51-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 13 23:38:33.176945 kubelet[2663]: I0513 23:38:33.175445 2663 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/71801b2a-dfcb-4f3f-b674-faa86afb2f51-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 13 23:38:33.176945 kubelet[2663]: I0513 23:38:33.175452 2663 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-lib-modules\") on node \"localhost\" DevicePath \"\""
May 13 23:38:33.176945 kubelet[2663]: I0513 23:38:33.175459 2663 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/71801b2a-dfcb-4f3f-b674-faa86afb2f51-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 13 23:38:33.176945 kubelet[2663]: I0513 23:38:33.175466 2663 reconciler_common.go:289] "Volume detached for volume
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 13 23:38:33.176945 kubelet[2663]: I0513 23:38:33.175473 2663 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/71801b2a-dfcb-4f3f-b674-faa86afb2f51-hostproc\") on node \"localhost\" DevicePath \"\"" May 13 23:38:33.656428 kubelet[2663]: E0513 23:38:33.656374 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 23:38:33.735968 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e62d9b91f44482180ee66de3b435c61d805654c128444ea0e9d35e2b95a12741-rootfs.mount: Deactivated successfully. May 13 23:38:33.736072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b98910af45450a110aa2e12c148c3c98e84915cd950d6c6d46e3fc8e81ab6cb-rootfs.mount: Deactivated successfully. May 13 23:38:33.736127 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0b98910af45450a110aa2e12c148c3c98e84915cd950d6c6d46e3fc8e81ab6cb-shm.mount: Deactivated successfully. May 13 23:38:33.736180 systemd[1]: var-lib-kubelet-pods-b2c3947a\x2d49a2\x2d445d\x2d8ed7\x2d8eafae13a043-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d55bl4.mount: Deactivated successfully. May 13 23:38:33.736235 systemd[1]: var-lib-kubelet-pods-71801b2a\x2ddfcb\x2d4f3f\x2db674\x2dfaa86afb2f51-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5xlxj.mount: Deactivated successfully. May 13 23:38:33.736285 systemd[1]: var-lib-kubelet-pods-71801b2a\x2ddfcb\x2d4f3f\x2db674\x2dfaa86afb2f51-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 13 23:38:33.736334 systemd[1]: var-lib-kubelet-pods-71801b2a\x2ddfcb\x2d4f3f\x2db674\x2dfaa86afb2f51-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 13 23:38:33.885369 kubelet[2663]: I0513 23:38:33.885254 2663 scope.go:117] "RemoveContainer" containerID="bbb46da4fadb8b89fd42dda70b6fcf6a987c29146d07f7f84cb94e1186d0686b"
May 13 23:38:33.887002 containerd[1477]: time="2025-05-13T23:38:33.886966151Z" level=info msg="RemoveContainer for \"bbb46da4fadb8b89fd42dda70b6fcf6a987c29146d07f7f84cb94e1186d0686b\""
May 13 23:38:33.893176 containerd[1477]: time="2025-05-13T23:38:33.893143004Z" level=info msg="RemoveContainer for \"bbb46da4fadb8b89fd42dda70b6fcf6a987c29146d07f7f84cb94e1186d0686b\" returns successfully"
May 13 23:38:33.893893 systemd[1]: Removed slice kubepods-burstable-pod71801b2a_dfcb_4f3f_b674_faa86afb2f51.slice - libcontainer container kubepods-burstable-pod71801b2a_dfcb_4f3f_b674_faa86afb2f51.slice.
May 13 23:38:33.893998 systemd[1]: kubepods-burstable-pod71801b2a_dfcb_4f3f_b674_faa86afb2f51.slice: Consumed 7.083s CPU time, 126.7M memory peak, 228K read from disk, 12.9M written to disk.
May 13 23:38:33.895502 systemd[1]: Removed slice kubepods-besteffort-podb2c3947a_49a2_445d_8ed7_8eafae13a043.slice - libcontainer container kubepods-besteffort-podb2c3947a_49a2_445d_8ed7_8eafae13a043.slice.
May 13 23:38:33.896374 kubelet[2663]: I0513 23:38:33.896354 2663 scope.go:117] "RemoveContainer" containerID="9d833f6dfa4b241f4d1749ec182d1953c1c42a8883ebb0cbadb0fb1dd93b9f15"
May 13 23:38:33.898095 containerd[1477]: time="2025-05-13T23:38:33.898060623Z" level=info msg="RemoveContainer for \"9d833f6dfa4b241f4d1749ec182d1953c1c42a8883ebb0cbadb0fb1dd93b9f15\""
May 13 23:38:33.906157 containerd[1477]: time="2025-05-13T23:38:33.906111129Z" level=info msg="RemoveContainer for \"9d833f6dfa4b241f4d1749ec182d1953c1c42a8883ebb0cbadb0fb1dd93b9f15\" returns successfully"
May 13 23:38:33.906857 kubelet[2663]: I0513 23:38:33.906759 2663 scope.go:117] "RemoveContainer" containerID="9ccb14724afd3dcc6225f28a3b379951c8cfbdb554137ce1ffc7058ec841dde5"
May 13 23:38:33.909968 containerd[1477]: time="2025-05-13T23:38:33.909927316Z" level=info msg="RemoveContainer for \"9ccb14724afd3dcc6225f28a3b379951c8cfbdb554137ce1ffc7058ec841dde5\""
May 13 23:38:33.916420 containerd[1477]: time="2025-05-13T23:38:33.916384217Z" level=info msg="RemoveContainer for \"9ccb14724afd3dcc6225f28a3b379951c8cfbdb554137ce1ffc7058ec841dde5\" returns successfully"
May 13 23:38:33.916657 kubelet[2663]: I0513 23:38:33.916635 2663 scope.go:117] "RemoveContainer" containerID="5f0c19be011b51ce1b983a0e9f2e6c5489a31c00223f28699977a1090f41c91f"
May 13 23:38:33.923932 containerd[1477]: time="2025-05-13T23:38:33.923889988Z" level=info msg="RemoveContainer for \"5f0c19be011b51ce1b983a0e9f2e6c5489a31c00223f28699977a1090f41c91f\""
May 13 23:38:33.926276 containerd[1477]: time="2025-05-13T23:38:33.926238534Z" level=info msg="RemoveContainer for \"5f0c19be011b51ce1b983a0e9f2e6c5489a31c00223f28699977a1090f41c91f\" returns successfully"
May 13 23:38:33.926472 kubelet[2663]: I0513 23:38:33.926424 2663 scope.go:117] "RemoveContainer" containerID="19525099a8b250ec0eba71d46d1c5919f06d5cd2a227ecc8c38cb04c51152c82"
May 13 23:38:33.927508 containerd[1477]: time="2025-05-13T23:38:33.927461889Z" level=info msg="RemoveContainer for \"19525099a8b250ec0eba71d46d1c5919f06d5cd2a227ecc8c38cb04c51152c82\""
May 13 23:38:33.929433 containerd[1477]: time="2025-05-13T23:38:33.929406663Z" level=info msg="RemoveContainer for \"19525099a8b250ec0eba71d46d1c5919f06d5cd2a227ecc8c38cb04c51152c82\" returns successfully"
May 13 23:38:33.929592 kubelet[2663]: I0513 23:38:33.929573 2663 scope.go:117] "RemoveContainer" containerID="c212a5820a96d18a8830866dd2e7914c29ebf2f99e2298c05ffbee40ef81f5c3"
May 13 23:38:33.930581 containerd[1477]: time="2025-05-13T23:38:33.930550215Z" level=info msg="RemoveContainer for \"c212a5820a96d18a8830866dd2e7914c29ebf2f99e2298c05ffbee40ef81f5c3\""
May 13 23:38:33.932548 containerd[1477]: time="2025-05-13T23:38:33.932521111Z" level=info msg="RemoveContainer for \"c212a5820a96d18a8830866dd2e7914c29ebf2f99e2298c05ffbee40ef81f5c3\" returns successfully"
May 13 23:38:34.604372 kubelet[2663]: I0513 23:38:34.604301 2663 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71801b2a-dfcb-4f3f-b674-faa86afb2f51" path="/var/lib/kubelet/pods/71801b2a-dfcb-4f3f-b674-faa86afb2f51/volumes"
May 13 23:38:34.604919 kubelet[2663]: I0513 23:38:34.604876 2663 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2c3947a-49a2-445d-8ed7-8eafae13a043" path="/var/lib/kubelet/pods/b2c3947a-49a2-445d-8ed7-8eafae13a043/volumes"
May 13 23:38:34.659554 sshd[4315]: Connection closed by 10.0.0.1 port 59004
May 13 23:38:34.660023 sshd-session[4312]: pam_unix(sshd:session): session closed for user core
May 13 23:38:34.672043 systemd[1]: sshd@22-10.0.0.79:22-10.0.0.1:59004.service: Deactivated successfully.
May 13 23:38:34.673727 systemd[1]: session-23.scope: Deactivated successfully.
May 13 23:38:34.674032 systemd[1]: session-23.scope: Consumed 1.245s CPU time, 24.8M memory peak.
May 13 23:38:34.675274 systemd-logind[1466]: Session 23 logged out. Waiting for processes to exit.
May 13 23:38:34.677116 systemd[1]: Started sshd@23-10.0.0.79:22-10.0.0.1:41658.service - OpenSSH per-connection server daemon (10.0.0.1:41658).
May 13 23:38:34.677872 systemd-logind[1466]: Removed session 23.
May 13 23:38:34.720837 sshd[4477]: Accepted publickey for core from 10.0.0.1 port 41658 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw
May 13 23:38:34.722180 sshd-session[4477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:38:34.727605 systemd-logind[1466]: New session 24 of user core.
May 13 23:38:34.737080 systemd[1]: Started session-24.scope - Session 24 of User core.
May 13 23:38:35.984849 sshd[4480]: Connection closed by 10.0.0.1 port 41658
May 13 23:38:35.985105 sshd-session[4477]: pam_unix(sshd:session): session closed for user core
May 13 23:38:35.998818 kubelet[2663]: I0513 23:38:35.997443 2663 topology_manager.go:215] "Topology Admit Handler" podUID="adaf4c9d-abc1-4958-bb5d-6c7e4bce3913" podNamespace="kube-system" podName="cilium-7rbsk"
May 13 23:38:35.998818 kubelet[2663]: E0513 23:38:35.997586 2663 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="71801b2a-dfcb-4f3f-b674-faa86afb2f51" containerName="mount-cgroup"
May 13 23:38:35.998818 kubelet[2663]: E0513 23:38:35.997597 2663 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="71801b2a-dfcb-4f3f-b674-faa86afb2f51" containerName="apply-sysctl-overwrites"
May 13 23:38:35.998818 kubelet[2663]: E0513 23:38:35.997605 2663 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="71801b2a-dfcb-4f3f-b674-faa86afb2f51" containerName="mount-bpf-fs"
May 13 23:38:35.998818 kubelet[2663]: E0513 23:38:35.997610 2663 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="71801b2a-dfcb-4f3f-b674-faa86afb2f51" containerName="clean-cilium-state"
May 13 23:38:35.998818 kubelet[2663]: E0513 23:38:35.997616 2663 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="71801b2a-dfcb-4f3f-b674-faa86afb2f51" containerName="cilium-agent"
May 13 23:38:35.998818 kubelet[2663]: E0513 23:38:35.997622 2663 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b2c3947a-49a2-445d-8ed7-8eafae13a043" containerName="cilium-operator"
May 13 23:38:35.998818 kubelet[2663]: I0513 23:38:35.997643 2663 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2c3947a-49a2-445d-8ed7-8eafae13a043" containerName="cilium-operator"
May 13 23:38:35.998818 kubelet[2663]: I0513 23:38:35.997650 2663 memory_manager.go:354] "RemoveStaleState removing state" podUID="71801b2a-dfcb-4f3f-b674-faa86afb2f51" containerName="cilium-agent"
May 13 23:38:36.001193 systemd[1]: sshd@23-10.0.0.79:22-10.0.0.1:41658.service: Deactivated successfully.
May 13 23:38:36.005992 systemd[1]: session-24.scope: Deactivated successfully.
May 13 23:38:36.006389 systemd[1]: session-24.scope: Consumed 1.157s CPU time, 24.3M memory peak.
May 13 23:38:36.007398 systemd-logind[1466]: Session 24 logged out. Waiting for processes to exit.
May 13 23:38:36.012357 systemd-logind[1466]: Removed session 24.
May 13 23:38:36.022736 systemd[1]: Started sshd@24-10.0.0.79:22-10.0.0.1:41660.service - OpenSSH per-connection server daemon (10.0.0.1:41660).
May 13 23:38:36.031457 systemd[1]: Created slice kubepods-burstable-podadaf4c9d_abc1_4958_bb5d_6c7e4bce3913.slice - libcontainer container kubepods-burstable-podadaf4c9d_abc1_4958_bb5d_6c7e4bce3913.slice.
May 13 23:38:36.078165 sshd[4493]: Accepted publickey for core from 10.0.0.1 port 41660 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw
May 13 23:38:36.079695 sshd-session[4493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:38:36.084857 systemd-logind[1466]: New session 25 of user core.
May 13 23:38:36.089434 kubelet[2663]: I0513 23:38:36.089130 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/adaf4c9d-abc1-4958-bb5d-6c7e4bce3913-lib-modules\") pod \"cilium-7rbsk\" (UID: \"adaf4c9d-abc1-4958-bb5d-6c7e4bce3913\") " pod="kube-system/cilium-7rbsk"
May 13 23:38:36.089434 kubelet[2663]: I0513 23:38:36.089172 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/adaf4c9d-abc1-4958-bb5d-6c7e4bce3913-xtables-lock\") pod \"cilium-7rbsk\" (UID: \"adaf4c9d-abc1-4958-bb5d-6c7e4bce3913\") " pod="kube-system/cilium-7rbsk"
May 13 23:38:36.089434 kubelet[2663]: I0513 23:38:36.089194 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h47mc\" (UniqueName: \"kubernetes.io/projected/adaf4c9d-abc1-4958-bb5d-6c7e4bce3913-kube-api-access-h47mc\") pod \"cilium-7rbsk\" (UID: \"adaf4c9d-abc1-4958-bb5d-6c7e4bce3913\") " pod="kube-system/cilium-7rbsk"
May 13 23:38:36.089434 kubelet[2663]: I0513 23:38:36.089212 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/adaf4c9d-abc1-4958-bb5d-6c7e4bce3913-cilium-cgroup\") pod \"cilium-7rbsk\" (UID: \"adaf4c9d-abc1-4958-bb5d-6c7e4bce3913\") " pod="kube-system/cilium-7rbsk"
May 13 23:38:36.089434 kubelet[2663]: I0513 23:38:36.089229 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/adaf4c9d-abc1-4958-bb5d-6c7e4bce3913-host-proc-sys-kernel\") pod \"cilium-7rbsk\" (UID: \"adaf4c9d-abc1-4958-bb5d-6c7e4bce3913\") " pod="kube-system/cilium-7rbsk"
May 13 23:38:36.089434 kubelet[2663]: I0513 23:38:36.089243 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/adaf4c9d-abc1-4958-bb5d-6c7e4bce3913-cilium-run\") pod \"cilium-7rbsk\" (UID: \"adaf4c9d-abc1-4958-bb5d-6c7e4bce3913\") " pod="kube-system/cilium-7rbsk"
May 13 23:38:36.089714 kubelet[2663]: I0513 23:38:36.089258 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/adaf4c9d-abc1-4958-bb5d-6c7e4bce3913-hubble-tls\") pod \"cilium-7rbsk\" (UID: \"adaf4c9d-abc1-4958-bb5d-6c7e4bce3913\") " pod="kube-system/cilium-7rbsk"
May 13 23:38:36.089714 kubelet[2663]: I0513 23:38:36.089274 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/adaf4c9d-abc1-4958-bb5d-6c7e4bce3913-bpf-maps\") pod \"cilium-7rbsk\" (UID: \"adaf4c9d-abc1-4958-bb5d-6c7e4bce3913\") " pod="kube-system/cilium-7rbsk"
May 13 23:38:36.089714 kubelet[2663]: I0513 23:38:36.089289 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/adaf4c9d-abc1-4958-bb5d-6c7e4bce3913-cni-path\") pod \"cilium-7rbsk\" (UID: \"adaf4c9d-abc1-4958-bb5d-6c7e4bce3913\") " pod="kube-system/cilium-7rbsk"
May 13 23:38:36.089714 kubelet[2663]: I0513 23:38:36.089304 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/adaf4c9d-abc1-4958-bb5d-6c7e4bce3913-clustermesh-secrets\") pod \"cilium-7rbsk\" (UID: \"adaf4c9d-abc1-4958-bb5d-6c7e4bce3913\") " pod="kube-system/cilium-7rbsk"
May 13 23:38:36.089714 kubelet[2663]: I0513 23:38:36.089320 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/adaf4c9d-abc1-4958-bb5d-6c7e4bce3913-cilium-config-path\") pod \"cilium-7rbsk\" (UID: \"adaf4c9d-abc1-4958-bb5d-6c7e4bce3913\") " pod="kube-system/cilium-7rbsk"
May 13 23:38:36.089714 kubelet[2663]: I0513 23:38:36.089337 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/adaf4c9d-abc1-4958-bb5d-6c7e4bce3913-hostproc\") pod \"cilium-7rbsk\" (UID: \"adaf4c9d-abc1-4958-bb5d-6c7e4bce3913\") " pod="kube-system/cilium-7rbsk"
May 13 23:38:36.089874 kubelet[2663]: I0513 23:38:36.089356 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/adaf4c9d-abc1-4958-bb5d-6c7e4bce3913-etc-cni-netd\") pod \"cilium-7rbsk\" (UID: \"adaf4c9d-abc1-4958-bb5d-6c7e4bce3913\") " pod="kube-system/cilium-7rbsk"
May 13 23:38:36.089874 kubelet[2663]: I0513 23:38:36.089407 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/adaf4c9d-abc1-4958-bb5d-6c7e4bce3913-cilium-ipsec-secrets\") pod \"cilium-7rbsk\" (UID: \"adaf4c9d-abc1-4958-bb5d-6c7e4bce3913\") " pod="kube-system/cilium-7rbsk"
May 13 23:38:36.089874 kubelet[2663]: I0513 23:38:36.089436 2663 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/adaf4c9d-abc1-4958-bb5d-6c7e4bce3913-host-proc-sys-net\") pod \"cilium-7rbsk\" (UID: \"adaf4c9d-abc1-4958-bb5d-6c7e4bce3913\") " pod="kube-system/cilium-7rbsk"
May 13 23:38:36.090098 systemd[1]: Started session-25.scope - Session 25 of User core.
May 13 23:38:36.143574 sshd[4495]: Connection closed by 10.0.0.1 port 41660
May 13 23:38:36.145112 sshd-session[4493]: pam_unix(sshd:session): session closed for user core
May 13 23:38:36.153129 systemd[1]: sshd@24-10.0.0.79:22-10.0.0.1:41660.service: Deactivated successfully.
May 13 23:38:36.154763 systemd[1]: session-25.scope: Deactivated successfully.
May 13 23:38:36.157347 systemd-logind[1466]: Session 25 logged out. Waiting for processes to exit.
May 13 23:38:36.164124 systemd[1]: Started sshd@25-10.0.0.79:22-10.0.0.1:41668.service - OpenSSH per-connection server daemon (10.0.0.1:41668).
May 13 23:38:36.165020 systemd-logind[1466]: Removed session 25.
May 13 23:38:36.205365 sshd[4501]: Accepted publickey for core from 10.0.0.1 port 41668 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw
May 13 23:38:36.206716 sshd-session[4501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:38:36.211962 systemd-logind[1466]: New session 26 of user core.
May 13 23:38:36.227957 systemd[1]: Started session-26.scope - Session 26 of User core.
May 13 23:38:36.337448 containerd[1477]: time="2025-05-13T23:38:36.336695601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7rbsk,Uid:adaf4c9d-abc1-4958-bb5d-6c7e4bce3913,Namespace:kube-system,Attempt:0,}"
May 13 23:38:36.358296 containerd[1477]: time="2025-05-13T23:38:36.358210243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 23:38:36.358296 containerd[1477]: time="2025-05-13T23:38:36.358267004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 23:38:36.358296 containerd[1477]: time="2025-05-13T23:38:36.358279125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 23:38:36.358478 containerd[1477]: time="2025-05-13T23:38:36.358361607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 23:38:36.385029 systemd[1]: Started cri-containerd-2678410677acaf85ddf5820b371bf1fa9e7a852a3de037e2a2ef647ef36c7b7b.scope - libcontainer container 2678410677acaf85ddf5820b371bf1fa9e7a852a3de037e2a2ef647ef36c7b7b.
May 13 23:38:36.406129 containerd[1477]: time="2025-05-13T23:38:36.405691659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7rbsk,Uid:adaf4c9d-abc1-4958-bb5d-6c7e4bce3913,Namespace:kube-system,Attempt:0,} returns sandbox id \"2678410677acaf85ddf5820b371bf1fa9e7a852a3de037e2a2ef647ef36c7b7b\""
May 13 23:38:36.408600 containerd[1477]: time="2025-05-13T23:38:36.408570624Z" level=info msg="CreateContainer within sandbox \"2678410677acaf85ddf5820b371bf1fa9e7a852a3de037e2a2ef647ef36c7b7b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 13 23:38:36.417933 containerd[1477]: time="2025-05-13T23:38:36.417864622Z" level=info msg="CreateContainer within sandbox \"2678410677acaf85ddf5820b371bf1fa9e7a852a3de037e2a2ef647ef36c7b7b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"00aeb98e2f84f50d9cabbd72d1a74bffaaf96cf32e74737b800ee157f7963283\""
May 13 23:38:36.418495 containerd[1477]: time="2025-05-13T23:38:36.418458759Z" level=info msg="StartContainer for \"00aeb98e2f84f50d9cabbd72d1a74bffaaf96cf32e74737b800ee157f7963283\""
May 13 23:38:36.444094 systemd[1]: Started cri-containerd-00aeb98e2f84f50d9cabbd72d1a74bffaaf96cf32e74737b800ee157f7963283.scope - libcontainer container 00aeb98e2f84f50d9cabbd72d1a74bffaaf96cf32e74737b800ee157f7963283.
May 13 23:38:36.465159 containerd[1477]: time="2025-05-13T23:38:36.465031668Z" level=info msg="StartContainer for \"00aeb98e2f84f50d9cabbd72d1a74bffaaf96cf32e74737b800ee157f7963283\" returns successfully"
May 13 23:38:36.486531 systemd[1]: cri-containerd-00aeb98e2f84f50d9cabbd72d1a74bffaaf96cf32e74737b800ee157f7963283.scope: Deactivated successfully.
May 13 23:38:36.514964 containerd[1477]: time="2025-05-13T23:38:36.514902636Z" level=info msg="shim disconnected" id=00aeb98e2f84f50d9cabbd72d1a74bffaaf96cf32e74737b800ee157f7963283 namespace=k8s.io
May 13 23:38:36.514964 containerd[1477]: time="2025-05-13T23:38:36.514959877Z" level=warning msg="cleaning up after shim disconnected" id=00aeb98e2f84f50d9cabbd72d1a74bffaaf96cf32e74737b800ee157f7963283 namespace=k8s.io
May 13 23:38:36.514964 containerd[1477]: time="2025-05-13T23:38:36.514968958Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 23:38:36.525044 containerd[1477]: time="2025-05-13T23:38:36.524972816Z" level=warning msg="cleanup warnings time=\"2025-05-13T23:38:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 13 23:38:36.914899 containerd[1477]: time="2025-05-13T23:38:36.914643837Z" level=info msg="CreateContainer within sandbox \"2678410677acaf85ddf5820b371bf1fa9e7a852a3de037e2a2ef647ef36c7b7b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 13 23:38:36.924575 containerd[1477]: time="2025-05-13T23:38:36.924499451Z" level=info msg="CreateContainer within sandbox \"2678410677acaf85ddf5820b371bf1fa9e7a852a3de037e2a2ef647ef36c7b7b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dc348c6f267bdec6ebc20c549848c18f7e8cc913a9bd77c1d99c9acc7e66e70d\""
May 13 23:38:36.925299 containerd[1477]: time="2025-05-13T23:38:36.925266154Z" level=info msg="StartContainer for \"dc348c6f267bdec6ebc20c549848c18f7e8cc913a9bd77c1d99c9acc7e66e70d\""
May 13 23:38:36.958044 systemd[1]: Started cri-containerd-dc348c6f267bdec6ebc20c549848c18f7e8cc913a9bd77c1d99c9acc7e66e70d.scope - libcontainer container dc348c6f267bdec6ebc20c549848c18f7e8cc913a9bd77c1d99c9acc7e66e70d.
May 13 23:38:36.983386 containerd[1477]: time="2025-05-13T23:38:36.983340166Z" level=info msg="StartContainer for \"dc348c6f267bdec6ebc20c549848c18f7e8cc913a9bd77c1d99c9acc7e66e70d\" returns successfully"
May 13 23:38:36.991338 systemd[1]: cri-containerd-dc348c6f267bdec6ebc20c549848c18f7e8cc913a9bd77c1d99c9acc7e66e70d.scope: Deactivated successfully.
May 13 23:38:37.020262 containerd[1477]: time="2025-05-13T23:38:37.020201275Z" level=info msg="shim disconnected" id=dc348c6f267bdec6ebc20c549848c18f7e8cc913a9bd77c1d99c9acc7e66e70d namespace=k8s.io
May 13 23:38:37.020560 containerd[1477]: time="2025-05-13T23:38:37.020465683Z" level=warning msg="cleaning up after shim disconnected" id=dc348c6f267bdec6ebc20c549848c18f7e8cc913a9bd77c1d99c9acc7e66e70d namespace=k8s.io
May 13 23:38:37.020560 containerd[1477]: time="2025-05-13T23:38:37.020482244Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 23:38:37.903135 containerd[1477]: time="2025-05-13T23:38:37.903092564Z" level=info msg="CreateContainer within sandbox \"2678410677acaf85ddf5820b371bf1fa9e7a852a3de037e2a2ef647ef36c7b7b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 23:38:37.918855 containerd[1477]: time="2025-05-13T23:38:37.918695277Z" level=info msg="CreateContainer within sandbox \"2678410677acaf85ddf5820b371bf1fa9e7a852a3de037e2a2ef647ef36c7b7b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"400cd09f22cb401cd4922c41cd5ae4093317f1aee2767f39489da0f3cf55c315\""
May 13 23:38:37.919275 containerd[1477]: time="2025-05-13T23:38:37.919219613Z" level=info msg="StartContainer for \"400cd09f22cb401cd4922c41cd5ae4093317f1aee2767f39489da0f3cf55c315\""
May 13 23:38:37.948989 systemd[1]: Started cri-containerd-400cd09f22cb401cd4922c41cd5ae4093317f1aee2767f39489da0f3cf55c315.scope - libcontainer container 400cd09f22cb401cd4922c41cd5ae4093317f1aee2767f39489da0f3cf55c315.
May 13 23:38:37.978447 containerd[1477]: time="2025-05-13T23:38:37.978399250Z" level=info msg="StartContainer for \"400cd09f22cb401cd4922c41cd5ae4093317f1aee2767f39489da0f3cf55c315\" returns successfully"
May 13 23:38:37.980063 systemd[1]: cri-containerd-400cd09f22cb401cd4922c41cd5ae4093317f1aee2767f39489da0f3cf55c315.scope: Deactivated successfully.
May 13 23:38:38.010536 containerd[1477]: time="2025-05-13T23:38:38.010468509Z" level=info msg="shim disconnected" id=400cd09f22cb401cd4922c41cd5ae4093317f1aee2767f39489da0f3cf55c315 namespace=k8s.io
May 13 23:38:38.010788 containerd[1477]: time="2025-05-13T23:38:38.010770558Z" level=warning msg="cleaning up after shim disconnected" id=400cd09f22cb401cd4922c41cd5ae4093317f1aee2767f39489da0f3cf55c315 namespace=k8s.io
May 13 23:38:38.010887 containerd[1477]: time="2025-05-13T23:38:38.010871081Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 23:38:38.194253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-400cd09f22cb401cd4922c41cd5ae4093317f1aee2767f39489da0f3cf55c315-rootfs.mount: Deactivated successfully.
May 13 23:38:38.657443 kubelet[2663]: E0513 23:38:38.657280 2663 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 13 23:38:38.908056 containerd[1477]: time="2025-05-13T23:38:38.907949310Z" level=info msg="CreateContainer within sandbox \"2678410677acaf85ddf5820b371bf1fa9e7a852a3de037e2a2ef647ef36c7b7b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 23:38:38.929975 containerd[1477]: time="2025-05-13T23:38:38.929826426Z" level=info msg="CreateContainer within sandbox \"2678410677acaf85ddf5820b371bf1fa9e7a852a3de037e2a2ef647ef36c7b7b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5d0b87674e3685185bb37426ca12f8f491cad344f1d814f70532b2f2b17ed2d0\""
May 13 23:38:38.931874 containerd[1477]: time="2025-05-13T23:38:38.930686332Z" level=info msg="StartContainer for \"5d0b87674e3685185bb37426ca12f8f491cad344f1d814f70532b2f2b17ed2d0\""
May 13 23:38:38.961997 systemd[1]: Started cri-containerd-5d0b87674e3685185bb37426ca12f8f491cad344f1d814f70532b2f2b17ed2d0.scope - libcontainer container 5d0b87674e3685185bb37426ca12f8f491cad344f1d814f70532b2f2b17ed2d0.
May 13 23:38:38.985536 systemd[1]: cri-containerd-5d0b87674e3685185bb37426ca12f8f491cad344f1d814f70532b2f2b17ed2d0.scope: Deactivated successfully.
May 13 23:38:38.988378 containerd[1477]: time="2025-05-13T23:38:38.988267231Z" level=info msg="StartContainer for \"5d0b87674e3685185bb37426ca12f8f491cad344f1d814f70532b2f2b17ed2d0\" returns successfully"
May 13 23:38:39.011217 containerd[1477]: time="2025-05-13T23:38:39.011141342Z" level=info msg="shim disconnected" id=5d0b87674e3685185bb37426ca12f8f491cad344f1d814f70532b2f2b17ed2d0 namespace=k8s.io
May 13 23:38:39.011217 containerd[1477]: time="2025-05-13T23:38:39.011200064Z" level=warning msg="cleaning up after shim disconnected" id=5d0b87674e3685185bb37426ca12f8f491cad344f1d814f70532b2f2b17ed2d0 namespace=k8s.io
May 13 23:38:39.011217 containerd[1477]: time="2025-05-13T23:38:39.011208544Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 23:38:39.194374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d0b87674e3685185bb37426ca12f8f491cad344f1d814f70532b2f2b17ed2d0-rootfs.mount: Deactivated successfully.
May 13 23:38:39.912778 containerd[1477]: time="2025-05-13T23:38:39.912626644Z" level=info msg="CreateContainer within sandbox \"2678410677acaf85ddf5820b371bf1fa9e7a852a3de037e2a2ef647ef36c7b7b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 23:38:39.928359 kubelet[2663]: I0513 23:38:39.928305 2663 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T23:38:39Z","lastTransitionTime":"2025-05-13T23:38:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 13 23:38:39.933157 containerd[1477]: time="2025-05-13T23:38:39.933103647Z" level=info msg="CreateContainer within sandbox \"2678410677acaf85ddf5820b371bf1fa9e7a852a3de037e2a2ef647ef36c7b7b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7621cde87ca3ee21befd4f7b2bc9f032d40cdb494e03887fbba1ec906ebc1712\""
May 13 23:38:39.934023 containerd[1477]: time="2025-05-13T23:38:39.933984995Z" level=info msg="StartContainer for \"7621cde87ca3ee21befd4f7b2bc9f032d40cdb494e03887fbba1ec906ebc1712\""
May 13 23:38:39.963981 systemd[1]: Started cri-containerd-7621cde87ca3ee21befd4f7b2bc9f032d40cdb494e03887fbba1ec906ebc1712.scope - libcontainer container 7621cde87ca3ee21befd4f7b2bc9f032d40cdb494e03887fbba1ec906ebc1712.
May 13 23:38:39.993745 containerd[1477]: time="2025-05-13T23:38:39.993546465Z" level=info msg="StartContainer for \"7621cde87ca3ee21befd4f7b2bc9f032d40cdb494e03887fbba1ec906ebc1712\" returns successfully"
May 13 23:38:40.312824 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 13 23:38:43.197262 systemd-networkd[1400]: lxc_health: Link UP
May 13 23:38:43.207426 systemd-networkd[1400]: lxc_health: Gained carrier
May 13 23:38:44.358131 kubelet[2663]: I0513 23:38:44.357735 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7rbsk" podStartSLOduration=9.357717546 podStartE2EDuration="9.357717546s" podCreationTimestamp="2025-05-13 23:38:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:38:40.930851309 +0000 UTC m=+82.414428468" watchObservedRunningTime="2025-05-13 23:38:44.357717546 +0000 UTC m=+85.841294705"
May 13 23:38:44.976974 systemd-networkd[1400]: lxc_health: Gained IPv6LL
May 13 23:38:48.995707 sshd[4509]: Connection closed by 10.0.0.1 port 41668
May 13 23:38:48.996498 sshd-session[4501]: pam_unix(sshd:session): session closed for user core
May 13 23:38:48.999708 systemd[1]: sshd@25-10.0.0.79:22-10.0.0.1:41668.service: Deactivated successfully.
May 13 23:38:49.001500 systemd[1]: session-26.scope: Deactivated successfully.
May 13 23:38:49.003592 systemd-logind[1466]: Session 26 logged out. Waiting for processes to exit.
May 13 23:38:49.004507 systemd-logind[1466]: Removed session 26.