Feb 13 19:16:54.895701 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 13 19:16:54.895722 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Feb 13 17:39:57 -00 2025 Feb 13 19:16:54.895736 kernel: KASLR enabled Feb 13 19:16:54.895742 kernel: efi: EFI v2.7 by EDK II Feb 13 19:16:54.895748 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 Feb 13 19:16:54.895753 kernel: random: crng init done Feb 13 19:16:54.895760 kernel: secureboot: Secure boot disabled Feb 13 19:16:54.895766 kernel: ACPI: Early table checksum verification disabled Feb 13 19:16:54.895772 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Feb 13 19:16:54.895779 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Feb 13 19:16:54.895785 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:16:54.895791 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:16:54.895797 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:16:54.895803 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:16:54.895810 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:16:54.895817 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:16:54.895823 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:16:54.895830 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:16:54.895836 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:16:54.895842 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Feb 13 19:16:54.895848 kernel: NUMA: Failed to initialise from firmware Feb 13 19:16:54.895854 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 19:16:54.895860 kernel: NUMA: NODE_DATA [mem 0xdc95a800-0xdc95ffff] Feb 13 19:16:54.895866 kernel: Zone ranges: Feb 13 19:16:54.895872 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 19:16:54.895880 kernel: DMA32 empty Feb 13 19:16:54.895886 kernel: Normal empty Feb 13 19:16:54.895892 kernel: Movable zone start for each node Feb 13 19:16:54.895898 kernel: Early memory node ranges Feb 13 19:16:54.895904 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] Feb 13 19:16:54.895910 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] Feb 13 19:16:54.895916 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] Feb 13 19:16:54.895922 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Feb 13 19:16:54.895928 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Feb 13 19:16:54.895934 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Feb 13 19:16:54.895940 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Feb 13 19:16:54.895946 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Feb 13 19:16:54.895953 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Feb 13 19:16:54.895959 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 19:16:54.895966 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Feb 13 19:16:54.895974 kernel: psci: 
probing for conduit method from ACPI. Feb 13 19:16:54.895981 kernel: psci: PSCIv1.1 detected in firmware. Feb 13 19:16:54.895987 kernel: psci: Using standard PSCI v0.2 function IDs Feb 13 19:16:54.896049 kernel: psci: Trusted OS migration not required Feb 13 19:16:54.896056 kernel: psci: SMC Calling Convention v1.1 Feb 13 19:16:54.896063 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Feb 13 19:16:54.896069 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 19:16:54.896076 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 19:16:54.896082 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Feb 13 19:16:54.896089 kernel: Detected PIPT I-cache on CPU0 Feb 13 19:16:54.896095 kernel: CPU features: detected: GIC system register CPU interface Feb 13 19:16:54.896109 kernel: CPU features: detected: Hardware dirty bit management Feb 13 19:16:54.896115 kernel: CPU features: detected: Spectre-v4 Feb 13 19:16:54.896123 kernel: CPU features: detected: Spectre-BHB Feb 13 19:16:54.896130 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 13 19:16:54.896137 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 13 19:16:54.896143 kernel: CPU features: detected: ARM erratum 1418040 Feb 13 19:16:54.896150 kernel: CPU features: detected: SSBS not fully self-synchronizing Feb 13 19:16:54.896156 kernel: alternatives: applying boot alternatives Feb 13 19:16:54.896163 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f06bad36699a22ae88c1968cd72b62b3503d97da521712e50a4b744320b1ba33 Feb 13 19:16:54.896170 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 19:16:54.896177 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 19:16:54.896183 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 19:16:54.896190 kernel: Fallback order for Node 0: 0 Feb 13 19:16:54.896197 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Feb 13 19:16:54.896204 kernel: Policy zone: DMA Feb 13 19:16:54.896210 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 19:16:54.896217 kernel: software IO TLB: area num 4. Feb 13 19:16:54.896223 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Feb 13 19:16:54.896230 kernel: Memory: 2387548K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 184740K reserved, 0K cma-reserved) Feb 13 19:16:54.896237 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 13 19:16:54.896243 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 19:16:54.896250 kernel: rcu: RCU event tracing is enabled. Feb 13 19:16:54.896257 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 13 19:16:54.896264 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 19:16:54.896270 kernel: Tracing variant of Tasks RCU enabled. Feb 13 19:16:54.896278 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 19:16:54.896285 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 13 19:16:54.896291 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 19:16:54.896298 kernel: GICv3: 256 SPIs implemented Feb 13 19:16:54.896304 kernel: GICv3: 0 Extended SPIs implemented Feb 13 19:16:54.896311 kernel: Root IRQ handler: gic_handle_irq Feb 13 19:16:54.896317 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Feb 13 19:16:54.896323 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Feb 13 19:16:54.896330 kernel: ITS [mem 0x08080000-0x0809ffff] Feb 13 19:16:54.896337 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Feb 13 19:16:54.896343 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Feb 13 19:16:54.896351 kernel: GICv3: using LPI property table @0x00000000400f0000 Feb 13 19:16:54.896358 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Feb 13 19:16:54.896364 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 19:16:54.896371 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:16:54.896377 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 13 19:16:54.896384 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 13 19:16:54.896391 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 13 19:16:54.896397 kernel: arm-pv: using stolen time PV Feb 13 19:16:54.896404 kernel: Console: colour dummy device 80x25 Feb 13 19:16:54.896411 kernel: ACPI: Core revision 20230628 Feb 13 19:16:54.896418 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Feb 13 19:16:54.896425 kernel: pid_max: default: 32768 minimum: 301 Feb 13 19:16:54.896432 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 19:16:54.896439 kernel: landlock: Up and running. Feb 13 19:16:54.896445 kernel: SELinux: Initializing. Feb 13 19:16:54.896452 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 19:16:54.896459 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 19:16:54.896466 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 19:16:54.896473 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 19:16:54.896479 kernel: rcu: Hierarchical SRCU implementation. Feb 13 19:16:54.896488 kernel: rcu: Max phase no-delay instances is 400. Feb 13 19:16:54.896494 kernel: Platform MSI: ITS@0x8080000 domain created Feb 13 19:16:54.896501 kernel: PCI/MSI: ITS@0x8080000 domain created Feb 13 19:16:54.896508 kernel: Remapping and enabling EFI services. Feb 13 19:16:54.896515 kernel: smp: Bringing up secondary CPUs ... 
Feb 13 19:16:54.896524 kernel: Detected PIPT I-cache on CPU1 Feb 13 19:16:54.896531 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Feb 13 19:16:54.896538 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Feb 13 19:16:54.896546 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:16:54.896556 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 13 19:16:54.896563 kernel: Detected PIPT I-cache on CPU2 Feb 13 19:16:54.896575 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Feb 13 19:16:54.896583 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Feb 13 19:16:54.896591 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:16:54.896598 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Feb 13 19:16:54.896605 kernel: Detected PIPT I-cache on CPU3 Feb 13 19:16:54.896614 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Feb 13 19:16:54.896621 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Feb 13 19:16:54.896630 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 19:16:54.896637 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Feb 13 19:16:54.896645 kernel: smp: Brought up 1 node, 4 CPUs Feb 13 19:16:54.896656 kernel: SMP: Total of 4 processors activated. Feb 13 19:16:54.896666 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 19:16:54.896673 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 13 19:16:54.896683 kernel: CPU features: detected: Common not Private translations Feb 13 19:16:54.896692 kernel: CPU features: detected: CRC32 instructions Feb 13 19:16:54.896701 kernel: CPU features: detected: Enhanced Virtualization Traps Feb 13 19:16:54.896708 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 13 19:16:54.896715 kernel: CPU features: detected: LSE atomic instructions Feb 13 19:16:54.896722 kernel: CPU features: detected: Privileged Access Never Feb 13 19:16:54.896729 kernel: CPU features: detected: RAS Extension Support Feb 13 19:16:54.896736 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 13 19:16:54.896743 kernel: CPU: All CPU(s) started at EL1 Feb 13 19:16:54.896750 kernel: alternatives: applying system-wide alternatives Feb 13 19:16:54.896757 kernel: devtmpfs: initialized Feb 13 19:16:54.896764 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 19:16:54.896773 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 13 19:16:54.896780 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 19:16:54.896787 kernel: SMBIOS 3.0.0 present. 
Feb 13 19:16:54.896794 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Feb 13 19:16:54.896801 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 19:16:54.896808 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 19:16:54.896815 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 19:16:54.896822 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 19:16:54.896831 kernel: audit: initializing netlink subsys (disabled) Feb 13 19:16:54.896838 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1 Feb 13 19:16:54.896845 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 19:16:54.896852 kernel: cpuidle: using governor menu Feb 13 19:16:54.896859 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 13 19:16:54.896866 kernel: ASID allocator initialised with 32768 entries Feb 13 19:16:54.896873 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 19:16:54.896880 kernel: Serial: AMBA PL011 UART driver Feb 13 19:16:54.896887 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Feb 13 19:16:54.896895 kernel: Modules: 0 pages in range for non-PLT usage Feb 13 19:16:54.896902 kernel: Modules: 509280 pages in range for PLT usage Feb 13 19:16:54.896909 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 19:16:54.896916 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 19:16:54.896923 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 19:16:54.896930 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 19:16:54.896937 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 19:16:54.896944 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 19:16:54.896951 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Feb 13 19:16:54.896959 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 19:16:54.896966 kernel: ACPI: Added _OSI(Module Device) Feb 13 19:16:54.896973 kernel: ACPI: Added _OSI(Processor Device) Feb 13 19:16:54.896980 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 19:16:54.896987 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 19:16:54.896999 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 19:16:54.897006 kernel: ACPI: Interpreter enabled Feb 13 19:16:54.897013 kernel: ACPI: Using GIC for interrupt routing Feb 13 19:16:54.897020 kernel: ACPI: MCFG table detected, 1 entries Feb 13 19:16:54.897027 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Feb 13 19:16:54.897036 kernel: printk: console [ttyAMA0] enabled Feb 13 19:16:54.897043 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 19:16:54.897178 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 19:16:54.897253 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 19:16:54.897318 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 19:16:54.897382 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Feb 13 19:16:54.897445 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Feb 13 19:16:54.897457 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Feb 13 19:16:54.897465 
kernel: PCI host bridge to bus 0000:00 Feb 13 19:16:54.897533 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Feb 13 19:16:54.897592 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 13 19:16:54.897651 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Feb 13 19:16:54.897714 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 19:16:54.897797 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Feb 13 19:16:54.897874 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 19:16:54.897942 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Feb 13 19:16:54.898021 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Feb 13 19:16:54.898113 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 19:16:54.898195 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 19:16:54.898260 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Feb 13 19:16:54.898325 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Feb 13 19:16:54.898388 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Feb 13 19:16:54.898445 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 13 19:16:54.898502 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Feb 13 19:16:54.898512 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 13 19:16:54.898519 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 13 19:16:54.898526 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 13 19:16:54.898533 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 13 19:16:54.898542 kernel: iommu: Default domain type: Translated Feb 13 19:16:54.898550 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 19:16:54.898557 kernel: efivars: Registered efivars operations Feb 13 19:16:54.898564 kernel: vgaarb: loaded Feb 13 19:16:54.898571 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 19:16:54.898578 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 19:16:54.898585 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 19:16:54.898592 kernel: pnp: PnP ACPI init Feb 13 19:16:54.898666 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Feb 13 19:16:54.898678 kernel: pnp: PnP ACPI: found 1 devices Feb 13 19:16:54.898685 kernel: NET: Registered PF_INET protocol family Feb 13 19:16:54.898692 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 19:16:54.898699 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 19:16:54.898707 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 19:16:54.898714 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 19:16:54.898721 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 19:16:54.898728 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 19:16:54.898737 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 19:16:54.898744 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 19:16:54.898751 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 19:16:54.898758 kernel: PCI: CLS 0 bytes, default 64 Feb 13 19:16:54.898764 kernel: kvm [1]: HYP mode not available 
Feb 13 19:16:54.898771 kernel: Initialise system trusted keyrings Feb 13 19:16:54.898778 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 19:16:54.898785 kernel: Key type asymmetric registered Feb 13 19:16:54.898792 kernel: Asymmetric key parser 'x509' registered Feb 13 19:16:54.898799 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 19:16:54.898807 kernel: io scheduler mq-deadline registered Feb 13 19:16:54.898814 kernel: io scheduler kyber registered Feb 13 19:16:54.898821 kernel: io scheduler bfq registered Feb 13 19:16:54.898828 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 19:16:54.898835 kernel: ACPI: button: Power Button [PWRB] Feb 13 19:16:54.898843 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 19:16:54.898912 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Feb 13 19:16:54.898922 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 19:16:54.898929 kernel: thunder_xcv, ver 1.0 Feb 13 19:16:54.898938 kernel: thunder_bgx, ver 1.0 Feb 13 19:16:54.898945 kernel: nicpf, ver 1.0 Feb 13 19:16:54.898952 kernel: nicvf, ver 1.0 Feb 13 19:16:54.899032 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 19:16:54.899096 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:16:54 UTC (1739474214) Feb 13 19:16:54.899111 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 19:16:54.899119 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 13 19:16:54.899126 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 19:16:54.899135 kernel: watchdog: Hard watchdog permanently disabled Feb 13 19:16:54.899142 kernel: NET: Registered PF_INET6 protocol family Feb 13 19:16:54.899149 kernel: Segment Routing with IPv6 Feb 13 19:16:54.899156 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 19:16:54.899163 kernel: NET: Registered PF_PACKET protocol family Feb 13 19:16:54.899170 kernel: Key type dns_resolver registered Feb 13 19:16:54.899177 kernel: registered taskstats version 1 Feb 13 19:16:54.899184 kernel: Loading compiled-in X.509 certificates Feb 13 19:16:54.899191 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 58bec1a0c6b8a133d1af4ea745973da0351f7027' Feb 13 19:16:54.899200 kernel: Key type .fscrypt registered Feb 13 19:16:54.899206 kernel: Key type fscrypt-provisioning registered Feb 13 19:16:54.899214 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 19:16:54.899221 kernel: ima: Allocated hash algorithm: sha1 Feb 13 19:16:54.899228 kernel: ima: No architecture policies found Feb 13 19:16:54.899235 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 19:16:54.899242 kernel: clk: Disabling unused clocks Feb 13 19:16:54.899249 kernel: Freeing unused kernel memory: 38336K Feb 13 19:16:54.899256 kernel: Run /init as init process Feb 13 19:16:54.899264 kernel: with arguments: Feb 13 19:16:54.899271 kernel: /init Feb 13 19:16:54.899277 kernel: with environment: Feb 13 19:16:54.899284 kernel: HOME=/ Feb 13 19:16:54.899291 kernel: TERM=linux Feb 13 19:16:54.899298 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 19:16:54.899306 systemd[1]: Successfully made /usr/ read-only. 
Feb 13 19:16:54.899321 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 19:16:54.899340 systemd[1]: Detected virtualization kvm. Feb 13 19:16:54.899347 systemd[1]: Detected architecture arm64. Feb 13 19:16:54.899354 systemd[1]: Running in initrd. Feb 13 19:16:54.899361 systemd[1]: No hostname configured, using default hostname. Feb 13 19:16:54.899369 systemd[1]: Hostname set to . Feb 13 19:16:54.899377 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:16:54.899384 systemd[1]: Queued start job for default target initrd.target. Feb 13 19:16:54.899396 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:16:54.899405 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:16:54.899413 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 19:16:54.899421 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:16:54.899429 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 19:16:54.899437 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 19:16:54.899446 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 19:16:54.899455 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 19:16:54.899462 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:16:54.899470 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:16:54.899477 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:16:54.899485 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:16:54.899493 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:16:54.899500 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:16:54.899508 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:16:54.899515 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:16:54.899524 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:16:54.899532 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Feb 13 19:16:54.899540 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:16:54.899548 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:16:54.899555 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:16:54.899563 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:16:54.899570 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 19:16:54.899578 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:16:54.899587 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:16:54.899594 systemd[1]: Starting systemd-fsck-usr.service... 
Feb 13 19:16:54.899602 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:16:54.899610 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:16:54.899619 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:16:54.899627 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:16:54.899638 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:16:54.899650 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 19:16:54.899677 systemd-journald[239]: Collecting audit messages is disabled. Feb 13 19:16:54.899697 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:16:54.899705 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:16:54.899712 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:16:54.899720 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 19:16:54.899728 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:16:54.899735 kernel: Bridge firewalling registered Feb 13 19:16:54.899743 systemd-journald[239]: Journal started Feb 13 19:16:54.899761 systemd-journald[239]: Runtime Journal (/run/log/journal/7db411dbabd645878d5bbde769c25dc6) is 5.9M, max 47.3M, 41.4M free. Feb 13 19:16:54.885440 systemd-modules-load[240]: Inserted module 'overlay' Feb 13 19:16:54.900113 systemd-modules-load[240]: Inserted module 'br_netfilter' Feb 13 19:16:54.902255 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:16:54.904433 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:16:54.904951 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:16:54.909557 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:16:54.917114 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:16:54.919268 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:16:54.921353 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:16:54.923482 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 19:16:54.924476 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:16:54.930519 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:16:54.937049 dracut-cmdline[276]: dracut-dracut-053 Feb 13 19:16:54.940381 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f06bad36699a22ae88c1968cd72b62b3503d97da521712e50a4b744320b1ba33 Feb 13 19:16:54.939183 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:16:54.971086 systemd-resolved[284]: Positive Trust Anchors: Feb 13 19:16:54.971110 systemd-resolved[284]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:16:54.971140 systemd-resolved[284]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:16:54.975679 systemd-resolved[284]: Defaulting to hostname 'linux'. Feb 13 19:16:54.976611 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:16:54.978508 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:16:55.005008 kernel: SCSI subsystem initialized Feb 13 19:16:55.009023 kernel: Loading iSCSI transport class v2.0-870. Feb 13 19:16:55.018041 kernel: iscsi: registered transport (tcp) Feb 13 19:16:55.029010 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:16:55.029031 kernel: QLogic iSCSI HBA Driver Feb 13 19:16:55.068044 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:16:55.078185 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:16:55.092430 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 19:16:55.092473 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:16:55.093207 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:16:55.138016 kernel: raid6: neonx8 gen() 15722 MB/s Feb 13 19:16:55.155002 kernel: raid6: neonx4 gen() 15760 MB/s Feb 13 19:16:55.172009 kernel: raid6: neonx2 gen() 13149 MB/s Feb 13 19:16:55.189008 kernel: raid6: neonx1 gen() 10454 MB/s Feb 13 19:16:55.206016 kernel: raid6: int64x8 gen() 6761 MB/s Feb 13 19:16:55.223014 kernel: raid6: int64x4 gen() 7321 MB/s Feb 13 19:16:55.240005 kernel: raid6: int64x2 gen() 6105 MB/s Feb 13 19:16:55.257015 kernel: raid6: int64x1 gen() 5052 MB/s Feb 13 19:16:55.257036 kernel: raid6: using algorithm neonx4 gen() 15760 MB/s Feb 13 19:16:55.274009 kernel: raid6: .... xor() 12367 MB/s, rmw enabled Feb 13 19:16:55.274020 kernel: raid6: using neon recovery algorithm Feb 13 19:16:55.279346 kernel: xor: measuring software checksum speed Feb 13 19:16:55.279365 kernel: 8regs : 21653 MB/sec Feb 13 19:16:55.279374 kernel: 32regs : 21699 MB/sec Feb 13 19:16:55.280281 kernel: arm64_neon : 27965 MB/sec Feb 13 19:16:55.280291 kernel: xor: using function: arm64_neon (27965 MB/sec) Feb 13 19:16:55.329802 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:16:55.340413 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:16:55.350138 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:16:55.364199 systemd-udevd[466]: Using default interface naming scheme 'v255'. Feb 13 19:16:55.367778 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:16:55.378138 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:16:55.388723 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation Feb 13 19:16:55.412860 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:16:55.421153 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:16:55.459971 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:16:55.470133 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:16:55.483275 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:16:55.484551 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:16:55.485959 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:16:55.487905 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:16:55.497125 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:16:55.506968 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:16:55.521014 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Feb 13 19:16:55.526295 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 19:16:55.526582 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:16:55.526596 kernel: GPT:9289727 != 19775487 Feb 13 19:16:55.526605 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:16:55.526614 kernel: GPT:9289727 != 19775487 Feb 13 19:16:55.526622 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 19:16:55.526631 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:16:55.523241 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:16:55.523345 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:16:55.524414 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:16:55.525170 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:16:55.525345 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:16:55.526325 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:16:55.540250 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:16:55.552022 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (514) Feb 13 19:16:55.553030 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:16:55.555645 kernel: BTRFS: device fsid 4fff035f-dd55-45d8-9bb7-2a61f21b22d5 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (517) Feb 13 19:16:55.562124 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 19:16:55.573856 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 19:16:55.585190 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:16:55.591099 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 19:16:55.591954 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 19:16:55.601153 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Feb 13 19:16:55.602631 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:16:55.607350 disk-uuid[555]: Primary Header is updated. Feb 13 19:16:55.607350 disk-uuid[555]: Secondary Entries is updated. Feb 13 19:16:55.607350 disk-uuid[555]: Secondary Header is updated. Feb 13 19:16:55.611019 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:16:55.626326 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:16:56.619014 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:16:56.620155 disk-uuid[556]: The operation has completed successfully. Feb 13 19:16:56.643235 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:16:56.643342 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:16:56.677141 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:16:56.679726 sh[575]: Success Feb 13 19:16:56.693028 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 19:16:56.719613 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:16:56.727039 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:16:56.729022 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 19:16:56.738647 kernel: BTRFS info (device dm-0): first mount of filesystem 4fff035f-dd55-45d8-9bb7-2a61f21b22d5 Feb 13 19:16:56.738676 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:16:56.738687 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:16:56.738696 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:16:56.740012 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:16:56.743543 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:16:56.744342 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:16:56.763205 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:16:56.764488 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:16:56.773323 kernel: BTRFS info (device vda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d Feb 13 19:16:56.773363 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:16:56.773381 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:16:56.775730 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:16:56.782015 kernel: BTRFS info (device vda6): last unmount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d Feb 13 19:16:56.787457 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:16:56.792150 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 19:16:56.852199 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:16:56.861144 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:16:56.872198 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Feb 13 19:16:56.888835 ignition[669]: Ignition 2.20.0 Feb 13 19:16:56.888844 ignition[669]: Stage: fetch-offline Feb 13 19:16:56.888877 ignition[669]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:16:56.888886 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:16:56.889111 ignition[669]: parsed url from cmdline: "" Feb 13 19:16:56.889114 ignition[669]: no config URL provided Feb 13 19:16:56.889119 ignition[669]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:16:56.889126 ignition[669]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:16:56.889146 ignition[669]: op(1): [started] loading QEMU firmware config module Feb 13 19:16:56.889151 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 19:16:56.894313 ignition[669]: op(1): [finished] loading QEMU firmware config module Feb 13 19:16:56.899758 systemd-networkd[767]: lo: Link UP Feb 13 19:16:56.899762 systemd-networkd[767]: lo: Gained carrier Feb 13 19:16:56.900569 systemd-networkd[767]: Enumeration completed Feb 13 19:16:56.900718 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:16:56.902154 systemd[1]: Reached target network.target - Network. Feb 13 19:16:56.903674 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:16:56.903677 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:16:56.904371 systemd-networkd[767]: eth0: Link UP Feb 13 19:16:56.904374 systemd-networkd[767]: eth0: Gained carrier Feb 13 19:16:56.904380 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:16:56.919043 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.108/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:16:56.940444 ignition[669]: parsing config with SHA512: b6861ff1e264aa4c9ee5316f97fa204527cd78528cd5c8d73400d956d1cc49b98154c150a944d386d26c89c24f33fc4e0b1e921c5546d730054e164bbb290b8a Feb 13 19:16:56.946622 unknown[669]: fetched base config from "system" Feb 13 19:16:56.946632 unknown[669]: fetched user config from "qemu" Feb 13 19:16:56.947087 ignition[669]: fetch-offline: fetch-offline passed Feb 13 19:16:56.948768 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:16:56.947158 ignition[669]: Ignition finished successfully Feb 13 19:16:56.950027 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 19:16:56.963135 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 19:16:56.974843 ignition[774]: Ignition 2.20.0 Feb 13 19:16:56.974851 ignition[774]: Stage: kargs Feb 13 19:16:56.975028 ignition[774]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:16:56.975037 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:16:56.975891 ignition[774]: kargs: kargs passed Feb 13 19:16:56.978511 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 19:16:56.975930 ignition[774]: Ignition finished successfully Feb 13 19:16:56.991188 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Feb 13 19:16:57.000017 ignition[783]: Ignition 2.20.0 Feb 13 19:16:57.000029 ignition[783]: Stage: disks Feb 13 19:16:57.000187 ignition[783]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:16:57.000196 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:16:57.001011 ignition[783]: disks: disks passed Feb 13 19:16:57.001050 ignition[783]: Ignition finished successfully Feb 13 19:16:57.003054 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:16:57.003979 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 19:16:57.005153 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:16:57.006702 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:16:57.008012 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:16:57.009475 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:16:57.023121 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:16:57.033981 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 19:16:57.037396 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:16:57.054099 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 19:16:57.092012 kernel: EXT4-fs (vda9): mounted filesystem 24882d04-b1a5-4a27-95f1-925956e69b18 r/w with ordered data mode. Quota mode: none. Feb 13 19:16:57.092664 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:16:57.093636 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:16:57.109074 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:16:57.110513 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:16:57.111727 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 19:16:57.111763 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:16:57.116780 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (802) Feb 13 19:16:57.111784 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:16:57.120118 kernel: BTRFS info (device vda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d Feb 13 19:16:57.120138 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:16:57.120148 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:16:57.115734 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:16:57.122096 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:16:57.119557 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 19:16:57.123089 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:16:57.158280 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:16:57.161300 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:16:57.164626 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:16:57.167220 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:16:57.235227 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Feb 13 19:16:57.255125 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:16:57.257289 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:16:57.262011 kernel: BTRFS info (device vda6): last unmount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d Feb 13 19:16:57.273848 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 19:16:57.278360 ignition[915]: INFO : Ignition 2.20.0 Feb 13 19:16:57.278360 ignition[915]: INFO : Stage: mount Feb 13 19:16:57.279524 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:16:57.279524 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:16:57.279524 ignition[915]: INFO : mount: mount passed Feb 13 19:16:57.279524 ignition[915]: INFO : Ignition finished successfully Feb 13 19:16:57.280659 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:16:57.289117 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:16:57.872985 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:16:57.882218 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:16:57.888298 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (928) Feb 13 19:16:57.888329 kernel: BTRFS info (device vda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d Feb 13 19:16:57.888340 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:16:57.889465 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:16:57.892024 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:16:57.892319 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:16:57.906804 ignition[945]: INFO : Ignition 2.20.0 Feb 13 19:16:57.906804 ignition[945]: INFO : Stage: files Feb 13 19:16:57.908005 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:16:57.908005 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:16:57.908005 ignition[945]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:16:57.910530 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:16:57.910530 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:16:57.913238 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:16:57.914245 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:16:57.914245 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:16:57.913681 unknown[945]: wrote ssh authorized keys file for user: core Feb 13 19:16:57.916973 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 19:16:57.916973 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 13 19:16:57.969061 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 19:16:58.280076 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 19:16:58.280076 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/opt/bin/cilium.tar.gz" Feb 13 19:16:58.282811 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Feb 13 19:16:58.506118 systemd-networkd[767]: eth0: Gained IPv6LL Feb 13 19:16:58.677827 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 19:16:58.744476 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 19:16:58.744476 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:16:58.747274 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:16:58.747274 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:16:58.747274 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:16:58.747274 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:16:58.747274 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:16:58.747274 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:16:58.747274 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:16:58.747274 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:16:58.747274 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:16:58.747274 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 19:16:58.747274 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 19:16:58.747274 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 19:16:58.747274 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Feb 13 19:16:58.918821 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 19:16:59.172645 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 19:16:59.172645 ignition[945]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Feb 13 19:16:59.175584 ignition[945]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:16:59.175584 ignition[945]: INFO : files: 
op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:16:59.175584 ignition[945]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Feb 13 19:16:59.175584 ignition[945]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Feb 13 19:16:59.175584 ignition[945]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:16:59.175584 ignition[945]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:16:59.175584 ignition[945]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Feb 13 19:16:59.175584 ignition[945]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 19:16:59.197791 ignition[945]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:16:59.200908 ignition[945]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:16:59.202031 ignition[945]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 19:16:59.202031 ignition[945]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Feb 13 19:16:59.202031 ignition[945]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 19:16:59.202031 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:16:59.202031 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:16:59.202031 ignition[945]: INFO : files: files passed Feb 13 19:16:59.202031 ignition[945]: INFO : Ignition finished successfully Feb 13 19:16:59.203507 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:16:59.212152 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:16:59.213718 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:16:59.218845 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:16:59.218929 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:16:59.224355 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 19:16:59.227409 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:16:59.227409 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:16:59.230464 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:16:59.232020 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:16:59.234023 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:16:59.250162 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:16:59.270939 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Feb 13 19:16:59.271092 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:16:59.273240 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:16:59.274350 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:16:59.276083 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:16:59.276980 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:16:59.292165 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:16:59.295243 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:16:59.304689 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:16:59.305686 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:16:59.307541 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:16:59.309283 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:16:59.309398 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:16:59.311718 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:16:59.312542 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:16:59.315523 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:16:59.317939 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:16:59.319802 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:16:59.321520 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:16:59.323312 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:16:59.324917 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:16:59.326287 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:16:59.327749 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:16:59.328885 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:16:59.329014 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:16:59.330838 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:16:59.332236 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:16:59.333644 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:16:59.337047 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:16:59.337960 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:16:59.338090 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:16:59.340205 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:16:59.340317 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:16:59.341889 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:16:59.343147 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:16:59.347084 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:16:59.348432 systemd[1]: Stopped target slices.target - Slice Units. 
Feb 13 19:16:59.350462 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:16:59.352827 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:16:59.352980 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:16:59.355216 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:16:59.355353 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:16:59.358852 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:16:59.358962 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:16:59.362377 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:16:59.362480 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:16:59.378204 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:16:59.378932 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:16:59.379091 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:16:59.381536 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:16:59.382928 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:16:59.383100 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:16:59.384977 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:16:59.385185 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:16:59.388805 ignition[999]: INFO : Ignition 2.20.0 Feb 13 19:16:59.388805 ignition[999]: INFO : Stage: umount Feb 13 19:16:59.388805 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:16:59.388805 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:16:59.391936 ignition[999]: INFO : umount: umount passed Feb 13 19:16:59.391936 ignition[999]: INFO : Ignition finished successfully Feb 13 19:16:59.391648 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:16:59.391761 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:16:59.393286 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:16:59.393355 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:16:59.396921 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:16:59.397297 systemd[1]: Stopped target network.target - Network. Feb 13 19:16:59.400312 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:16:59.400371 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:16:59.402165 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:16:59.402215 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:16:59.403741 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:16:59.403784 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:16:59.405703 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:16:59.405748 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:16:59.407756 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:16:59.409184 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:16:59.412711 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Feb 13 19:16:59.412818 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:16:59.415803 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Feb 13 19:16:59.416082 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:16:59.416124 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:16:59.421049 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Feb 13 19:16:59.423597 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:16:59.423699 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:16:59.427082 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Feb 13 19:16:59.427264 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:16:59.427292 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:16:59.439105 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:16:59.440029 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:16:59.440108 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:16:59.441924 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:16:59.441969 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:16:59.444412 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:16:59.444455 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:16:59.446108 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:16:59.450034 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 19:16:59.456258 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:16:59.456390 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:16:59.458575 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:16:59.458663 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:16:59.460243 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:16:59.460309 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:16:59.461850 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:16:59.461881 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:16:59.463533 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:16:59.463583 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:16:59.465438 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:16:59.465482 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:16:59.467193 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:16:59.467241 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:16:59.483188 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:16:59.484285 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:16:59.484349 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Feb 13 19:16:59.487106 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:16:59.487149 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:16:59.490249 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:16:59.490353 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:16:59.491941 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:16:59.492034 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:16:59.494282 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:16:59.495791 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:16:59.495853 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:16:59.498263 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:16:59.507212 systemd[1]: Switching root. Feb 13 19:16:59.532733 systemd-journald[239]: Journal stopped Feb 13 19:17:00.305326 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Feb 13 19:17:00.305386 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:17:00.305399 kernel: SELinux: policy capability open_perms=1 Feb 13 19:17:00.305409 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:17:00.305419 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:17:00.305429 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:17:00.305439 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:17:00.305449 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:17:00.305463 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:17:00.305473 kernel: audit: type=1403 audit(1739474219.711:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:17:00.305484 systemd[1]: Successfully loaded SELinux policy in 54.529ms. Feb 13 19:17:00.305509 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.495ms. Feb 13 19:17:00.305521 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 19:17:00.305532 systemd[1]: Detected virtualization kvm. Feb 13 19:17:00.305543 systemd[1]: Detected architecture arm64. Feb 13 19:17:00.305554 systemd[1]: Detected first boot. Feb 13 19:17:00.305564 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:17:00.305577 zram_generator::config[1045]: No configuration found. Feb 13 19:17:00.305590 kernel: NET: Registered PF_VSOCK protocol family Feb 13 19:17:00.305601 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:17:00.305612 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Feb 13 19:17:00.305623 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:17:00.305634 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:17:00.305645 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:17:00.305656 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:17:00.305669 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
Feb 13 19:17:00.305681 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:17:00.305692 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:17:00.305704 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:17:00.305716 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:17:00.305728 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:17:00.305738 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:17:00.305749 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:17:00.305760 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:17:00.305773 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:17:00.305784 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:17:00.305795 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:17:00.305805 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:17:00.305816 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 19:17:00.305828 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:17:00.305843 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:17:00.305861 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:17:00.305872 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:17:00.305883 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:17:00.305894 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:17:00.305906 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:17:00.305917 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:17:00.305928 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:17:00.305939 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:17:00.305950 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:17:00.305963 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Feb 13 19:17:00.305975 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:17:00.305985 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:17:00.306004 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:17:00.306016 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:17:00.306027 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:17:00.306038 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:17:00.306049 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:17:00.306065 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:17:00.306076 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:17:00.306090 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Feb 13 19:17:00.306102 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:17:00.306113 systemd[1]: Reached target machines.target - Containers. Feb 13 19:17:00.306124 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:17:00.306136 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:17:00.306147 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:17:00.306172 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:17:00.306193 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:17:00.306205 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:17:00.306216 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:17:00.306227 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:17:00.306237 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:17:00.306249 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:17:00.306260 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:17:00.306270 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:17:00.306281 kernel: fuse: init (API version 7.39) Feb 13 19:17:00.306293 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:17:00.306305 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:17:00.306316 kernel: ACPI: bus type drm_connector registered Feb 13 19:17:00.306327 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:17:00.306338 kernel: loop: module loaded Feb 13 19:17:00.306349 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:17:00.306360 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:17:00.306370 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:17:00.306381 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:17:00.306394 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Feb 13 19:17:00.306405 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:17:00.306415 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:17:00.306434 systemd[1]: Stopped verity-setup.service. Feb 13 19:17:00.306444 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:17:00.306457 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:17:00.306469 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:17:00.306500 systemd-journald[1121]: Collecting audit messages is disabled. Feb 13 19:17:00.306523 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:17:00.306576 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Feb 13 19:17:00.306587 systemd-journald[1121]: Journal started Feb 13 19:17:00.306612 systemd-journald[1121]: Runtime Journal (/run/log/journal/7db411dbabd645878d5bbde769c25dc6) is 5.9M, max 47.3M, 41.4M free. Feb 13 19:17:00.101365 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:17:00.117850 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:17:00.118264 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:17:00.309465 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:17:00.310088 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:17:00.311233 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:17:00.312497 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:17:00.313830 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:17:00.314013 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:17:00.315245 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:17:00.315411 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:17:00.316729 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:17:00.316900 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:17:00.318207 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:17:00.318377 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:17:00.319777 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:17:00.319958 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:17:00.321165 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:17:00.321334 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:17:00.322590 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:17:00.323854 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:17:00.325247 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:17:00.326683 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Feb 13 19:17:00.339982 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:17:00.348115 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:17:00.350025 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:17:00.350942 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:17:00.350975 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:17:00.352829 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Feb 13 19:17:00.354875 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:17:00.356865 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:17:00.357876 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:17:00.358970 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Feb 13 19:17:00.362185 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:17:00.363246 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:17:00.365244 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:17:00.366242 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:17:00.367449 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:17:00.372261 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:17:00.372403 systemd-journald[1121]: Time spent on flushing to /var/log/journal/7db411dbabd645878d5bbde769c25dc6 is 11.549ms for 870 entries. Feb 13 19:17:00.372403 systemd-journald[1121]: System Journal (/var/log/journal/7db411dbabd645878d5bbde769c25dc6) is 8M, max 195.6M, 187.6M free. Feb 13 19:17:00.389143 systemd-journald[1121]: Received client request to flush runtime journal. Feb 13 19:17:00.380188 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:17:00.385026 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:17:00.386265 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:17:00.388423 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:17:00.389777 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:17:00.391498 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:17:00.393533 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:17:00.394553 kernel: loop0: detected capacity change from 0 to 123192 Feb 13 19:17:00.401929 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:17:00.409182 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Feb 13 19:17:00.411063 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:17:00.415554 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:17:00.419634 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:17:00.433034 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Feb 13 19:17:00.434510 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:17:00.437211 kernel: loop1: detected capacity change from 0 to 113512 Feb 13 19:17:00.438632 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:17:00.443301 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 19:17:00.464312 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. Feb 13 19:17:00.464329 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. Feb 13 19:17:00.469097 kernel: loop2: detected capacity change from 0 to 189592 Feb 13 19:17:00.470893 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Feb 13 19:17:00.499018 kernel: loop3: detected capacity change from 0 to 123192 Feb 13 19:17:00.504029 kernel: loop4: detected capacity change from 0 to 113512 Feb 13 19:17:00.508207 kernel: loop5: detected capacity change from 0 to 189592 Feb 13 19:17:00.512251 (sd-merge)[1189]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 19:17:00.512891 (sd-merge)[1189]: Merged extensions into '/usr'. Feb 13 19:17:00.516832 systemd[1]: Reload requested from client PID 1164 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:17:00.516868 systemd[1]: Reloading... Feb 13 19:17:00.576464 zram_generator::config[1217]: No configuration found. Feb 13 19:17:00.645086 ldconfig[1159]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:17:00.676563 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:17:00.727120 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:17:00.727407 systemd[1]: Reloading finished in 210 ms. Feb 13 19:17:00.745788 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:17:00.747162 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:17:00.761309 systemd[1]: Starting ensure-sysext.service... Feb 13 19:17:00.763024 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:17:00.773982 systemd[1]: Reload requested from client PID 1251 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:17:00.774005 systemd[1]: Reloading... Feb 13 19:17:00.780141 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:17:00.780629 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:17:00.781416 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:17:00.781713 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. Feb 13 19:17:00.781829 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. Feb 13 19:17:00.784441 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:17:00.784548 systemd-tmpfiles[1252]: Skipping /boot Feb 13 19:17:00.792791 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:17:00.792897 systemd-tmpfiles[1252]: Skipping /boot Feb 13 19:17:00.822011 zram_generator::config[1281]: No configuration found. Feb 13 19:17:00.902969 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:17:00.953745 systemd[1]: Reloading finished in 179 ms. Feb 13 19:17:00.972024 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:17:00.989031 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:17:00.996871 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:17:00.999417 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Feb 13 19:17:01.001614 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:17:01.005347 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:17:01.009369 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:17:01.014348 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:17:01.020891 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:17:01.028134 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:17:01.030376 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:17:01.035256 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:17:01.038262 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:17:01.038400 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:17:01.041139 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:17:01.042597 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:17:01.044053 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:17:01.044247 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:17:01.045701 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:17:01.045865 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:17:01.047269 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:17:01.047437 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:17:01.056517 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:17:01.066340 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:17:01.068249 systemd-udevd[1322]: Using default interface naming scheme 'v255'. Feb 13 19:17:01.071371 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:17:01.074701 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:17:01.075790 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:17:01.075965 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:17:01.079308 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:17:01.080967 augenrules[1354]: No rules Feb 13 19:17:01.082356 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:17:01.083199 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Feb 13 19:17:01.085159 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:17:01.087072 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:17:01.088710 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:17:01.090919 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:17:01.091238 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:17:01.092663 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:17:01.092814 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:17:01.094396 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:17:01.094536 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:17:01.096030 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:17:01.098916 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:17:01.111025 systemd[1]: Finished ensure-sysext.service. Feb 13 19:17:01.123155 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:17:01.123966 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:17:01.129256 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:17:01.132806 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:17:01.139183 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:17:01.143487 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:17:01.147267 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:17:01.147319 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:17:01.151207 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:17:01.159198 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:17:01.160291 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:17:01.161777 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:17:01.165072 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1373) Feb 13 19:17:01.165457 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:17:01.165607 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:17:01.167067 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:17:01.167227 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:17:01.171377 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:17:01.171540 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:17:01.173050 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:17:01.173216 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Feb 13 19:17:01.182957 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 19:17:01.185172 augenrules[1386]: /sbin/augenrules: No change Feb 13 19:17:01.187543 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:17:01.187598 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:17:01.198749 augenrules[1423]: No rules Feb 13 19:17:01.205448 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:17:01.205855 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:17:01.227272 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:17:01.248311 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:17:01.250669 systemd-resolved[1320]: Positive Trust Anchors: Feb 13 19:17:01.250687 systemd-resolved[1320]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:17:01.250719 systemd-resolved[1320]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:17:01.261199 systemd-resolved[1320]: Defaulting to hostname 'linux'. Feb 13 19:17:01.271176 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:17:01.272299 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:17:01.275548 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:17:01.289060 systemd-networkd[1402]: lo: Link UP Feb 13 19:17:01.289067 systemd-networkd[1402]: lo: Gained carrier Feb 13 19:17:01.289890 systemd-networkd[1402]: Enumeration completed Feb 13 19:17:01.291110 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:17:01.292306 systemd[1]: Reached target network.target - Network. Feb 13 19:17:01.295260 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:17:01.295269 systemd-networkd[1402]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:17:01.295756 systemd-networkd[1402]: eth0: Link UP Feb 13 19:17:01.295759 systemd-networkd[1402]: eth0: Gained carrier Feb 13 19:17:01.295771 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:17:01.299280 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 19:17:01.301268 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:17:01.302395 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Feb 13 19:17:01.305965 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:17:01.308031 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:17:01.318923 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:17:01.321123 systemd-networkd[1402]: eth0: DHCPv4 address 10.0.0.108/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:17:01.321816 systemd-timesyncd[1405]: Network configuration changed, trying to establish connection. Feb 13 19:17:01.322732 systemd-timesyncd[1405]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 19:17:01.322778 systemd-timesyncd[1405]: Initial clock synchronization to Thu 2025-02-13 19:17:01.469122 UTC. Feb 13 19:17:01.331236 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:17:01.332867 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 19:17:01.343913 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:17:01.350425 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:17:01.388038 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:17:01.389285 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:17:01.390263 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:17:01.391194 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:17:01.392136 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:17:01.393245 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:17:01.394371 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:17:01.395382 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:17:01.396347 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:17:01.396376 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:17:01.397116 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:17:01.398545 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:17:01.400811 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:17:01.403806 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 19:17:01.405115 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 19:17:01.406155 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 19:17:01.409064 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:17:01.410548 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 19:17:01.412557 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:17:01.413875 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:17:01.414887 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:17:01.415750 systemd[1]: Reached target basic.target - Basic System. 
Feb 13 19:17:01.416562 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:17:01.416595 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:17:01.417516 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:17:01.419419 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:17:01.421055 lvm[1452]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:17:01.424172 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:17:01.427112 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:17:01.431235 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:17:01.432325 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:17:01.433189 jq[1455]: false Feb 13 19:17:01.434103 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:17:01.437236 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:17:01.441136 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:17:01.446319 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:17:01.448910 extend-filesystems[1456]: Found loop3 Feb 13 19:17:01.448910 extend-filesystems[1456]: Found loop4 Feb 13 19:17:01.448910 extend-filesystems[1456]: Found loop5 Feb 13 19:17:01.448910 extend-filesystems[1456]: Found vda Feb 13 19:17:01.448910 extend-filesystems[1456]: Found vda1 Feb 13 19:17:01.448910 extend-filesystems[1456]: Found vda2 Feb 13 19:17:01.448910 extend-filesystems[1456]: Found vda3 Feb 13 19:17:01.448910 extend-filesystems[1456]: Found usr Feb 13 19:17:01.448910 extend-filesystems[1456]: Found vda4 Feb 13 19:17:01.448910 extend-filesystems[1456]: Found vda6 Feb 13 19:17:01.448910 extend-filesystems[1456]: Found vda7 Feb 13 19:17:01.448910 extend-filesystems[1456]: Found vda9 Feb 13 19:17:01.448910 extend-filesystems[1456]: Checking size of /dev/vda9 Feb 13 19:17:01.449686 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:17:01.456744 dbus-daemon[1454]: [system] SELinux support is enabled Feb 13 19:17:01.450210 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:17:01.451324 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:17:01.458175 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:17:01.460108 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:17:01.464025 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:17:01.467373 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:17:01.467556 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:17:01.471749 jq[1474]: true Feb 13 19:17:01.472209 extend-filesystems[1456]: Resized partition /dev/vda9 Feb 13 19:17:01.473347 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Feb 13 19:17:01.473534 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:17:01.477513 extend-filesystems[1479]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:17:01.481874 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:17:01.482002 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:17:01.483178 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:17:01.485968 jq[1480]: true Feb 13 19:17:01.498346 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:17:01.498409 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:17:01.502816 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:17:01.502844 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:17:01.507873 tar[1477]: linux-arm64/helm Feb 13 19:17:01.521721 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1377) Feb 13 19:17:01.521756 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:17:01.511082 (ntainerd)[1487]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:17:01.523643 extend-filesystems[1479]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:17:01.523643 extend-filesystems[1479]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:17:01.523643 extend-filesystems[1479]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:17:01.523461 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:17:01.530751 extend-filesystems[1456]: Resized filesystem in /dev/vda9 Feb 13 19:17:01.525511 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:17:01.545139 update_engine[1468]: I20250213 19:17:01.543918 1468 main.cc:92] Flatcar Update Engine starting Feb 13 19:17:01.549533 systemd-logind[1464]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:17:01.551392 systemd-logind[1464]: New seat seat0. Feb 13 19:17:01.552065 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:17:01.553255 update_engine[1468]: I20250213 19:17:01.553200 1468 update_check_scheduler.cc:74] Next update check in 4m40s Feb 13 19:17:01.554010 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:17:01.566832 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:17:01.584274 bash[1509]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:17:01.586357 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:17:01.588517 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Feb 13 19:17:01.603134 locksmithd[1510]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:17:01.733453 containerd[1487]: time="2025-02-13T19:17:01.733319000Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:17:01.764143 containerd[1487]: time="2025-02-13T19:17:01.764090040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:17:01.765742 containerd[1487]: time="2025-02-13T19:17:01.765680480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:17:01.765742 containerd[1487]: time="2025-02-13T19:17:01.765717960Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:17:01.765742 containerd[1487]: time="2025-02-13T19:17:01.765734280Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:17:01.765907 containerd[1487]: time="2025-02-13T19:17:01.765890520Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:17:01.765932 containerd[1487]: time="2025-02-13T19:17:01.765910800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:17:01.765980 containerd[1487]: time="2025-02-13T19:17:01.765967080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:17:01.766014 containerd[1487]: time="2025-02-13T19:17:01.765980520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:17:01.766222 containerd[1487]: time="2025-02-13T19:17:01.766195800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:17:01.766222 containerd[1487]: time="2025-02-13T19:17:01.766215840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:17:01.766275 containerd[1487]: time="2025-02-13T19:17:01.766230320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:17:01.766275 containerd[1487]: time="2025-02-13T19:17:01.766239440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:17:01.766437 containerd[1487]: time="2025-02-13T19:17:01.766308920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:17:01.766541 containerd[1487]: time="2025-02-13T19:17:01.766495080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:17:01.766641 containerd[1487]: time="2025-02-13T19:17:01.766612840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:17:01.766641 containerd[1487]: time="2025-02-13T19:17:01.766628280Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:17:01.766715 containerd[1487]: time="2025-02-13T19:17:01.766702360Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:17:01.766776 containerd[1487]: time="2025-02-13T19:17:01.766742280Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:17:01.769971 containerd[1487]: time="2025-02-13T19:17:01.769939480Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:17:01.770071 containerd[1487]: time="2025-02-13T19:17:01.769982480Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:17:01.770071 containerd[1487]: time="2025-02-13T19:17:01.770005960Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:17:01.770071 containerd[1487]: time="2025-02-13T19:17:01.770022320Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:17:01.770071 containerd[1487]: time="2025-02-13T19:17:01.770036200Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:17:01.770225 containerd[1487]: time="2025-02-13T19:17:01.770169400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:17:01.770415 containerd[1487]: time="2025-02-13T19:17:01.770398920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:17:01.770511 containerd[1487]: time="2025-02-13T19:17:01.770494720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:17:01.770547 containerd[1487]: time="2025-02-13T19:17:01.770513760Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:17:01.770547 containerd[1487]: time="2025-02-13T19:17:01.770533000Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:17:01.770583 containerd[1487]: time="2025-02-13T19:17:01.770561080Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:17:01.770602 containerd[1487]: time="2025-02-13T19:17:01.770582760Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:17:01.770602 containerd[1487]: time="2025-02-13T19:17:01.770594720Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:17:01.770641 containerd[1487]: time="2025-02-13T19:17:01.770606640Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:17:01.770641 containerd[1487]: time="2025-02-13T19:17:01.770620240Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Feb 13 19:17:01.770641 containerd[1487]: time="2025-02-13T19:17:01.770633960Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:17:01.770708 containerd[1487]: time="2025-02-13T19:17:01.770645400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:17:01.770708 containerd[1487]: time="2025-02-13T19:17:01.770659120Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:17:01.770708 containerd[1487]: time="2025-02-13T19:17:01.770678280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:17:01.770708 containerd[1487]: time="2025-02-13T19:17:01.770691440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:17:01.770708 containerd[1487]: time="2025-02-13T19:17:01.770704160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:17:01.770795 containerd[1487]: time="2025-02-13T19:17:01.770716120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:17:01.770795 containerd[1487]: time="2025-02-13T19:17:01.770727760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:17:01.770795 containerd[1487]: time="2025-02-13T19:17:01.770739880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:17:01.770795 containerd[1487]: time="2025-02-13T19:17:01.770751160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:17:01.770795 containerd[1487]: time="2025-02-13T19:17:01.770763920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:17:01.770795 containerd[1487]: time="2025-02-13T19:17:01.770776240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:17:01.770795 containerd[1487]: time="2025-02-13T19:17:01.770789760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:17:01.770913 containerd[1487]: time="2025-02-13T19:17:01.770801000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:17:01.770913 containerd[1487]: time="2025-02-13T19:17:01.770812080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:17:01.770913 containerd[1487]: time="2025-02-13T19:17:01.770824160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:17:01.770913 containerd[1487]: time="2025-02-13T19:17:01.770838080Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:17:01.770913 containerd[1487]: time="2025-02-13T19:17:01.770857240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:17:01.770913 containerd[1487]: time="2025-02-13T19:17:01.770873640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 13 19:17:01.770913 containerd[1487]: time="2025-02-13T19:17:01.770883960Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:17:01.773076 containerd[1487]: time="2025-02-13T19:17:01.772004080Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:17:01.773076 containerd[1487]: time="2025-02-13T19:17:01.772177560Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:17:01.773076 containerd[1487]: time="2025-02-13T19:17:01.772201840Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:17:01.773076 containerd[1487]: time="2025-02-13T19:17:01.772264920Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:17:01.773076 containerd[1487]: time="2025-02-13T19:17:01.772276160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:17:01.773076 containerd[1487]: time="2025-02-13T19:17:01.772296400Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:17:01.773076 containerd[1487]: time="2025-02-13T19:17:01.772310200Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:17:01.773076 containerd[1487]: time="2025-02-13T19:17:01.772320800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:17:01.773276 containerd[1487]: time="2025-02-13T19:17:01.772705560Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:17:01.773276 containerd[1487]: time="2025-02-13T19:17:01.772758760Z" level=info msg="Connect containerd service" Feb 13 19:17:01.773276 containerd[1487]: time="2025-02-13T19:17:01.772800480Z" level=info msg="using legacy CRI server" Feb 13 19:17:01.773276 containerd[1487]: time="2025-02-13T19:17:01.772812600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:17:01.773578 containerd[1487]: time="2025-02-13T19:17:01.773543400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:17:01.775267 containerd[1487]: time="2025-02-13T19:17:01.775231200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:17:01.775556 containerd[1487]: time="2025-02-13T19:17:01.775507760Z" level=info msg="Start subscribing containerd event" Feb 13 19:17:01.775583 containerd[1487]: time="2025-02-13T19:17:01.775567680Z" level=info msg="Start recovering state" Feb 13 19:17:01.775644 containerd[1487]: time="2025-02-13T19:17:01.775628840Z" level=info msg="Start event monitor" Feb 13 19:17:01.775644 containerd[1487]: time="2025-02-13T19:17:01.775642200Z" level=info msg="Start snapshots syncer" Feb 13 19:17:01.775696 containerd[1487]: time="2025-02-13T19:17:01.775651320Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:17:01.775696 containerd[1487]: time="2025-02-13T19:17:01.775657680Z" level=info msg="Start streaming server" Feb 13 19:17:01.779007 containerd[1487]: time="2025-02-13T19:17:01.776481440Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:17:01.779007 containerd[1487]: time="2025-02-13T19:17:01.776548120Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:17:01.779007 containerd[1487]: time="2025-02-13T19:17:01.776601000Z" level=info msg="containerd successfully booted in 0.045611s" Feb 13 19:17:01.776704 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:17:01.862816 tar[1477]: linux-arm64/LICENSE Feb 13 19:17:01.862913 tar[1477]: linux-arm64/README.md Feb 13 19:17:01.876767 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:17:02.642942 sshd_keygen[1472]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:17:02.661732 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:17:02.673272 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:17:02.678312 systemd[1]: issuegen.service: Deactivated successfully. 
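The "failed to load cni during init" error above is expected at this point: no network plugin has written a config yet, so containerd finds nothing in /etc/cni/net.d. A minimal sketch of the same emptiness check, for illustration only (the directory path is taken from the log; the extension list is an assumption, not containerd's actual loader):

    # Sketch: reproduce containerd's "no network config found" condition.
    # The directory comes from the log; the extension list is an assumption.
    import glob
    import os

    conf_dir = "/etc/cni/net.d"
    configs = sorted(
        glob.glob(os.path.join(conf_dir, "*.conf"))
        + glob.glob(os.path.join(conf_dir, "*.conflist"))
        + glob.glob(os.path.join(conf_dir, "*.json"))
    )
    if not configs:
        print("cni config load failed: no network config found in", conf_dir)
    else:
        print("CNI configs present:", configs)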
Feb 13 19:17:02.678561 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:17:02.681905 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:17:02.696112 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:17:02.698710 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:17:02.700796 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 19:17:02.701920 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:17:03.053347 systemd-networkd[1402]: eth0: Gained IPv6LL Feb 13 19:17:03.056504 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:17:03.058349 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:17:03.076300 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:17:03.078774 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:17:03.080864 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:17:03.097365 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:17:03.097615 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:17:03.099339 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:17:03.110111 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:17:03.579456 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:17:03.581370 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:17:03.583611 (kubelet)[1566]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:17:03.584348 systemd[1]: Startup finished in 534ms (kernel) + 4.996s (initrd) + 3.930s (userspace) = 9.461s. Feb 13 19:17:04.008698 kubelet[1566]: E0213 19:17:04.008577 1566 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:17:04.011287 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:17:04.011432 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:17:04.011760 systemd[1]: kubelet.service: Consumed 789ms CPU time, 234.6M memory peak. Feb 13 19:17:07.396592 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:17:07.411667 systemd[1]: Started sshd@0-10.0.0.108:22-10.0.0.1:55858.service - OpenSSH per-connection server daemon (10.0.0.1:55858). Feb 13 19:17:07.471890 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 55858 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:17:07.473743 sshd-session[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:17:07.484072 systemd-logind[1464]: New session 1 of user core. Feb 13 19:17:07.485041 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:17:07.495629 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:17:07.509431 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Feb 13 19:17:07.520370 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:17:07.522766 (systemd)[1583]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:17:07.524893 systemd-logind[1464]: New session c1 of user core. Feb 13 19:17:07.626329 systemd[1583]: Queued start job for default target default.target. Feb 13 19:17:07.635902 systemd[1583]: Created slice app.slice - User Application Slice. Feb 13 19:17:07.635922 systemd[1583]: Reached target paths.target - Paths. Feb 13 19:17:07.635959 systemd[1583]: Reached target timers.target - Timers. Feb 13 19:17:07.637174 systemd[1583]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:17:07.645738 systemd[1583]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:17:07.645797 systemd[1583]: Reached target sockets.target - Sockets. Feb 13 19:17:07.645832 systemd[1583]: Reached target basic.target - Basic System. Feb 13 19:17:07.645860 systemd[1583]: Reached target default.target - Main User Target. Feb 13 19:17:07.646053 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:17:07.646468 systemd[1583]: Startup finished in 116ms. Feb 13 19:17:07.647493 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:17:07.715766 systemd[1]: Started sshd@1-10.0.0.108:22-10.0.0.1:55864.service - OpenSSH per-connection server daemon (10.0.0.1:55864). Feb 13 19:17:07.758806 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 55864 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:17:07.760325 sshd-session[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:17:07.764138 systemd-logind[1464]: New session 2 of user core. Feb 13 19:17:07.772183 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:17:07.823798 sshd[1596]: Connection closed by 10.0.0.1 port 55864 Feb 13 19:17:07.823688 sshd-session[1594]: pam_unix(sshd:session): session closed for user core Feb 13 19:17:07.839935 systemd[1]: sshd@1-10.0.0.108:22-10.0.0.1:55864.service: Deactivated successfully. Feb 13 19:17:07.841178 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:17:07.841864 systemd-logind[1464]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:17:07.852361 systemd[1]: Started sshd@2-10.0.0.108:22-10.0.0.1:55870.service - OpenSSH per-connection server daemon (10.0.0.1:55870). Feb 13 19:17:07.853260 systemd-logind[1464]: Removed session 2. Feb 13 19:17:07.890116 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 55870 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:17:07.891179 sshd-session[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:17:07.895392 systemd-logind[1464]: New session 3 of user core. Feb 13 19:17:07.903205 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:17:07.951083 sshd[1604]: Connection closed by 10.0.0.1 port 55870 Feb 13 19:17:07.950952 sshd-session[1601]: pam_unix(sshd:session): session closed for user core Feb 13 19:17:07.966105 systemd[1]: sshd@2-10.0.0.108:22-10.0.0.1:55870.service: Deactivated successfully. Feb 13 19:17:07.967590 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:17:07.968205 systemd-logind[1464]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:17:07.969894 systemd[1]: Started sshd@3-10.0.0.108:22-10.0.0.1:55882.service - OpenSSH per-connection server daemon (10.0.0.1:55882). 
Feb 13 19:17:07.970743 systemd-logind[1464]: Removed session 3. Feb 13 19:17:08.011518 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 55882 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:17:08.012565 sshd-session[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:17:08.016404 systemd-logind[1464]: New session 4 of user core. Feb 13 19:17:08.028126 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:17:08.078684 sshd[1612]: Connection closed by 10.0.0.1 port 55882 Feb 13 19:17:08.078984 sshd-session[1609]: pam_unix(sshd:session): session closed for user core Feb 13 19:17:08.092929 systemd[1]: sshd@3-10.0.0.108:22-10.0.0.1:55882.service: Deactivated successfully. Feb 13 19:17:08.094283 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:17:08.094860 systemd-logind[1464]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:17:08.104296 systemd[1]: Started sshd@4-10.0.0.108:22-10.0.0.1:55884.service - OpenSSH per-connection server daemon (10.0.0.1:55884). Feb 13 19:17:08.105322 systemd-logind[1464]: Removed session 4. Feb 13 19:17:08.141971 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 55884 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:17:08.143056 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:17:08.147254 systemd-logind[1464]: New session 5 of user core. Feb 13 19:17:08.160166 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:17:08.218748 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:17:08.220847 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:17:08.239920 sudo[1621]: pam_unix(sudo:session): session closed for user root Feb 13 19:17:08.241888 sshd[1620]: Connection closed by 10.0.0.1 port 55884 Feb 13 19:17:08.241686 sshd-session[1617]: pam_unix(sshd:session): session closed for user core Feb 13 19:17:08.256147 systemd[1]: sshd@4-10.0.0.108:22-10.0.0.1:55884.service: Deactivated successfully. Feb 13 19:17:08.257636 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:17:08.259552 systemd-logind[1464]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:17:08.261309 systemd[1]: Started sshd@5-10.0.0.108:22-10.0.0.1:55888.service - OpenSSH per-connection server daemon (10.0.0.1:55888). Feb 13 19:17:08.262022 systemd-logind[1464]: Removed session 5. Feb 13 19:17:08.303216 sshd[1626]: Accepted publickey for core from 10.0.0.1 port 55888 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:17:08.304426 sshd-session[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:17:08.309935 systemd-logind[1464]: New session 6 of user core. Feb 13 19:17:08.325183 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 19:17:08.377674 sudo[1631]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:17:08.377982 sudo[1631]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:17:08.381044 sudo[1631]: pam_unix(sudo:session): session closed for user root Feb 13 19:17:08.385642 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:17:08.385912 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:17:08.402413 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:17:08.425070 augenrules[1653]: No rules Feb 13 19:17:08.426226 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:17:08.427102 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:17:08.428377 sudo[1630]: pam_unix(sudo:session): session closed for user root Feb 13 19:17:08.430063 sshd[1629]: Connection closed by 10.0.0.1 port 55888 Feb 13 19:17:08.429962 sshd-session[1626]: pam_unix(sshd:session): session closed for user core Feb 13 19:17:08.444814 systemd[1]: sshd@5-10.0.0.108:22-10.0.0.1:55888.service: Deactivated successfully. Feb 13 19:17:08.446378 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:17:08.447156 systemd-logind[1464]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:17:08.462355 systemd[1]: Started sshd@6-10.0.0.108:22-10.0.0.1:55898.service - OpenSSH per-connection server daemon (10.0.0.1:55898). Feb 13 19:17:08.463280 systemd-logind[1464]: Removed session 6. Feb 13 19:17:08.501518 sshd[1661]: Accepted publickey for core from 10.0.0.1 port 55898 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:17:08.502618 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:17:08.506361 systemd-logind[1464]: New session 7 of user core. Feb 13 19:17:08.518216 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:17:08.569552 sudo[1665]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:17:08.569829 sudo[1665]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:17:08.916226 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:17:08.916347 (dockerd)[1686]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:17:09.175472 dockerd[1686]: time="2025-02-13T19:17:09.175333270Z" level=info msg="Starting up" Feb 13 19:17:09.329404 dockerd[1686]: time="2025-02-13T19:17:09.329361287Z" level=info msg="Loading containers: start." Feb 13 19:17:09.465044 kernel: Initializing XFRM netlink socket Feb 13 19:17:09.538621 systemd-networkd[1402]: docker0: Link UP Feb 13 19:17:09.576316 dockerd[1686]: time="2025-02-13T19:17:09.576269823Z" level=info msg="Loading containers: done." 
Feb 13 19:17:09.594421 dockerd[1686]: time="2025-02-13T19:17:09.594370945Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:17:09.594581 dockerd[1686]: time="2025-02-13T19:17:09.594464412Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 19:17:09.594680 dockerd[1686]: time="2025-02-13T19:17:09.594653159Z" level=info msg="Daemon has completed initialization" Feb 13 19:17:09.621790 dockerd[1686]: time="2025-02-13T19:17:09.621693850Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:17:09.622134 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:17:10.145231 containerd[1487]: time="2025-02-13T19:17:10.145126648Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 19:17:10.862380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4141692916.mount: Deactivated successfully. Feb 13 19:17:12.682955 containerd[1487]: time="2025-02-13T19:17:12.682886290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:12.683839 containerd[1487]: time="2025-02-13T19:17:12.683598785Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620377" Feb 13 19:17:12.684553 containerd[1487]: time="2025-02-13T19:17:12.684497091Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:12.687366 containerd[1487]: time="2025-02-13T19:17:12.687335094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:12.688586 containerd[1487]: time="2025-02-13T19:17:12.688558317Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 2.543389419s" Feb 13 19:17:12.688658 containerd[1487]: time="2025-02-13T19:17:12.688590832Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\"" Feb 13 19:17:12.689275 containerd[1487]: time="2025-02-13T19:17:12.689258593Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 19:17:14.232429 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:17:14.241177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:17:14.336193 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
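The overlay2 warning near the top of this stretch says the kernel was built with CONFIG_OVERLAY_FS_REDIRECT_DIR, which is why dockerd falls back from the native diff driver. One way to confirm that option on a running host, assuming the kernel exposes its config at /proc/config.gz (only true with CONFIG_IKCONFIG_PROC), is a quick sketch like:

    # Sketch: look for CONFIG_OVERLAY_FS_REDIRECT_DIR in the running kernel's
    # config; assumes /proc/config.gz exists (CONFIG_IKCONFIG_PROC kernels only).
    import gzip

    with gzip.open("/proc/config.gz", "rt") as f:
        hits = [line.strip() for line in f if "OVERLAY_FS_REDIRECT_DIR" in line]
    print(hits or ["option not listed in /proc/config.gz"])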
Feb 13 19:17:14.339605 (kubelet)[1945]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:17:14.376814 kubelet[1945]: E0213 19:17:14.376757 1945 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:17:14.379652 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:17:14.379796 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:17:14.380171 systemd[1]: kubelet.service: Consumed 127ms CPU time, 97.7M memory peak. Feb 13 19:17:14.818163 containerd[1487]: time="2025-02-13T19:17:14.818103865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:14.818564 containerd[1487]: time="2025-02-13T19:17:14.818515015Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471775" Feb 13 19:17:14.819274 containerd[1487]: time="2025-02-13T19:17:14.819243692Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:14.822207 containerd[1487]: time="2025-02-13T19:17:14.822176145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:14.824406 containerd[1487]: time="2025-02-13T19:17:14.824365308Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 2.135081404s" Feb 13 19:17:14.824406 containerd[1487]: time="2025-02-13T19:17:14.824399393Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\"" Feb 13 19:17:14.825109 containerd[1487]: time="2025-02-13T19:17:14.824945841Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 19:17:16.565429 containerd[1487]: time="2025-02-13T19:17:16.565379883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:16.566344 containerd[1487]: time="2025-02-13T19:17:16.566100471Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024542" Feb 13 19:17:16.567034 containerd[1487]: time="2025-02-13T19:17:16.566982675Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:16.569989 containerd[1487]: time="2025-02-13T19:17:16.569957449Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:16.571206 containerd[1487]: time="2025-02-13T19:17:16.571169983Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 1.746185447s" Feb 13 19:17:16.571206 containerd[1487]: time="2025-02-13T19:17:16.571202194Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\"" Feb 13 19:17:16.571648 containerd[1487]: time="2025-02-13T19:17:16.571624623Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 19:17:17.475886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount689571012.mount: Deactivated successfully. Feb 13 19:17:17.684657 containerd[1487]: time="2025-02-13T19:17:17.684600767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:17.685764 containerd[1487]: time="2025-02-13T19:17:17.685707615Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769258" Feb 13 19:17:17.686467 containerd[1487]: time="2025-02-13T19:17:17.686439458Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:17.688239 containerd[1487]: time="2025-02-13T19:17:17.688197470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:17.689003 containerd[1487]: time="2025-02-13T19:17:17.688970615Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.117314267s" Feb 13 19:17:17.689041 containerd[1487]: time="2025-02-13T19:17:17.689016809Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"" Feb 13 19:17:17.689863 containerd[1487]: time="2025-02-13T19:17:17.689817903Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:17:18.348601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount723337438.mount: Deactivated successfully. 
Feb 13 19:17:19.067490 containerd[1487]: time="2025-02-13T19:17:19.067316478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:19.068407 containerd[1487]: time="2025-02-13T19:17:19.068361691Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 19:17:19.069022 containerd[1487]: time="2025-02-13T19:17:19.068974888Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:19.072467 containerd[1487]: time="2025-02-13T19:17:19.072436463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:19.073601 containerd[1487]: time="2025-02-13T19:17:19.073565274Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.38371305s" Feb 13 19:17:19.073601 containerd[1487]: time="2025-02-13T19:17:19.073600099Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 19:17:19.074233 containerd[1487]: time="2025-02-13T19:17:19.074043015Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:17:19.571752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount79872955.mount: Deactivated successfully. 
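The pull messages above report both the image size and the wall-clock duration, so the effective pull rate can be read straight off the log. A small illustrative calculation, with the numbers copied from the kube-proxy and coredns entries above:

    # Sketch: effective pull rate from the sizes and durations logged above.
    pulls = {
        "kube-proxy:v1.31.6": (26768275, 1.117314267),  # bytes, seconds
        "coredns:v1.11.1": (16482581, 1.38371305),
    }
    for image, (size_bytes, seconds) in pulls.items():
        print(f"{image}: {size_bytes / seconds / (1024 * 1024):.1f} MiB/s")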
Feb 13 19:17:19.576198 containerd[1487]: time="2025-02-13T19:17:19.576140907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:19.577070 containerd[1487]: time="2025-02-13T19:17:19.577017001Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Feb 13 19:17:19.577689 containerd[1487]: time="2025-02-13T19:17:19.577654404Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:19.580096 containerd[1487]: time="2025-02-13T19:17:19.580060466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:19.580930 containerd[1487]: time="2025-02-13T19:17:19.580899450Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 506.827339ms" Feb 13 19:17:19.580965 containerd[1487]: time="2025-02-13T19:17:19.580928504Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 19:17:19.581561 containerd[1487]: time="2025-02-13T19:17:19.581536893Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 19:17:20.140725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1865182454.mount: Deactivated successfully. Feb 13 19:17:23.606286 containerd[1487]: time="2025-02-13T19:17:23.606238664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:23.607296 containerd[1487]: time="2025-02-13T19:17:23.606716553Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Feb 13 19:17:23.608033 containerd[1487]: time="2025-02-13T19:17:23.607950720Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:23.612194 containerd[1487]: time="2025-02-13T19:17:23.612153414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:23.613312 containerd[1487]: time="2025-02-13T19:17:23.613258838Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.031690372s" Feb 13 19:17:23.613312 containerd[1487]: time="2025-02-13T19:17:23.613293957Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Feb 13 19:17:24.482382 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
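The kubelet exits and the "Scheduled restart job" entries above are roughly ten seconds apart, consistent with a RestartSec of about 10 s in the unit file (an inference from the timestamps, not a quote from the unit). A quick check of those gaps using values copied from the log:

    # Sketch: gap between each kubelet exit and the next scheduled restart,
    # using timestamps copied from the entries above (same day assumed).
    from datetime import datetime

    def t(stamp):
        return datetime.strptime(stamp, "%H:%M:%S.%f")

    gaps = [
        ("exit 1 -> restart 1", "19:17:04.011287", "19:17:14.232429"),
        ("exit 2 -> restart 2", "19:17:14.379652", "19:17:24.482382"),
    ]
    for label, start, end in gaps:
        print(f"{label}: {(t(end) - t(start)).total_seconds():.1f} s")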
Feb 13 19:17:24.491220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:17:24.579806 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:17:24.583753 (kubelet)[2102]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:17:24.630960 kubelet[2102]: E0213 19:17:24.630913 2102 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:17:24.633449 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:17:24.633598 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:17:24.633906 systemd[1]: kubelet.service: Consumed 122ms CPU time, 97M memory peak. Feb 13 19:17:29.055023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:17:29.055158 systemd[1]: kubelet.service: Consumed 122ms CPU time, 97M memory peak. Feb 13 19:17:29.069351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:17:29.089354 systemd[1]: Reload requested from client PID 2117 ('systemctl') (unit session-7.scope)... Feb 13 19:17:29.089370 systemd[1]: Reloading... Feb 13 19:17:29.158077 zram_generator::config[2164]: No configuration found. Feb 13 19:17:29.392201 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:17:29.463378 systemd[1]: Reloading finished in 373 ms. Feb 13 19:17:29.498270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:17:29.500407 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:17:29.502247 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:17:29.503244 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:17:29.503285 systemd[1]: kubelet.service: Consumed 76ms CPU time, 82.4M memory peak. Feb 13 19:17:29.506161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:17:29.596638 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:17:29.600738 (kubelet)[2208]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:17:29.635089 kubelet[2208]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:17:29.635089 kubelet[2208]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:17:29.635089 kubelet[2208]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
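All three kubelet failures so far come down to the missing /var/lib/kubelet/config.yaml, which on a node like this is normally written by kubeadm during init/join rather than by hand; the deprecation warnings above point at the same file (flags moving into whatever is passed via --config). Purely as an illustration of the shape of that file, a hypothetical minimal KubeletConfiguration written to a scratch path:

    # Hypothetical example only: a minimal KubeletConfiguration, not the file
    # kubeadm actually generates for this node. Written to /tmp, not to
    # /var/lib/kubelet/config.yaml.
    import pathlib

    lines = [
        "apiVersion: kubelet.config.k8s.io/v1beta1",
        "kind: KubeletConfiguration",
        "cgroupDriver: systemd",
        "staticPodPath: /etc/kubernetes/manifests",
    ]
    path = pathlib.Path("/tmp/kubelet-config-example.yaml")
    path.write_text("\n".join(lines) + "\n")
    print("wrote", path)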
Feb 13 19:17:29.635390 kubelet[2208]: I0213 19:17:29.635195 2208 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:17:30.290642 kubelet[2208]: I0213 19:17:30.290593 2208 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:17:30.290642 kubelet[2208]: I0213 19:17:30.290629 2208 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:17:30.290907 kubelet[2208]: I0213 19:17:30.290880 2208 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:17:30.317464 kubelet[2208]: E0213 19:17:30.317425 2208 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:17:30.318533 kubelet[2208]: I0213 19:17:30.318505 2208 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:17:30.327646 kubelet[2208]: E0213 19:17:30.327609 2208 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:17:30.327646 kubelet[2208]: I0213 19:17:30.327643 2208 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:17:30.331210 kubelet[2208]: I0213 19:17:30.331175 2208 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:17:30.331892 kubelet[2208]: I0213 19:17:30.331865 2208 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:17:30.332049 kubelet[2208]: I0213 19:17:30.332012 2208 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:17:30.332207 kubelet[2208]: I0213 19:17:30.332043 2208 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:17:30.332358 kubelet[2208]: I0213 19:17:30.332338 2208 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:17:30.332358 kubelet[2208]: I0213 19:17:30.332351 2208 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:17:30.332545 kubelet[2208]: I0213 19:17:30.332526 2208 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:17:30.334297 kubelet[2208]: I0213 19:17:30.334131 2208 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:17:30.334297 kubelet[2208]: I0213 19:17:30.334159 2208 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:17:30.334297 kubelet[2208]: I0213 19:17:30.334183 2208 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:17:30.334297 kubelet[2208]: I0213 19:17:30.334193 2208 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:17:30.335863 kubelet[2208]: I0213 19:17:30.335829 2208 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:17:30.338293 kubelet[2208]: W0213 19:17:30.338249 2208 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Feb 13 19:17:30.338494 kubelet[2208]: E0213 19:17:30.338405 2208 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:17:30.338494 kubelet[2208]: I0213 19:17:30.338363 2208 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:17:30.338767 kubelet[2208]: W0213 19:17:30.338725 2208 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Feb 13 19:17:30.338803 kubelet[2208]: E0213 19:17:30.338779 2208 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:17:30.341465 kubelet[2208]: W0213 19:17:30.341157 2208 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:17:30.342029 kubelet[2208]: I0213 19:17:30.342012 2208 server.go:1269] "Started kubelet" Feb 13 19:17:30.342996 kubelet[2208]: I0213 19:17:30.342262 2208 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:17:30.345158 kubelet[2208]: I0213 19:17:30.345094 2208 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:17:30.345515 kubelet[2208]: I0213 19:17:30.345319 2208 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:17:30.345515 kubelet[2208]: I0213 19:17:30.345371 2208 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:17:30.346636 kubelet[2208]: I0213 19:17:30.346222 2208 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:17:30.347414 kubelet[2208]: I0213 19:17:30.347396 2208 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:17:30.348846 kubelet[2208]: E0213 19:17:30.348285 2208 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:17:30.349497 kubelet[2208]: E0213 19:17:30.349452 2208 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="200ms" Feb 13 19:17:30.349617 kubelet[2208]: I0213 19:17:30.349602 2208 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:17:30.349674 kubelet[2208]: I0213 19:17:30.347668 2208 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:17:30.349905 kubelet[2208]: I0213 19:17:30.349883 2208 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:17:30.350377 kubelet[2208]: I0213 19:17:30.349696 2208 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:17:30.350457 kubelet[2208]: I0213 19:17:30.350437 2208 factory.go:219] Registration of the crio container factory 
failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:17:30.350570 kubelet[2208]: W0213 19:17:30.350137 2208 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Feb 13 19:17:30.350613 kubelet[2208]: E0213 19:17:30.350589 2208 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:17:30.351075 kubelet[2208]: E0213 19:17:30.351033 2208 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:17:30.353064 kubelet[2208]: E0213 19:17:30.349977 2208 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.108:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.108:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823da9d539274da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:17:30.341971162 +0000 UTC m=+0.738152177,LastTimestamp:2025-02-13 19:17:30.341971162 +0000 UTC m=+0.738152177,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:17:30.354194 kubelet[2208]: I0213 19:17:30.354110 2208 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:17:30.361410 kubelet[2208]: I0213 19:17:30.361356 2208 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:17:30.362452 kubelet[2208]: I0213 19:17:30.362402 2208 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:17:30.362452 kubelet[2208]: I0213 19:17:30.362435 2208 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:17:30.362452 kubelet[2208]: I0213 19:17:30.362457 2208 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:17:30.362573 kubelet[2208]: E0213 19:17:30.362498 2208 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:17:30.366070 kubelet[2208]: W0213 19:17:30.365929 2208 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Feb 13 19:17:30.366070 kubelet[2208]: E0213 19:17:30.365987 2208 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:17:30.367079 kubelet[2208]: I0213 19:17:30.367059 2208 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:17:30.367079 kubelet[2208]: I0213 19:17:30.367077 2208 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:17:30.367169 kubelet[2208]: I0213 19:17:30.367095 2208 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:17:30.436975 kubelet[2208]: I0213 19:17:30.436927 2208 policy_none.go:49] "None policy: Start" Feb 13 19:17:30.437850 kubelet[2208]: I0213 19:17:30.437778 2208 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:17:30.437850 kubelet[2208]: I0213 19:17:30.437847 2208 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:17:30.443951 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:17:30.450023 kubelet[2208]: E0213 19:17:30.449982 2208 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:17:30.454808 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:17:30.457513 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 19:17:30.463285 kubelet[2208]: E0213 19:17:30.463245 2208 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:17:30.469021 kubelet[2208]: I0213 19:17:30.468801 2208 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:17:30.469108 kubelet[2208]: I0213 19:17:30.469050 2208 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:17:30.469108 kubelet[2208]: I0213 19:17:30.469064 2208 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:17:30.469733 kubelet[2208]: I0213 19:17:30.469585 2208 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:17:30.470613 kubelet[2208]: E0213 19:17:30.470588 2208 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:17:30.550271 kubelet[2208]: E0213 19:17:30.550156 2208 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="400ms" Feb 13 19:17:30.570239 kubelet[2208]: I0213 19:17:30.570192 2208 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:17:30.570599 kubelet[2208]: E0213 19:17:30.570561 2208 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Feb 13 19:17:30.674490 systemd[1]: Created slice kubepods-burstable-pod820e9e248f573509a897237c267d3ba8.slice - libcontainer container kubepods-burstable-pod820e9e248f573509a897237c267d3ba8.slice. Feb 13 19:17:30.689332 systemd[1]: Created slice kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice - libcontainer container kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice. Feb 13 19:17:30.692364 systemd[1]: Created slice kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice - libcontainer container kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice. 
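The node-lease controller's retry interval grows across these failures while the API server at 10.0.0.108:6443 stays unreachable: 200ms earlier, 400ms here, and 800ms a little further down, i.e. it doubles on each consecutive failure. A toy sketch of that kind of capped doubling backoff, where the factor and cap are assumptions read off the logged intervals rather than taken from the kubelet source:

    # Sketch: capped doubling backoff matching the logged intervals; the
    # doubling factor and cap are assumptions, not kubelet's actual code.
    def backoff_intervals(start_ms=200, factor=2, cap_ms=7000, steps=6):
        interval = start_ms
        for _ in range(steps):
            yield interval
            interval = min(interval * factor, cap_ms)

    print(list(backoff_intervals()))  # 200, 400, 800, 1600, 3200, 6400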
Feb 13 19:17:30.751032 kubelet[2208]: I0213 19:17:30.750945 2208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/820e9e248f573509a897237c267d3ba8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"820e9e248f573509a897237c267d3ba8\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:17:30.751032 kubelet[2208]: I0213 19:17:30.751013 2208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:17:30.751032 kubelet[2208]: I0213 19:17:30.751039 2208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:17:30.751442 kubelet[2208]: I0213 19:17:30.751061 2208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:17:30.751442 kubelet[2208]: I0213 19:17:30.751077 2208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:17:30.751442 kubelet[2208]: I0213 19:17:30.751094 2208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/820e9e248f573509a897237c267d3ba8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"820e9e248f573509a897237c267d3ba8\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:17:30.751442 kubelet[2208]: I0213 19:17:30.751108 2208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/820e9e248f573509a897237c267d3ba8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"820e9e248f573509a897237c267d3ba8\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:17:30.751442 kubelet[2208]: I0213 19:17:30.751123 2208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:17:30.751556 kubelet[2208]: I0213 19:17:30.751136 2208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 13 19:17:30.772168 kubelet[2208]: I0213 19:17:30.772113 2208 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:17:30.772510 kubelet[2208]: E0213 19:17:30.772476 2208 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Feb 13 19:17:30.951071 kubelet[2208]: E0213 19:17:30.950919 2208 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="800ms" Feb 13 19:17:30.988353 containerd[1487]: time="2025-02-13T19:17:30.988235914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:820e9e248f573509a897237c267d3ba8,Namespace:kube-system,Attempt:0,}" Feb 13 19:17:30.992033 containerd[1487]: time="2025-02-13T19:17:30.991947289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,}" Feb 13 19:17:30.994852 containerd[1487]: time="2025-02-13T19:17:30.994815017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,}" Feb 13 19:17:31.173898 kubelet[2208]: I0213 19:17:31.173858 2208 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:17:31.174258 kubelet[2208]: E0213 19:17:31.174227 2208 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Feb 13 19:17:31.197647 kubelet[2208]: W0213 19:17:31.197612 2208 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Feb 13 19:17:31.197728 kubelet[2208]: E0213 19:17:31.197655 2208 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:17:31.381297 kubelet[2208]: W0213 19:17:31.381225 2208 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Feb 13 19:17:31.381297 kubelet[2208]: E0213 19:17:31.381299 2208 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:17:31.450065 kubelet[2208]: W0213 19:17:31.449979 2208 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
10.0.0.108:6443: connect: connection refused Feb 13 19:17:31.450119 kubelet[2208]: E0213 19:17:31.450072 2208 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:17:31.469317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount520752818.mount: Deactivated successfully. Feb 13 19:17:31.474760 containerd[1487]: time="2025-02-13T19:17:31.474710052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:17:31.476596 containerd[1487]: time="2025-02-13T19:17:31.476552553Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 19:17:31.477412 containerd[1487]: time="2025-02-13T19:17:31.477371785Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:17:31.478807 containerd[1487]: time="2025-02-13T19:17:31.478773359Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:17:31.479171 containerd[1487]: time="2025-02-13T19:17:31.479130775Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:17:31.480482 containerd[1487]: time="2025-02-13T19:17:31.480442795Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:17:31.480557 containerd[1487]: time="2025-02-13T19:17:31.480524626Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:17:31.482382 containerd[1487]: time="2025-02-13T19:17:31.482343959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:17:31.485049 containerd[1487]: time="2025-02-13T19:17:31.485008493Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 496.691984ms" Feb 13 19:17:31.486768 containerd[1487]: time="2025-02-13T19:17:31.486730189Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 494.701265ms" Feb 13 19:17:31.490171 containerd[1487]: time="2025-02-13T19:17:31.490137086Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 495.265125ms" Feb 13 19:17:31.608808 containerd[1487]: time="2025-02-13T19:17:31.608675784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:17:31.608808 containerd[1487]: time="2025-02-13T19:17:31.608747092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:17:31.608808 containerd[1487]: time="2025-02-13T19:17:31.608763258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:17:31.609080 containerd[1487]: time="2025-02-13T19:17:31.608841768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:17:31.609806 containerd[1487]: time="2025-02-13T19:17:31.609731026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:17:31.609806 containerd[1487]: time="2025-02-13T19:17:31.609782086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:17:31.609895 containerd[1487]: time="2025-02-13T19:17:31.609797131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:17:31.609895 containerd[1487]: time="2025-02-13T19:17:31.609865518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:17:31.610029 containerd[1487]: time="2025-02-13T19:17:31.609857474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:17:31.610029 containerd[1487]: time="2025-02-13T19:17:31.609905093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:17:31.610029 containerd[1487]: time="2025-02-13T19:17:31.609919938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:17:31.610550 containerd[1487]: time="2025-02-13T19:17:31.609983042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:17:31.630203 systemd[1]: Started cri-containerd-2ed7322fc8213781387d1ea160f20291c02f561f40e5a19ea22dc7739ea8cd91.scope - libcontainer container 2ed7322fc8213781387d1ea160f20291c02f561f40e5a19ea22dc7739ea8cd91. Feb 13 19:17:31.631375 systemd[1]: Started cri-containerd-76f5bbedf281b00e2daf83495f60dd18c8fcfdb22241070419e0609c5f0b7bb0.scope - libcontainer container 76f5bbedf281b00e2daf83495f60dd18c8fcfdb22241070419e0609c5f0b7bb0. Feb 13 19:17:31.632325 systemd[1]: Started cri-containerd-f1ddc48b59cbca8d44a5ec9d7f18e35d39261e94529e34cfaf912e3749fcd93d.scope - libcontainer container f1ddc48b59cbca8d44a5ec9d7f18e35d39261e94529e34cfaf912e3749fcd93d. 
Feb 13 19:17:31.659862 containerd[1487]: time="2025-02-13T19:17:31.659541154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:820e9e248f573509a897237c267d3ba8,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ed7322fc8213781387d1ea160f20291c02f561f40e5a19ea22dc7739ea8cd91\"" Feb 13 19:17:31.663283 containerd[1487]: time="2025-02-13T19:17:31.663234880Z" level=info msg="CreateContainer within sandbox \"2ed7322fc8213781387d1ea160f20291c02f561f40e5a19ea22dc7739ea8cd91\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:17:31.663452 containerd[1487]: time="2025-02-13T19:17:31.663427753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1ddc48b59cbca8d44a5ec9d7f18e35d39261e94529e34cfaf912e3749fcd93d\"" Feb 13 19:17:31.665635 containerd[1487]: time="2025-02-13T19:17:31.665550842Z" level=info msg="CreateContainer within sandbox \"f1ddc48b59cbca8d44a5ec9d7f18e35d39261e94529e34cfaf912e3749fcd93d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:17:31.670563 containerd[1487]: time="2025-02-13T19:17:31.670533339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"76f5bbedf281b00e2daf83495f60dd18c8fcfdb22241070419e0609c5f0b7bb0\"" Feb 13 19:17:31.672940 containerd[1487]: time="2025-02-13T19:17:31.672783876Z" level=info msg="CreateContainer within sandbox \"76f5bbedf281b00e2daf83495f60dd18c8fcfdb22241070419e0609c5f0b7bb0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:17:31.683067 containerd[1487]: time="2025-02-13T19:17:31.683005689Z" level=info msg="CreateContainer within sandbox \"2ed7322fc8213781387d1ea160f20291c02f561f40e5a19ea22dc7739ea8cd91\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7fdf08d5e33c2c74d39871b9c0c5ae5c755d5e330bf2c52c3d1b010acec8afc6\"" Feb 13 19:17:31.683538 containerd[1487]: time="2025-02-13T19:17:31.683512722Z" level=info msg="StartContainer for \"7fdf08d5e33c2c74d39871b9c0c5ae5c755d5e330bf2c52c3d1b010acec8afc6\"" Feb 13 19:17:31.684136 containerd[1487]: time="2025-02-13T19:17:31.684107268Z" level=info msg="CreateContainer within sandbox \"f1ddc48b59cbca8d44a5ec9d7f18e35d39261e94529e34cfaf912e3749fcd93d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0c2999fcd9d4cec1d061c84a750d5f320f7d5767a0da2ea0494e83bdc6aefaa7\"" Feb 13 19:17:31.684499 containerd[1487]: time="2025-02-13T19:17:31.684478729Z" level=info msg="StartContainer for \"0c2999fcd9d4cec1d061c84a750d5f320f7d5767a0da2ea0494e83bdc6aefaa7\"" Feb 13 19:17:31.686529 containerd[1487]: time="2025-02-13T19:17:31.686493737Z" level=info msg="CreateContainer within sandbox \"76f5bbedf281b00e2daf83495f60dd18c8fcfdb22241070419e0609c5f0b7bb0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"07b0b72c132a7106b049597eae32dda8f14d4d106100808a78d5069732eca783\"" Feb 13 19:17:31.687305 containerd[1487]: time="2025-02-13T19:17:31.687074278Z" level=info msg="StartContainer for \"07b0b72c132a7106b049597eae32dda8f14d4d106100808a78d5069732eca783\"" Feb 13 19:17:31.698132 kubelet[2208]: W0213 19:17:31.697916 2208 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Feb 13 19:17:31.698252 kubelet[2208]: E0213 19:17:31.698153 2208 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:17:31.713177 systemd[1]: Started cri-containerd-0c2999fcd9d4cec1d061c84a750d5f320f7d5767a0da2ea0494e83bdc6aefaa7.scope - libcontainer container 0c2999fcd9d4cec1d061c84a750d5f320f7d5767a0da2ea0494e83bdc6aefaa7. Feb 13 19:17:31.716671 systemd[1]: Started cri-containerd-07b0b72c132a7106b049597eae32dda8f14d4d106100808a78d5069732eca783.scope - libcontainer container 07b0b72c132a7106b049597eae32dda8f14d4d106100808a78d5069732eca783. Feb 13 19:17:31.717708 systemd[1]: Started cri-containerd-7fdf08d5e33c2c74d39871b9c0c5ae5c755d5e330bf2c52c3d1b010acec8afc6.scope - libcontainer container 7fdf08d5e33c2c74d39871b9c0c5ae5c755d5e330bf2c52c3d1b010acec8afc6. Feb 13 19:17:31.752921 kubelet[2208]: E0213 19:17:31.751482 2208 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="1.6s" Feb 13 19:17:31.767104 containerd[1487]: time="2025-02-13T19:17:31.767056974Z" level=info msg="StartContainer for \"7fdf08d5e33c2c74d39871b9c0c5ae5c755d5e330bf2c52c3d1b010acec8afc6\" returns successfully" Feb 13 19:17:31.767328 containerd[1487]: time="2025-02-13T19:17:31.767113156Z" level=info msg="StartContainer for \"07b0b72c132a7106b049597eae32dda8f14d4d106100808a78d5069732eca783\" returns successfully" Feb 13 19:17:31.767449 containerd[1487]: time="2025-02-13T19:17:31.767116637Z" level=info msg="StartContainer for \"0c2999fcd9d4cec1d061c84a750d5f320f7d5767a0da2ea0494e83bdc6aefaa7\" returns successfully" Feb 13 19:17:31.977605 kubelet[2208]: I0213 19:17:31.977471 2208 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:17:33.604575 kubelet[2208]: E0213 19:17:33.604517 2208 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 19:17:33.774266 kubelet[2208]: I0213 19:17:33.774220 2208 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 19:17:34.337346 kubelet[2208]: I0213 19:17:34.337261 2208 apiserver.go:52] "Watching apiserver" Feb 13 19:17:34.350035 kubelet[2208]: I0213 19:17:34.350003 2208 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:17:35.569946 systemd[1]: Reload requested from client PID 2487 ('systemctl') (unit session-7.scope)... Feb 13 19:17:35.569961 systemd[1]: Reloading... Feb 13 19:17:35.641067 zram_generator::config[2534]: No configuration found. Feb 13 19:17:35.716762 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:17:35.798804 systemd[1]: Reloading finished in 228 ms. 
Feb 13 19:17:35.820168 kubelet[2208]: I0213 19:17:35.820048 2208 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:17:35.820347 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:17:35.837419 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:17:35.837697 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:17:35.837760 systemd[1]: kubelet.service: Consumed 1.083s CPU time, 117.9M memory peak. Feb 13 19:17:35.845220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:17:35.944020 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:17:35.947964 (kubelet)[2573]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:17:35.983064 kubelet[2573]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:17:35.983064 kubelet[2573]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:17:35.983064 kubelet[2573]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:17:35.983387 kubelet[2573]: I0213 19:17:35.983140 2573 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:17:35.989871 kubelet[2573]: I0213 19:17:35.989837 2573 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:17:35.990586 kubelet[2573]: I0213 19:17:35.989964 2573 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:17:35.990586 kubelet[2573]: I0213 19:17:35.990210 2573 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:17:35.991795 kubelet[2573]: I0213 19:17:35.991773 2573 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:17:35.993875 kubelet[2573]: I0213 19:17:35.993837 2573 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:17:35.998609 kubelet[2573]: E0213 19:17:35.998578 2573 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:17:35.998609 kubelet[2573]: I0213 19:17:35.998607 2573 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:17:36.000959 kubelet[2573]: I0213 19:17:36.000942 2573 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:17:36.001072 kubelet[2573]: I0213 19:17:36.001058 2573 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:17:36.001173 kubelet[2573]: I0213 19:17:36.001150 2573 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:17:36.001319 kubelet[2573]: I0213 19:17:36.001173 2573 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:17:36.001393 kubelet[2573]: I0213 19:17:36.001329 2573 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:17:36.001393 kubelet[2573]: I0213 19:17:36.001338 2573 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:17:36.001393 kubelet[2573]: I0213 19:17:36.001365 2573 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:17:36.001776 kubelet[2573]: I0213 19:17:36.001457 2573 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:17:36.001776 kubelet[2573]: I0213 19:17:36.001469 2573 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:17:36.001776 kubelet[2573]: I0213 19:17:36.001487 2573 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:17:36.001776 kubelet[2573]: I0213 19:17:36.001496 2573 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:17:36.002293 kubelet[2573]: I0213 19:17:36.002271 2573 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:17:36.003093 kubelet[2573]: I0213 19:17:36.002826 2573 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:17:36.007004 kubelet[2573]: I0213 19:17:36.004932 2573 server.go:1269] "Started kubelet" Feb 13 19:17:36.007136 kubelet[2573]: I0213 19:17:36.005867 2573 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:17:36.008268 kubelet[2573]: I0213 
19:17:36.005920 2573 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:17:36.009187 kubelet[2573]: I0213 19:17:36.008783 2573 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:17:36.009187 kubelet[2573]: I0213 19:17:36.006475 2573 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:17:36.009187 kubelet[2573]: I0213 19:17:36.006378 2573 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:17:36.010270 kubelet[2573]: I0213 19:17:36.010046 2573 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:17:36.011496 kubelet[2573]: I0213 19:17:36.011448 2573 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:17:36.011677 kubelet[2573]: I0213 19:17:36.011660 2573 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:17:36.011833 kubelet[2573]: I0213 19:17:36.011822 2573 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:17:36.012630 kubelet[2573]: I0213 19:17:36.012607 2573 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:17:36.012786 kubelet[2573]: I0213 19:17:36.012768 2573 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:17:36.013041 kubelet[2573]: E0213 19:17:36.013025 2573 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:17:36.014459 kubelet[2573]: E0213 19:17:36.014439 2573 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:17:36.023595 kubelet[2573]: I0213 19:17:36.023538 2573 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:17:36.025105 kubelet[2573]: I0213 19:17:36.025076 2573 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:17:36.026256 kubelet[2573]: I0213 19:17:36.026164 2573 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:17:36.029054 kubelet[2573]: I0213 19:17:36.029026 2573 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:17:36.029126 kubelet[2573]: E0213 19:17:36.029071 2573 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:17:36.032035 kubelet[2573]: I0213 19:17:36.032010 2573 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:17:36.060364 kubelet[2573]: I0213 19:17:36.060344 2573 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:17:36.060364 kubelet[2573]: I0213 19:17:36.060358 2573 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:17:36.060498 kubelet[2573]: I0213 19:17:36.060376 2573 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:17:36.060521 kubelet[2573]: I0213 19:17:36.060503 2573 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:17:36.060548 kubelet[2573]: I0213 19:17:36.060513 2573 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:17:36.060548 kubelet[2573]: I0213 19:17:36.060530 2573 policy_none.go:49] "None policy: Start" Feb 13 19:17:36.061024 kubelet[2573]: I0213 19:17:36.061008 2573 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:17:36.061024 kubelet[2573]: I0213 19:17:36.061027 2573 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:17:36.061198 kubelet[2573]: I0213 19:17:36.061183 2573 state_mem.go:75] "Updated machine memory state" Feb 13 19:17:36.064618 kubelet[2573]: I0213 19:17:36.064600 2573 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:17:36.064981 kubelet[2573]: I0213 19:17:36.064747 2573 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:17:36.064981 kubelet[2573]: I0213 19:17:36.064764 2573 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:17:36.064981 kubelet[2573]: I0213 19:17:36.064941 2573 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:17:36.166571 kubelet[2573]: I0213 19:17:36.166470 2573 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:17:36.174724 kubelet[2573]: I0213 19:17:36.174682 2573 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Feb 13 19:17:36.174826 kubelet[2573]: I0213 19:17:36.174775 2573 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 19:17:36.313722 kubelet[2573]: I0213 19:17:36.313681 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/820e9e248f573509a897237c267d3ba8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"820e9e248f573509a897237c267d3ba8\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:17:36.313722 kubelet[2573]: I0213 19:17:36.313723 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/820e9e248f573509a897237c267d3ba8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"820e9e248f573509a897237c267d3ba8\") " pod="kube-system/kube-apiserver-localhost" Feb 13 
19:17:36.313887 kubelet[2573]: I0213 19:17:36.313744 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:17:36.313887 kubelet[2573]: I0213 19:17:36.313763 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:17:36.313887 kubelet[2573]: I0213 19:17:36.313780 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:17:36.313887 kubelet[2573]: I0213 19:17:36.313793 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/820e9e248f573509a897237c267d3ba8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"820e9e248f573509a897237c267d3ba8\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:17:36.313887 kubelet[2573]: I0213 19:17:36.313806 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:17:36.314018 kubelet[2573]: I0213 19:17:36.313820 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:17:36.314018 kubelet[2573]: I0213 19:17:36.313835 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:17:36.573461 sudo[2606]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 19:17:36.573761 sudo[2606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 19:17:37.002163 kubelet[2573]: I0213 19:17:37.002140 2573 apiserver.go:52] "Watching apiserver" Feb 13 19:17:37.012348 kubelet[2573]: I0213 19:17:37.012230 2573 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:17:37.013253 sudo[2606]: pam_unix(sudo:session): session closed for user root Feb 13 19:17:37.047095 kubelet[2573]: E0213 19:17:37.047060 2573 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already 
exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:17:37.068879 kubelet[2573]: I0213 19:17:37.068806 2573 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.0687797589999999 podStartE2EDuration="1.068779759s" podCreationTimestamp="2025-02-13 19:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:17:37.060638373 +0000 UTC m=+1.109733300" watchObservedRunningTime="2025-02-13 19:17:37.068779759 +0000 UTC m=+1.117874686" Feb 13 19:17:37.069064 kubelet[2573]: I0213 19:17:37.068933 2573 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.06892855 podStartE2EDuration="1.06892855s" podCreationTimestamp="2025-02-13 19:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:17:37.068598561 +0000 UTC m=+1.117693488" watchObservedRunningTime="2025-02-13 19:17:37.06892855 +0000 UTC m=+1.118023477" Feb 13 19:17:37.093951 kubelet[2573]: I0213 19:17:37.092302 2573 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.092286926 podStartE2EDuration="1.092286926s" podCreationTimestamp="2025-02-13 19:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:17:37.07570193 +0000 UTC m=+1.124796857" watchObservedRunningTime="2025-02-13 19:17:37.092286926 +0000 UTC m=+1.141381853" Feb 13 19:17:39.480153 sudo[1665]: pam_unix(sudo:session): session closed for user root Feb 13 19:17:39.481195 sshd[1664]: Connection closed by 10.0.0.1 port 55898 Feb 13 19:17:39.481527 sshd-session[1661]: pam_unix(sshd:session): session closed for user core Feb 13 19:17:39.484584 systemd[1]: sshd@6-10.0.0.108:22-10.0.0.1:55898.service: Deactivated successfully. Feb 13 19:17:39.486557 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:17:39.486751 systemd[1]: session-7.scope: Consumed 8.443s CPU time, 262M memory peak. Feb 13 19:17:39.487680 systemd-logind[1464]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:17:39.489317 systemd-logind[1464]: Removed session 7. Feb 13 19:17:40.486605 kubelet[2573]: I0213 19:17:40.486556 2573 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:17:40.487832 kubelet[2573]: I0213 19:17:40.487574 2573 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:17:40.487864 containerd[1487]: time="2025-02-13T19:17:40.486882170Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:17:41.273173 systemd[1]: Created slice kubepods-besteffort-pod938df8a9_ab56_43c0_8966_591fd122bdc8.slice - libcontainer container kubepods-besteffort-pod938df8a9_ab56_43c0_8966_591fd122bdc8.slice. Feb 13 19:17:41.289066 systemd[1]: Created slice kubepods-burstable-pod56d30a0c_c539_41dd_80c1_3dd9cb1a2008.slice - libcontainer container kubepods-burstable-pod56d30a0c_c539_41dd_80c1_3dd9cb1a2008.slice. 
Feb 13 19:17:41.345220 kubelet[2573]: I0213 19:17:41.345169 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg7j5\" (UniqueName: \"kubernetes.io/projected/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-kube-api-access-bg7j5\") pod \"cilium-xcsnw\" (UID: \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\") " pod="kube-system/cilium-xcsnw" Feb 13 19:17:41.345220 kubelet[2573]: I0213 19:17:41.345221 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/938df8a9-ab56-43c0-8966-591fd122bdc8-xtables-lock\") pod \"kube-proxy-w9trr\" (UID: \"938df8a9-ab56-43c0-8966-591fd122bdc8\") " pod="kube-system/kube-proxy-w9trr" Feb 13 19:17:41.345402 kubelet[2573]: I0213 19:17:41.345241 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-bpf-maps\") pod \"cilium-xcsnw\" (UID: \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\") " pod="kube-system/cilium-xcsnw" Feb 13 19:17:41.345402 kubelet[2573]: I0213 19:17:41.345266 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-cilium-cgroup\") pod \"cilium-xcsnw\" (UID: \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\") " pod="kube-system/cilium-xcsnw" Feb 13 19:17:41.345402 kubelet[2573]: I0213 19:17:41.345301 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/938df8a9-ab56-43c0-8966-591fd122bdc8-lib-modules\") pod \"kube-proxy-w9trr\" (UID: \"938df8a9-ab56-43c0-8966-591fd122bdc8\") " pod="kube-system/kube-proxy-w9trr" Feb 13 19:17:41.345402 kubelet[2573]: I0213 19:17:41.345334 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-etc-cni-netd\") pod \"cilium-xcsnw\" (UID: \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\") " pod="kube-system/cilium-xcsnw" Feb 13 19:17:41.345402 kubelet[2573]: I0213 19:17:41.345357 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-lib-modules\") pod \"cilium-xcsnw\" (UID: \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\") " pod="kube-system/cilium-xcsnw" Feb 13 19:17:41.345402 kubelet[2573]: I0213 19:17:41.345376 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5nhb\" (UniqueName: \"kubernetes.io/projected/938df8a9-ab56-43c0-8966-591fd122bdc8-kube-api-access-v5nhb\") pod \"kube-proxy-w9trr\" (UID: \"938df8a9-ab56-43c0-8966-591fd122bdc8\") " pod="kube-system/kube-proxy-w9trr" Feb 13 19:17:41.345545 kubelet[2573]: I0213 19:17:41.345391 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-cilium-run\") pod \"cilium-xcsnw\" (UID: \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\") " pod="kube-system/cilium-xcsnw" Feb 13 19:17:41.345545 kubelet[2573]: I0213 19:17:41.345449 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-hostproc\") pod \"cilium-xcsnw\" (UID: \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\") " pod="kube-system/cilium-xcsnw" Feb 13 19:17:41.345545 kubelet[2573]: I0213 19:17:41.345489 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-xtables-lock\") pod \"cilium-xcsnw\" (UID: \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\") " pod="kube-system/cilium-xcsnw" Feb 13 19:17:41.345545 kubelet[2573]: I0213 19:17:41.345505 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-clustermesh-secrets\") pod \"cilium-xcsnw\" (UID: \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\") " pod="kube-system/cilium-xcsnw" Feb 13 19:17:41.345545 kubelet[2573]: I0213 19:17:41.345527 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-host-proc-sys-net\") pod \"cilium-xcsnw\" (UID: \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\") " pod="kube-system/cilium-xcsnw" Feb 13 19:17:41.345545 kubelet[2573]: I0213 19:17:41.345543 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-hubble-tls\") pod \"cilium-xcsnw\" (UID: \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\") " pod="kube-system/cilium-xcsnw" Feb 13 19:17:41.345689 kubelet[2573]: I0213 19:17:41.345561 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-cilium-config-path\") pod \"cilium-xcsnw\" (UID: \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\") " pod="kube-system/cilium-xcsnw" Feb 13 19:17:41.345689 kubelet[2573]: I0213 19:17:41.345587 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-host-proc-sys-kernel\") pod \"cilium-xcsnw\" (UID: \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\") " pod="kube-system/cilium-xcsnw" Feb 13 19:17:41.345689 kubelet[2573]: I0213 19:17:41.345626 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/938df8a9-ab56-43c0-8966-591fd122bdc8-kube-proxy\") pod \"kube-proxy-w9trr\" (UID: \"938df8a9-ab56-43c0-8966-591fd122bdc8\") " pod="kube-system/kube-proxy-w9trr" Feb 13 19:17:41.345689 kubelet[2573]: I0213 19:17:41.345644 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-cni-path\") pod \"cilium-xcsnw\" (UID: \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\") " pod="kube-system/cilium-xcsnw" Feb 13 19:17:41.561896 systemd[1]: Created slice kubepods-besteffort-podc0179af5_04ad_4b1f_9791_c02326020bf8.slice - libcontainer container kubepods-besteffort-podc0179af5_04ad_4b1f_9791_c02326020bf8.slice. 
Feb 13 19:17:41.587170 containerd[1487]: time="2025-02-13T19:17:41.587124002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w9trr,Uid:938df8a9-ab56-43c0-8966-591fd122bdc8,Namespace:kube-system,Attempt:0,}" Feb 13 19:17:41.594066 containerd[1487]: time="2025-02-13T19:17:41.594018522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xcsnw,Uid:56d30a0c-c539-41dd-80c1-3dd9cb1a2008,Namespace:kube-system,Attempt:0,}" Feb 13 19:17:41.649440 kubelet[2573]: I0213 19:17:41.649392 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29v2h\" (UniqueName: \"kubernetes.io/projected/c0179af5-04ad-4b1f-9791-c02326020bf8-kube-api-access-29v2h\") pod \"cilium-operator-5d85765b45-stq77\" (UID: \"c0179af5-04ad-4b1f-9791-c02326020bf8\") " pod="kube-system/cilium-operator-5d85765b45-stq77" Feb 13 19:17:41.649440 kubelet[2573]: I0213 19:17:41.649433 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0179af5-04ad-4b1f-9791-c02326020bf8-cilium-config-path\") pod \"cilium-operator-5d85765b45-stq77\" (UID: \"c0179af5-04ad-4b1f-9791-c02326020bf8\") " pod="kube-system/cilium-operator-5d85765b45-stq77" Feb 13 19:17:41.650853 containerd[1487]: time="2025-02-13T19:17:41.650757745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:17:41.650853 containerd[1487]: time="2025-02-13T19:17:41.650813794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:17:41.650853 containerd[1487]: time="2025-02-13T19:17:41.650829397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:17:41.651506 containerd[1487]: time="2025-02-13T19:17:41.651394092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:17:41.652504 containerd[1487]: time="2025-02-13T19:17:41.652405542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:17:41.652568 containerd[1487]: time="2025-02-13T19:17:41.652528603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:17:41.652669 containerd[1487]: time="2025-02-13T19:17:41.652570130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:17:41.652788 containerd[1487]: time="2025-02-13T19:17:41.652744399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:17:41.673228 systemd[1]: Started cri-containerd-da5dd0007f059d2eb2ad9143c25b63f5166cba2364bcf6e43bf01ffc9946f0f3.scope - libcontainer container da5dd0007f059d2eb2ad9143c25b63f5166cba2364bcf6e43bf01ffc9946f0f3. Feb 13 19:17:41.676325 systemd[1]: Started cri-containerd-acc1e86f2213f1c3adf7a6814fa4f7dd8ad59fe350de57040959393132ed582c.scope - libcontainer container acc1e86f2213f1c3adf7a6814fa4f7dd8ad59fe350de57040959393132ed582c. 
Feb 13 19:17:41.695862 containerd[1487]: time="2025-02-13T19:17:41.695800361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w9trr,Uid:938df8a9-ab56-43c0-8966-591fd122bdc8,Namespace:kube-system,Attempt:0,} returns sandbox id \"da5dd0007f059d2eb2ad9143c25b63f5166cba2364bcf6e43bf01ffc9946f0f3\"" Feb 13 19:17:41.700164 containerd[1487]: time="2025-02-13T19:17:41.700122408Z" level=info msg="CreateContainer within sandbox \"da5dd0007f059d2eb2ad9143c25b63f5166cba2364bcf6e43bf01ffc9946f0f3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:17:41.704144 containerd[1487]: time="2025-02-13T19:17:41.704063791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xcsnw,Uid:56d30a0c-c539-41dd-80c1-3dd9cb1a2008,Namespace:kube-system,Attempt:0,} returns sandbox id \"acc1e86f2213f1c3adf7a6814fa4f7dd8ad59fe350de57040959393132ed582c\"" Feb 13 19:17:41.705936 containerd[1487]: time="2025-02-13T19:17:41.705875655Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:17:41.717374 containerd[1487]: time="2025-02-13T19:17:41.717333383Z" level=info msg="CreateContainer within sandbox \"da5dd0007f059d2eb2ad9143c25b63f5166cba2364bcf6e43bf01ffc9946f0f3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c5dfe0b182d245beee15a67da4647ef2dda1ffcb0e2eccbe581b7f3fec036210\"" Feb 13 19:17:41.718399 containerd[1487]: time="2025-02-13T19:17:41.717972490Z" level=info msg="StartContainer for \"c5dfe0b182d245beee15a67da4647ef2dda1ffcb0e2eccbe581b7f3fec036210\"" Feb 13 19:17:41.746212 systemd[1]: Started cri-containerd-c5dfe0b182d245beee15a67da4647ef2dda1ffcb0e2eccbe581b7f3fec036210.scope - libcontainer container c5dfe0b182d245beee15a67da4647ef2dda1ffcb0e2eccbe581b7f3fec036210. Feb 13 19:17:41.779161 containerd[1487]: time="2025-02-13T19:17:41.775365623Z" level=info msg="StartContainer for \"c5dfe0b182d245beee15a67da4647ef2dda1ffcb0e2eccbe581b7f3fec036210\" returns successfully" Feb 13 19:17:41.868157 containerd[1487]: time="2025-02-13T19:17:41.866939345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-stq77,Uid:c0179af5-04ad-4b1f-9791-c02326020bf8,Namespace:kube-system,Attempt:0,}" Feb 13 19:17:41.889798 containerd[1487]: time="2025-02-13T19:17:41.889331672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:17:41.889798 containerd[1487]: time="2025-02-13T19:17:41.889384441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:17:41.889798 containerd[1487]: time="2025-02-13T19:17:41.889399523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:17:41.889798 containerd[1487]: time="2025-02-13T19:17:41.889471335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:17:41.916197 systemd[1]: Started cri-containerd-c452b103969f15533065b064170560cdab2d19ad1d28062d35445193495ce6d8.scope - libcontainer container c452b103969f15533065b064170560cdab2d19ad1d28062d35445193495ce6d8. 
Feb 13 19:17:41.946827 containerd[1487]: time="2025-02-13T19:17:41.946782695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-stq77,Uid:c0179af5-04ad-4b1f-9791-c02326020bf8,Namespace:kube-system,Attempt:0,} returns sandbox id \"c452b103969f15533065b064170560cdab2d19ad1d28062d35445193495ce6d8\"" Feb 13 19:17:42.062922 kubelet[2573]: I0213 19:17:42.062857 2573 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w9trr" podStartSLOduration=1.062837111 podStartE2EDuration="1.062837111s" podCreationTimestamp="2025-02-13 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:17:42.062652922 +0000 UTC m=+6.111747849" watchObservedRunningTime="2025-02-13 19:17:42.062837111 +0000 UTC m=+6.111932038" Feb 13 19:17:45.837311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3099188279.mount: Deactivated successfully. Feb 13 19:17:47.129108 update_engine[1468]: I20250213 19:17:47.129042 1468 update_attempter.cc:509] Updating boot flags... Feb 13 19:17:47.161159 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2971) Feb 13 19:17:47.211066 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2970) Feb 13 19:17:48.288342 containerd[1487]: time="2025-02-13T19:17:48.288268386Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:48.288740 containerd[1487]: time="2025-02-13T19:17:48.288721839Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 19:17:48.290046 containerd[1487]: time="2025-02-13T19:17:48.290020511Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:48.292121 containerd[1487]: time="2025-02-13T19:17:48.292086273Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.586170931s" Feb 13 19:17:48.292219 containerd[1487]: time="2025-02-13T19:17:48.292120237Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 19:17:48.298928 containerd[1487]: time="2025-02-13T19:17:48.298898350Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:17:48.299796 containerd[1487]: time="2025-02-13T19:17:48.299766852Z" level=info msg="CreateContainer within sandbox \"acc1e86f2213f1c3adf7a6814fa4f7dd8ad59fe350de57040959393132ed582c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:17:48.337221 containerd[1487]: time="2025-02-13T19:17:48.337169752Z" level=info msg="CreateContainer within 
sandbox \"acc1e86f2213f1c3adf7a6814fa4f7dd8ad59fe350de57040959393132ed582c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e2e82370da06910e57ce505ccbe723507297b6936263936081577cd6850b671b\"" Feb 13 19:17:48.337644 containerd[1487]: time="2025-02-13T19:17:48.337624925Z" level=info msg="StartContainer for \"e2e82370da06910e57ce505ccbe723507297b6936263936081577cd6850b671b\"" Feb 13 19:17:48.355784 systemd[1]: run-containerd-runc-k8s.io-e2e82370da06910e57ce505ccbe723507297b6936263936081577cd6850b671b-runc.xQDJqp.mount: Deactivated successfully. Feb 13 19:17:48.366147 systemd[1]: Started cri-containerd-e2e82370da06910e57ce505ccbe723507297b6936263936081577cd6850b671b.scope - libcontainer container e2e82370da06910e57ce505ccbe723507297b6936263936081577cd6850b671b. Feb 13 19:17:48.387130 containerd[1487]: time="2025-02-13T19:17:48.387091758Z" level=info msg="StartContainer for \"e2e82370da06910e57ce505ccbe723507297b6936263936081577cd6850b671b\" returns successfully" Feb 13 19:17:48.437875 systemd[1]: cri-containerd-e2e82370da06910e57ce505ccbe723507297b6936263936081577cd6850b671b.scope: Deactivated successfully. Feb 13 19:17:48.539205 containerd[1487]: time="2025-02-13T19:17:48.533131538Z" level=info msg="shim disconnected" id=e2e82370da06910e57ce505ccbe723507297b6936263936081577cd6850b671b namespace=k8s.io Feb 13 19:17:48.539205 containerd[1487]: time="2025-02-13T19:17:48.539139042Z" level=warning msg="cleaning up after shim disconnected" id=e2e82370da06910e57ce505ccbe723507297b6936263936081577cd6850b671b namespace=k8s.io Feb 13 19:17:48.539205 containerd[1487]: time="2025-02-13T19:17:48.539153124Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:17:49.091076 containerd[1487]: time="2025-02-13T19:17:49.091004004Z" level=info msg="CreateContainer within sandbox \"acc1e86f2213f1c3adf7a6814fa4f7dd8ad59fe350de57040959393132ed582c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:17:49.117144 containerd[1487]: time="2025-02-13T19:17:49.117081192Z" level=info msg="CreateContainer within sandbox \"acc1e86f2213f1c3adf7a6814fa4f7dd8ad59fe350de57040959393132ed582c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"26b931c9661b6a4a89e150b09686ab2220d8a7389d3dfdd23d626cb39fe47865\"" Feb 13 19:17:49.117668 containerd[1487]: time="2025-02-13T19:17:49.117643655Z" level=info msg="StartContainer for \"26b931c9661b6a4a89e150b09686ab2220d8a7389d3dfdd23d626cb39fe47865\"" Feb 13 19:17:49.145181 systemd[1]: Started cri-containerd-26b931c9661b6a4a89e150b09686ab2220d8a7389d3dfdd23d626cb39fe47865.scope - libcontainer container 26b931c9661b6a4a89e150b09686ab2220d8a7389d3dfdd23d626cb39fe47865. Feb 13 19:17:49.165795 containerd[1487]: time="2025-02-13T19:17:49.165674850Z" level=info msg="StartContainer for \"26b931c9661b6a4a89e150b09686ab2220d8a7389d3dfdd23d626cb39fe47865\" returns successfully" Feb 13 19:17:49.187791 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:17:49.188017 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:17:49.188248 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:17:49.194287 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:17:49.194445 systemd[1]: cri-containerd-26b931c9661b6a4a89e150b09686ab2220d8a7389d3dfdd23d626cb39fe47865.scope: Deactivated successfully. Feb 13 19:17:49.206209 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 19:17:49.226608 containerd[1487]: time="2025-02-13T19:17:49.226536277Z" level=info msg="shim disconnected" id=26b931c9661b6a4a89e150b09686ab2220d8a7389d3dfdd23d626cb39fe47865 namespace=k8s.io Feb 13 19:17:49.226608 containerd[1487]: time="2025-02-13T19:17:49.226595043Z" level=warning msg="cleaning up after shim disconnected" id=26b931c9661b6a4a89e150b09686ab2220d8a7389d3dfdd23d626cb39fe47865 namespace=k8s.io Feb 13 19:17:49.226608 containerd[1487]: time="2025-02-13T19:17:49.226614845Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:17:49.336636 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2e82370da06910e57ce505ccbe723507297b6936263936081577cd6850b671b-rootfs.mount: Deactivated successfully. Feb 13 19:17:49.584798 containerd[1487]: time="2025-02-13T19:17:49.584735458Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:49.585501 containerd[1487]: time="2025-02-13T19:17:49.585210551Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 19:17:49.586093 containerd[1487]: time="2025-02-13T19:17:49.586040003Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:17:49.587515 containerd[1487]: time="2025-02-13T19:17:49.587455761Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.288521286s" Feb 13 19:17:49.587515 containerd[1487]: time="2025-02-13T19:17:49.587489685Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 19:17:49.589608 containerd[1487]: time="2025-02-13T19:17:49.589360854Z" level=info msg="CreateContainer within sandbox \"c452b103969f15533065b064170560cdab2d19ad1d28062d35445193495ce6d8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:17:49.599485 containerd[1487]: time="2025-02-13T19:17:49.599435497Z" level=info msg="CreateContainer within sandbox \"c452b103969f15533065b064170560cdab2d19ad1d28062d35445193495ce6d8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d3ece183055bd3c5a47dc9f871061fe8dfb65ee2d79afefb3a3d2a766435d850\"" Feb 13 19:17:49.600221 containerd[1487]: time="2025-02-13T19:17:49.600160538Z" level=info msg="StartContainer for \"d3ece183055bd3c5a47dc9f871061fe8dfb65ee2d79afefb3a3d2a766435d850\"" Feb 13 19:17:49.626184 systemd[1]: Started cri-containerd-d3ece183055bd3c5a47dc9f871061fe8dfb65ee2d79afefb3a3d2a766435d850.scope - libcontainer container d3ece183055bd3c5a47dc9f871061fe8dfb65ee2d79afefb3a3d2a766435d850. 
Feb 13 19:17:49.666115 containerd[1487]: time="2025-02-13T19:17:49.666066367Z" level=info msg="StartContainer for \"d3ece183055bd3c5a47dc9f871061fe8dfb65ee2d79afefb3a3d2a766435d850\" returns successfully" Feb 13 19:17:50.092008 containerd[1487]: time="2025-02-13T19:17:50.091908214Z" level=info msg="CreateContainer within sandbox \"acc1e86f2213f1c3adf7a6814fa4f7dd8ad59fe350de57040959393132ed582c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:17:50.098421 kubelet[2573]: I0213 19:17:50.098358 2573 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-stq77" podStartSLOduration=1.458176995 podStartE2EDuration="9.098339537s" podCreationTimestamp="2025-02-13 19:17:41 +0000 UTC" firstStartedPulling="2025-02-13 19:17:41.947982416 +0000 UTC m=+5.997077343" lastFinishedPulling="2025-02-13 19:17:49.588144998 +0000 UTC m=+13.637239885" observedRunningTime="2025-02-13 19:17:50.098037425 +0000 UTC m=+14.147132352" watchObservedRunningTime="2025-02-13 19:17:50.098339537 +0000 UTC m=+14.147434504" Feb 13 19:17:50.109309 containerd[1487]: time="2025-02-13T19:17:50.109192971Z" level=info msg="CreateContainer within sandbox \"acc1e86f2213f1c3adf7a6814fa4f7dd8ad59fe350de57040959393132ed582c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b52329067d499f980f50fdb9fd9b10c3a4583c005edb0462bbc1c4079eece71e\"" Feb 13 19:17:50.112045 containerd[1487]: time="2025-02-13T19:17:50.110451744Z" level=info msg="StartContainer for \"b52329067d499f980f50fdb9fd9b10c3a4583c005edb0462bbc1c4079eece71e\"" Feb 13 19:17:50.166284 systemd[1]: Started cri-containerd-b52329067d499f980f50fdb9fd9b10c3a4583c005edb0462bbc1c4079eece71e.scope - libcontainer container b52329067d499f980f50fdb9fd9b10c3a4583c005edb0462bbc1c4079eece71e. Feb 13 19:17:50.193794 containerd[1487]: time="2025-02-13T19:17:50.193681869Z" level=info msg="StartContainer for \"b52329067d499f980f50fdb9fd9b10c3a4583c005edb0462bbc1c4079eece71e\" returns successfully" Feb 13 19:17:50.207955 systemd[1]: cri-containerd-b52329067d499f980f50fdb9fd9b10c3a4583c005edb0462bbc1c4079eece71e.scope: Deactivated successfully. 
Feb 13 19:17:50.252809 containerd[1487]: time="2025-02-13T19:17:50.252752065Z" level=info msg="shim disconnected" id=b52329067d499f980f50fdb9fd9b10c3a4583c005edb0462bbc1c4079eece71e namespace=k8s.io Feb 13 19:17:50.252809 containerd[1487]: time="2025-02-13T19:17:50.252805591Z" level=warning msg="cleaning up after shim disconnected" id=b52329067d499f980f50fdb9fd9b10c3a4583c005edb0462bbc1c4079eece71e namespace=k8s.io Feb 13 19:17:50.252809 containerd[1487]: time="2025-02-13T19:17:50.252815872Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:17:50.265588 containerd[1487]: time="2025-02-13T19:17:50.265537384Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:17:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:17:51.097851 containerd[1487]: time="2025-02-13T19:17:51.094666793Z" level=info msg="CreateContainer within sandbox \"acc1e86f2213f1c3adf7a6814fa4f7dd8ad59fe350de57040959393132ed582c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:17:51.127556 containerd[1487]: time="2025-02-13T19:17:51.127504161Z" level=info msg="CreateContainer within sandbox \"acc1e86f2213f1c3adf7a6814fa4f7dd8ad59fe350de57040959393132ed582c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a4e5c8fa462cc7026f70e584e588e46b387ba7379aea8185bb57f6872353ccbe\"" Feb 13 19:17:51.128485 containerd[1487]: time="2025-02-13T19:17:51.128456937Z" level=info msg="StartContainer for \"a4e5c8fa462cc7026f70e584e588e46b387ba7379aea8185bb57f6872353ccbe\"" Feb 13 19:17:51.165160 systemd[1]: Started cri-containerd-a4e5c8fa462cc7026f70e584e588e46b387ba7379aea8185bb57f6872353ccbe.scope - libcontainer container a4e5c8fa462cc7026f70e584e588e46b387ba7379aea8185bb57f6872353ccbe. Feb 13 19:17:51.186655 systemd[1]: cri-containerd-a4e5c8fa462cc7026f70e584e588e46b387ba7379aea8185bb57f6872353ccbe.scope: Deactivated successfully. Feb 13 19:17:51.190526 containerd[1487]: time="2025-02-13T19:17:51.190464622Z" level=info msg="StartContainer for \"a4e5c8fa462cc7026f70e584e588e46b387ba7379aea8185bb57f6872353ccbe\" returns successfully" Feb 13 19:17:51.211079 containerd[1487]: time="2025-02-13T19:17:51.210932496Z" level=info msg="shim disconnected" id=a4e5c8fa462cc7026f70e584e588e46b387ba7379aea8185bb57f6872353ccbe namespace=k8s.io Feb 13 19:17:51.211283 containerd[1487]: time="2025-02-13T19:17:51.211106794Z" level=warning msg="cleaning up after shim disconnected" id=a4e5c8fa462cc7026f70e584e588e46b387ba7379aea8185bb57f6872353ccbe namespace=k8s.io Feb 13 19:17:51.211283 containerd[1487]: time="2025-02-13T19:17:51.211119075Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:17:51.351689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4e5c8fa462cc7026f70e584e588e46b387ba7379aea8185bb57f6872353ccbe-rootfs.mount: Deactivated successfully. 
Feb 13 19:17:52.106144 containerd[1487]: time="2025-02-13T19:17:52.106072292Z" level=info msg="CreateContainer within sandbox \"acc1e86f2213f1c3adf7a6814fa4f7dd8ad59fe350de57040959393132ed582c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:17:52.124380 containerd[1487]: time="2025-02-13T19:17:52.124325258Z" level=info msg="CreateContainer within sandbox \"acc1e86f2213f1c3adf7a6814fa4f7dd8ad59fe350de57040959393132ed582c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b35547256ccf501a837b500b19347ce21ae901e48b9959f3535a99f9db6b5656\"" Feb 13 19:17:52.124867 containerd[1487]: time="2025-02-13T19:17:52.124843308Z" level=info msg="StartContainer for \"b35547256ccf501a837b500b19347ce21ae901e48b9959f3535a99f9db6b5656\"" Feb 13 19:17:52.161221 systemd[1]: Started cri-containerd-b35547256ccf501a837b500b19347ce21ae901e48b9959f3535a99f9db6b5656.scope - libcontainer container b35547256ccf501a837b500b19347ce21ae901e48b9959f3535a99f9db6b5656. Feb 13 19:17:52.186112 containerd[1487]: time="2025-02-13T19:17:52.186070191Z" level=info msg="StartContainer for \"b35547256ccf501a837b500b19347ce21ae901e48b9959f3535a99f9db6b5656\" returns successfully" Feb 13 19:17:52.311493 kubelet[2573]: I0213 19:17:52.311452 2573 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 19:17:52.347325 systemd[1]: Created slice kubepods-burstable-poda0713a8b_b665_4cfa_b187_2cdee1c9c296.slice - libcontainer container kubepods-burstable-poda0713a8b_b665_4cfa_b187_2cdee1c9c296.slice. Feb 13 19:17:52.356828 systemd[1]: Created slice kubepods-burstable-podf2e5ac95_bb09_401d_897b_1d046ef5b6ea.slice - libcontainer container kubepods-burstable-podf2e5ac95_bb09_401d_897b_1d046ef5b6ea.slice. Feb 13 19:17:52.528200 kubelet[2573]: I0213 19:17:52.528157 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0713a8b-b665-4cfa-b187-2cdee1c9c296-config-volume\") pod \"coredns-6f6b679f8f-8m8xx\" (UID: \"a0713a8b-b665-4cfa-b187-2cdee1c9c296\") " pod="kube-system/coredns-6f6b679f8f-8m8xx" Feb 13 19:17:52.528342 kubelet[2573]: I0213 19:17:52.528205 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4nxt\" (UniqueName: \"kubernetes.io/projected/a0713a8b-b665-4cfa-b187-2cdee1c9c296-kube-api-access-d4nxt\") pod \"coredns-6f6b679f8f-8m8xx\" (UID: \"a0713a8b-b665-4cfa-b187-2cdee1c9c296\") " pod="kube-system/coredns-6f6b679f8f-8m8xx" Feb 13 19:17:52.528342 kubelet[2573]: I0213 19:17:52.528233 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f95pd\" (UniqueName: \"kubernetes.io/projected/f2e5ac95-bb09-401d-897b-1d046ef5b6ea-kube-api-access-f95pd\") pod \"coredns-6f6b679f8f-42dxv\" (UID: \"f2e5ac95-bb09-401d-897b-1d046ef5b6ea\") " pod="kube-system/coredns-6f6b679f8f-42dxv" Feb 13 19:17:52.528342 kubelet[2573]: I0213 19:17:52.528270 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2e5ac95-bb09-401d-897b-1d046ef5b6ea-config-volume\") pod \"coredns-6f6b679f8f-42dxv\" (UID: \"f2e5ac95-bb09-401d-897b-1d046ef5b6ea\") " pod="kube-system/coredns-6f6b679f8f-42dxv" Feb 13 19:17:52.656211 containerd[1487]: time="2025-02-13T19:17:52.656119103Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-8m8xx,Uid:a0713a8b-b665-4cfa-b187-2cdee1c9c296,Namespace:kube-system,Attempt:0,}" Feb 13 19:17:52.661429 containerd[1487]: time="2025-02-13T19:17:52.660822078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-42dxv,Uid:f2e5ac95-bb09-401d-897b-1d046ef5b6ea,Namespace:kube-system,Attempt:0,}" Feb 13 19:17:53.137179 kubelet[2573]: I0213 19:17:53.137108 2573 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xcsnw" podStartSLOduration=5.543986962 podStartE2EDuration="12.137082446s" podCreationTimestamp="2025-02-13 19:17:41 +0000 UTC" firstStartedPulling="2025-02-13 19:17:41.705466587 +0000 UTC m=+5.754561514" lastFinishedPulling="2025-02-13 19:17:48.298562071 +0000 UTC m=+12.347656998" observedRunningTime="2025-02-13 19:17:53.132120028 +0000 UTC m=+17.181214955" watchObservedRunningTime="2025-02-13 19:17:53.137082446 +0000 UTC m=+17.186177333" Feb 13 19:17:54.432874 systemd-networkd[1402]: cilium_host: Link UP Feb 13 19:17:54.432984 systemd-networkd[1402]: cilium_net: Link UP Feb 13 19:17:54.433151 systemd-networkd[1402]: cilium_net: Gained carrier Feb 13 19:17:54.433274 systemd-networkd[1402]: cilium_host: Gained carrier Feb 13 19:17:54.517308 systemd-networkd[1402]: cilium_vxlan: Link UP Feb 13 19:17:54.517314 systemd-networkd[1402]: cilium_vxlan: Gained carrier Feb 13 19:17:54.854033 kernel: NET: Registered PF_ALG protocol family Feb 13 19:17:54.914181 systemd-networkd[1402]: cilium_net: Gained IPv6LL Feb 13 19:17:55.210118 systemd-networkd[1402]: cilium_host: Gained IPv6LL Feb 13 19:17:55.515105 systemd-networkd[1402]: lxc_health: Link UP Feb 13 19:17:55.519146 systemd-networkd[1402]: lxc_health: Gained carrier Feb 13 19:17:55.830221 systemd-networkd[1402]: lxc882f37b0d493: Link UP Feb 13 19:17:55.847021 kernel: eth0: renamed from tmpd4157 Feb 13 19:17:55.867059 kernel: eth0: renamed from tmpef08a Feb 13 19:17:55.871722 systemd-networkd[1402]: lxc882f37b0d493: Gained carrier Feb 13 19:17:55.872420 systemd-networkd[1402]: lxc06816813230f: Link UP Feb 13 19:17:55.874818 systemd-networkd[1402]: lxc06816813230f: Gained carrier Feb 13 19:17:56.426198 systemd-networkd[1402]: cilium_vxlan: Gained IPv6LL Feb 13 19:17:57.322496 systemd-networkd[1402]: lxc_health: Gained IPv6LL Feb 13 19:17:57.770156 systemd-networkd[1402]: lxc882f37b0d493: Gained IPv6LL Feb 13 19:17:57.899117 systemd-networkd[1402]: lxc06816813230f: Gained IPv6LL Feb 13 19:17:59.375769 containerd[1487]: time="2025-02-13T19:17:59.375683575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:17:59.375769 containerd[1487]: time="2025-02-13T19:17:59.375742220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:17:59.375769 containerd[1487]: time="2025-02-13T19:17:59.375754340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:17:59.376163 containerd[1487]: time="2025-02-13T19:17:59.375899471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:17:59.381655 containerd[1487]: time="2025-02-13T19:17:59.380512801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:17:59.381655 containerd[1487]: time="2025-02-13T19:17:59.381363102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:17:59.381655 containerd[1487]: time="2025-02-13T19:17:59.381380423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:17:59.381655 containerd[1487]: time="2025-02-13T19:17:59.381467830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:17:59.405153 systemd[1]: Started cri-containerd-ef08a196d73bc4a80d1a7ed750a39228432569bd64a0df0fd220a0a3741716c9.scope - libcontainer container ef08a196d73bc4a80d1a7ed750a39228432569bd64a0df0fd220a0a3741716c9. Feb 13 19:17:59.408565 systemd[1]: Started cri-containerd-d415719309d462a40e6698fa6ff027290078994427ec2c41a10856380f0fa7c6.scope - libcontainer container d415719309d462a40e6698fa6ff027290078994427ec2c41a10856380f0fa7c6. Feb 13 19:17:59.417049 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:17:59.419975 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:17:59.435253 containerd[1487]: time="2025-02-13T19:17:59.435170075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8m8xx,Uid:a0713a8b-b665-4cfa-b187-2cdee1c9c296,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef08a196d73bc4a80d1a7ed750a39228432569bd64a0df0fd220a0a3741716c9\"" Feb 13 19:17:59.437258 containerd[1487]: time="2025-02-13T19:17:59.437231343Z" level=info msg="CreateContainer within sandbox \"ef08a196d73bc4a80d1a7ed750a39228432569bd64a0df0fd220a0a3741716c9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:17:59.441304 containerd[1487]: time="2025-02-13T19:17:59.441219949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-42dxv,Uid:f2e5ac95-bb09-401d-897b-1d046ef5b6ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"d415719309d462a40e6698fa6ff027290078994427ec2c41a10856380f0fa7c6\"" Feb 13 19:17:59.445158 containerd[1487]: time="2025-02-13T19:17:59.445127508Z" level=info msg="CreateContainer within sandbox \"d415719309d462a40e6698fa6ff027290078994427ec2c41a10856380f0fa7c6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:17:59.451292 containerd[1487]: time="2025-02-13T19:17:59.451159340Z" level=info msg="CreateContainer within sandbox \"ef08a196d73bc4a80d1a7ed750a39228432569bd64a0df0fd220a0a3741716c9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b838ce2605b70595831150653105d9d6f8388e9eb2052b5d056c60f04a6364dc\"" Feb 13 19:17:59.451846 containerd[1487]: time="2025-02-13T19:17:59.451818628Z" level=info msg="StartContainer for \"b838ce2605b70595831150653105d9d6f8388e9eb2052b5d056c60f04a6364dc\"" Feb 13 19:17:59.463381 containerd[1487]: time="2025-02-13T19:17:59.463266727Z" level=info msg="CreateContainer within sandbox \"d415719309d462a40e6698fa6ff027290078994427ec2c41a10856380f0fa7c6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"64d837f5f5f54289d7b6b94354492414a62def059078ef19ebe8cdfc9d61fba1\"" Feb 13 19:17:59.463842 containerd[1487]: time="2025-02-13T19:17:59.463804566Z" level=info msg="StartContainer for 
\"64d837f5f5f54289d7b6b94354492414a62def059078ef19ebe8cdfc9d61fba1\"" Feb 13 19:17:59.482628 systemd[1]: Started cri-containerd-b838ce2605b70595831150653105d9d6f8388e9eb2052b5d056c60f04a6364dc.scope - libcontainer container b838ce2605b70595831150653105d9d6f8388e9eb2052b5d056c60f04a6364dc. Feb 13 19:17:59.499182 systemd[1]: Started cri-containerd-64d837f5f5f54289d7b6b94354492414a62def059078ef19ebe8cdfc9d61fba1.scope - libcontainer container 64d837f5f5f54289d7b6b94354492414a62def059078ef19ebe8cdfc9d61fba1. Feb 13 19:17:59.518607 containerd[1487]: time="2025-02-13T19:17:59.518557287Z" level=info msg="StartContainer for \"b838ce2605b70595831150653105d9d6f8388e9eb2052b5d056c60f04a6364dc\" returns successfully" Feb 13 19:17:59.533203 containerd[1487]: time="2025-02-13T19:17:59.533150812Z" level=info msg="StartContainer for \"64d837f5f5f54289d7b6b94354492414a62def059078ef19ebe8cdfc9d61fba1\" returns successfully" Feb 13 19:18:00.129000 kubelet[2573]: I0213 19:18:00.128923 2573 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-42dxv" podStartSLOduration=19.128906966 podStartE2EDuration="19.128906966s" podCreationTimestamp="2025-02-13 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:18:00.128427853 +0000 UTC m=+24.177522780" watchObservedRunningTime="2025-02-13 19:18:00.128906966 +0000 UTC m=+24.178001893" Feb 13 19:18:00.156334 kubelet[2573]: I0213 19:18:00.154797 2573 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-8m8xx" podStartSLOduration=19.154779428 podStartE2EDuration="19.154779428s" podCreationTimestamp="2025-02-13 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:18:00.13912115 +0000 UTC m=+24.188216077" watchObservedRunningTime="2025-02-13 19:18:00.154779428 +0000 UTC m=+24.203874355" Feb 13 19:18:03.221958 systemd[1]: Started sshd@7-10.0.0.108:22-10.0.0.1:49280.service - OpenSSH per-connection server daemon (10.0.0.1:49280). Feb 13 19:18:03.275942 sshd[3987]: Accepted publickey for core from 10.0.0.1 port 49280 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:18:03.277487 sshd-session[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:03.282051 systemd-logind[1464]: New session 8 of user core. Feb 13 19:18:03.296207 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:18:03.444933 sshd[3989]: Connection closed by 10.0.0.1 port 49280 Feb 13 19:18:03.445693 sshd-session[3987]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:03.449151 systemd[1]: sshd@7-10.0.0.108:22-10.0.0.1:49280.service: Deactivated successfully. Feb 13 19:18:03.450664 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:18:03.452265 systemd-logind[1464]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:18:03.455204 systemd-logind[1464]: Removed session 8. Feb 13 19:18:08.459403 systemd[1]: Started sshd@8-10.0.0.108:22-10.0.0.1:49286.service - OpenSSH per-connection server daemon (10.0.0.1:49286). 
Feb 13 19:18:08.505369 sshd[4006]: Accepted publickey for core from 10.0.0.1 port 49286 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:18:08.506729 sshd-session[4006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:08.516961 systemd-logind[1464]: New session 9 of user core. Feb 13 19:18:08.525660 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:18:08.651702 sshd[4008]: Connection closed by 10.0.0.1 port 49286 Feb 13 19:18:08.652306 sshd-session[4006]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:08.655286 systemd[1]: sshd@8-10.0.0.108:22-10.0.0.1:49286.service: Deactivated successfully. Feb 13 19:18:08.656889 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:18:08.661467 systemd-logind[1464]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:18:08.662625 systemd-logind[1464]: Removed session 9. Feb 13 19:18:13.663410 systemd[1]: Started sshd@9-10.0.0.108:22-10.0.0.1:43538.service - OpenSSH per-connection server daemon (10.0.0.1:43538). Feb 13 19:18:13.707949 sshd[4025]: Accepted publickey for core from 10.0.0.1 port 43538 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:18:13.709628 sshd-session[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:13.713698 systemd-logind[1464]: New session 10 of user core. Feb 13 19:18:13.728206 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:18:13.843295 sshd[4027]: Connection closed by 10.0.0.1 port 43538 Feb 13 19:18:13.844190 sshd-session[4025]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:13.847417 systemd[1]: sshd@9-10.0.0.108:22-10.0.0.1:43538.service: Deactivated successfully. Feb 13 19:18:13.849205 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:18:13.849824 systemd-logind[1464]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:18:13.850584 systemd-logind[1464]: Removed session 10. Feb 13 19:18:18.858531 systemd[1]: Started sshd@10-10.0.0.108:22-10.0.0.1:43540.service - OpenSSH per-connection server daemon (10.0.0.1:43540). Feb 13 19:18:18.911358 sshd[4041]: Accepted publickey for core from 10.0.0.1 port 43540 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:18:18.912694 sshd-session[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:18.920802 systemd-logind[1464]: New session 11 of user core. Feb 13 19:18:18.932199 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:18:19.048887 sshd[4043]: Connection closed by 10.0.0.1 port 43540 Feb 13 19:18:19.048646 sshd-session[4041]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:19.063297 systemd[1]: sshd@10-10.0.0.108:22-10.0.0.1:43540.service: Deactivated successfully. Feb 13 19:18:19.065933 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:18:19.067845 systemd-logind[1464]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:18:19.077332 systemd[1]: Started sshd@11-10.0.0.108:22-10.0.0.1:43552.service - OpenSSH per-connection server daemon (10.0.0.1:43552). Feb 13 19:18:19.078735 systemd-logind[1464]: Removed session 11. 
Feb 13 19:18:19.119915 sshd[4056]: Accepted publickey for core from 10.0.0.1 port 43552 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:18:19.121769 sshd-session[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:19.126022 systemd-logind[1464]: New session 12 of user core. Feb 13 19:18:19.132247 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:18:19.306000 sshd[4059]: Connection closed by 10.0.0.1 port 43552 Feb 13 19:18:19.309955 sshd-session[4056]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:19.324119 systemd[1]: sshd@11-10.0.0.108:22-10.0.0.1:43552.service: Deactivated successfully. Feb 13 19:18:19.328629 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:18:19.329660 systemd-logind[1464]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:18:19.338545 systemd[1]: Started sshd@12-10.0.0.108:22-10.0.0.1:43556.service - OpenSSH per-connection server daemon (10.0.0.1:43556). Feb 13 19:18:19.340375 systemd-logind[1464]: Removed session 12. Feb 13 19:18:19.384849 sshd[4070]: Accepted publickey for core from 10.0.0.1 port 43556 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:18:19.386094 sshd-session[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:19.390067 systemd-logind[1464]: New session 13 of user core. Feb 13 19:18:19.401170 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:18:19.516398 sshd[4073]: Connection closed by 10.0.0.1 port 43556 Feb 13 19:18:19.515820 sshd-session[4070]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:19.520413 systemd[1]: sshd@12-10.0.0.108:22-10.0.0.1:43556.service: Deactivated successfully. Feb 13 19:18:19.522147 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:18:19.524304 systemd-logind[1464]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:18:19.525470 systemd-logind[1464]: Removed session 13. Feb 13 19:18:24.531228 systemd[1]: Started sshd@13-10.0.0.108:22-10.0.0.1:34444.service - OpenSSH per-connection server daemon (10.0.0.1:34444). Feb 13 19:18:24.584246 sshd[4088]: Accepted publickey for core from 10.0.0.1 port 34444 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:18:24.585562 sshd-session[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:24.591751 systemd-logind[1464]: New session 14 of user core. Feb 13 19:18:24.601159 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:18:24.723281 sshd[4090]: Connection closed by 10.0.0.1 port 34444 Feb 13 19:18:24.723805 sshd-session[4088]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:24.726627 systemd-logind[1464]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:18:24.726869 systemd[1]: sshd@13-10.0.0.108:22-10.0.0.1:34444.service: Deactivated successfully. Feb 13 19:18:24.729555 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:18:24.731368 systemd-logind[1464]: Removed session 14. Feb 13 19:18:29.750374 systemd[1]: Started sshd@14-10.0.0.108:22-10.0.0.1:34458.service - OpenSSH per-connection server daemon (10.0.0.1:34458). 
Feb 13 19:18:29.796023 sshd[4103]: Accepted publickey for core from 10.0.0.1 port 34458 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:18:29.796486 sshd-session[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:29.801259 systemd-logind[1464]: New session 15 of user core. Feb 13 19:18:29.810161 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:18:29.946153 sshd[4105]: Connection closed by 10.0.0.1 port 34458 Feb 13 19:18:29.946573 sshd-session[4103]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:29.954967 systemd[1]: sshd@14-10.0.0.108:22-10.0.0.1:34458.service: Deactivated successfully. Feb 13 19:18:29.957473 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:18:29.958744 systemd-logind[1464]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:18:29.961941 systemd[1]: Started sshd@15-10.0.0.108:22-10.0.0.1:34468.service - OpenSSH per-connection server daemon (10.0.0.1:34468). Feb 13 19:18:29.964770 systemd-logind[1464]: Removed session 15. Feb 13 19:18:30.011204 sshd[4117]: Accepted publickey for core from 10.0.0.1 port 34468 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:18:30.012383 sshd-session[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:30.018817 systemd-logind[1464]: New session 16 of user core. Feb 13 19:18:30.029226 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:18:30.275107 sshd[4120]: Connection closed by 10.0.0.1 port 34468 Feb 13 19:18:30.275512 sshd-session[4117]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:30.290256 systemd[1]: sshd@15-10.0.0.108:22-10.0.0.1:34468.service: Deactivated successfully. Feb 13 19:18:30.291961 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:18:30.293218 systemd-logind[1464]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:18:30.294504 systemd[1]: Started sshd@16-10.0.0.108:22-10.0.0.1:34470.service - OpenSSH per-connection server daemon (10.0.0.1:34470). Feb 13 19:18:30.295340 systemd-logind[1464]: Removed session 16. Feb 13 19:18:30.341335 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 34470 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:18:30.342618 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:30.346776 systemd-logind[1464]: New session 17 of user core. Feb 13 19:18:30.356181 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:18:31.810026 sshd[4133]: Connection closed by 10.0.0.1 port 34470 Feb 13 19:18:31.810394 sshd-session[4130]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:31.823047 systemd[1]: sshd@16-10.0.0.108:22-10.0.0.1:34470.service: Deactivated successfully. Feb 13 19:18:31.828606 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:18:31.835729 systemd-logind[1464]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:18:31.844445 systemd[1]: Started sshd@17-10.0.0.108:22-10.0.0.1:34478.service - OpenSSH per-connection server daemon (10.0.0.1:34478). Feb 13 19:18:31.848303 systemd-logind[1464]: Removed session 17. 
Feb 13 19:18:31.889605 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 34478 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:18:31.891123 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:31.895240 systemd-logind[1464]: New session 18 of user core. Feb 13 19:18:31.904192 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:18:32.136568 sshd[4157]: Connection closed by 10.0.0.1 port 34478 Feb 13 19:18:32.136405 sshd-session[4154]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:32.149397 systemd[1]: sshd@17-10.0.0.108:22-10.0.0.1:34478.service: Deactivated successfully. Feb 13 19:18:32.151345 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:18:32.153579 systemd-logind[1464]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:18:32.163323 systemd[1]: Started sshd@18-10.0.0.108:22-10.0.0.1:34482.service - OpenSSH per-connection server daemon (10.0.0.1:34482). Feb 13 19:18:32.164524 systemd-logind[1464]: Removed session 18. Feb 13 19:18:32.205074 sshd[4168]: Accepted publickey for core from 10.0.0.1 port 34482 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:18:32.206463 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:32.210505 systemd-logind[1464]: New session 19 of user core. Feb 13 19:18:32.218186 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:18:32.329833 sshd[4171]: Connection closed by 10.0.0.1 port 34482 Feb 13 19:18:32.330352 sshd-session[4168]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:32.334248 systemd[1]: sshd@18-10.0.0.108:22-10.0.0.1:34482.service: Deactivated successfully. Feb 13 19:18:32.335956 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:18:32.337435 systemd-logind[1464]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:18:32.338196 systemd-logind[1464]: Removed session 19. Feb 13 19:18:37.347574 systemd[1]: Started sshd@19-10.0.0.108:22-10.0.0.1:38994.service - OpenSSH per-connection server daemon (10.0.0.1:38994). Feb 13 19:18:37.388923 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 38994 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:18:37.390129 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:37.393363 systemd-logind[1464]: New session 20 of user core. Feb 13 19:18:37.402134 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:18:37.506861 sshd[4191]: Connection closed by 10.0.0.1 port 38994 Feb 13 19:18:37.507227 sshd-session[4189]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:37.510752 systemd[1]: sshd@19-10.0.0.108:22-10.0.0.1:38994.service: Deactivated successfully. Feb 13 19:18:37.513237 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:18:37.515127 systemd-logind[1464]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:18:37.515922 systemd-logind[1464]: Removed session 20. Feb 13 19:18:42.520396 systemd[1]: Started sshd@20-10.0.0.108:22-10.0.0.1:56106.service - OpenSSH per-connection server daemon (10.0.0.1:56106). 
Feb 13 19:18:42.566779 sshd[4207]: Accepted publickey for core from 10.0.0.1 port 56106 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:18:42.567342 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:42.573980 systemd-logind[1464]: New session 21 of user core. Feb 13 19:18:42.579157 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:18:42.710779 sshd[4209]: Connection closed by 10.0.0.1 port 56106 Feb 13 19:18:42.711132 sshd-session[4207]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:42.714522 systemd[1]: sshd@20-10.0.0.108:22-10.0.0.1:56106.service: Deactivated successfully. Feb 13 19:18:42.717481 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:18:42.724120 systemd-logind[1464]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:18:42.725053 systemd-logind[1464]: Removed session 21. Feb 13 19:18:47.724848 systemd[1]: Started sshd@21-10.0.0.108:22-10.0.0.1:56116.service - OpenSSH per-connection server daemon (10.0.0.1:56116). Feb 13 19:18:47.769430 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 56116 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:18:47.770669 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:47.775671 systemd-logind[1464]: New session 22 of user core. Feb 13 19:18:47.788177 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:18:47.902722 sshd[4224]: Connection closed by 10.0.0.1 port 56116 Feb 13 19:18:47.903284 sshd-session[4222]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:47.906764 systemd[1]: sshd@21-10.0.0.108:22-10.0.0.1:56116.service: Deactivated successfully. Feb 13 19:18:47.910683 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:18:47.911408 systemd-logind[1464]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:18:47.912394 systemd-logind[1464]: Removed session 22. Feb 13 19:18:52.914388 systemd[1]: Started sshd@22-10.0.0.108:22-10.0.0.1:40062.service - OpenSSH per-connection server daemon (10.0.0.1:40062). Feb 13 19:18:52.957329 sshd[4237]: Accepted publickey for core from 10.0.0.1 port 40062 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:18:52.958718 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:52.963057 systemd-logind[1464]: New session 23 of user core. Feb 13 19:18:52.970178 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 19:18:53.081916 sshd[4239]: Connection closed by 10.0.0.1 port 40062 Feb 13 19:18:53.082438 sshd-session[4237]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:53.097230 systemd[1]: sshd@22-10.0.0.108:22-10.0.0.1:40062.service: Deactivated successfully. Feb 13 19:18:53.098741 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:18:53.100371 systemd-logind[1464]: Session 23 logged out. Waiting for processes to exit. Feb 13 19:18:53.110272 systemd[1]: Started sshd@23-10.0.0.108:22-10.0.0.1:40074.service - OpenSSH per-connection server daemon (10.0.0.1:40074). Feb 13 19:18:53.111768 systemd-logind[1464]: Removed session 23. 
Feb 13 19:18:53.156618 sshd[4252]: Accepted publickey for core from 10.0.0.1 port 40074 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:18:53.157064 sshd-session[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:53.161437 systemd-logind[1464]: New session 24 of user core. Feb 13 19:18:53.172163 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 19:18:55.219331 containerd[1487]: time="2025-02-13T19:18:55.218343508Z" level=info msg="StopContainer for \"d3ece183055bd3c5a47dc9f871061fe8dfb65ee2d79afefb3a3d2a766435d850\" with timeout 30 (s)" Feb 13 19:18:55.219331 containerd[1487]: time="2025-02-13T19:18:55.218851467Z" level=info msg="Stop container \"d3ece183055bd3c5a47dc9f871061fe8dfb65ee2d79afefb3a3d2a766435d850\" with signal terminated" Feb 13 19:18:55.233255 systemd[1]: cri-containerd-d3ece183055bd3c5a47dc9f871061fe8dfb65ee2d79afefb3a3d2a766435d850.scope: Deactivated successfully. Feb 13 19:18:55.247542 containerd[1487]: time="2025-02-13T19:18:55.247474890Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:18:55.248070 containerd[1487]: time="2025-02-13T19:18:55.248043288Z" level=info msg="StopContainer for \"b35547256ccf501a837b500b19347ce21ae901e48b9959f3535a99f9db6b5656\" with timeout 2 (s)" Feb 13 19:18:55.248427 containerd[1487]: time="2025-02-13T19:18:55.248404408Z" level=info msg="Stop container \"b35547256ccf501a837b500b19347ce21ae901e48b9959f3535a99f9db6b5656\" with signal terminated" Feb 13 19:18:55.256724 systemd-networkd[1402]: lxc_health: Link DOWN Feb 13 19:18:55.256735 systemd-networkd[1402]: lxc_health: Lost carrier Feb 13 19:18:55.261549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3ece183055bd3c5a47dc9f871061fe8dfb65ee2d79afefb3a3d2a766435d850-rootfs.mount: Deactivated successfully. Feb 13 19:18:55.269348 containerd[1487]: time="2025-02-13T19:18:55.269280046Z" level=info msg="shim disconnected" id=d3ece183055bd3c5a47dc9f871061fe8dfb65ee2d79afefb3a3d2a766435d850 namespace=k8s.io Feb 13 19:18:55.269348 containerd[1487]: time="2025-02-13T19:18:55.269336126Z" level=warning msg="cleaning up after shim disconnected" id=d3ece183055bd3c5a47dc9f871061fe8dfb65ee2d79afefb3a3d2a766435d850 namespace=k8s.io Feb 13 19:18:55.269348 containerd[1487]: time="2025-02-13T19:18:55.269345246Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:18:55.273180 systemd[1]: cri-containerd-b35547256ccf501a837b500b19347ce21ae901e48b9959f3535a99f9db6b5656.scope: Deactivated successfully. Feb 13 19:18:55.273477 systemd[1]: cri-containerd-b35547256ccf501a837b500b19347ce21ae901e48b9959f3535a99f9db6b5656.scope: Consumed 6.590s CPU time, 125M memory peak, 148K read from disk, 12.9M written to disk. Feb 13 19:18:55.312692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b35547256ccf501a837b500b19347ce21ae901e48b9959f3535a99f9db6b5656-rootfs.mount: Deactivated successfully. 
Feb 13 19:18:55.322155 containerd[1487]: time="2025-02-13T19:18:55.322098061Z" level=info msg="shim disconnected" id=b35547256ccf501a837b500b19347ce21ae901e48b9959f3535a99f9db6b5656 namespace=k8s.io Feb 13 19:18:55.322564 containerd[1487]: time="2025-02-13T19:18:55.322398020Z" level=warning msg="cleaning up after shim disconnected" id=b35547256ccf501a837b500b19347ce21ae901e48b9959f3535a99f9db6b5656 namespace=k8s.io Feb 13 19:18:55.322564 containerd[1487]: time="2025-02-13T19:18:55.322417540Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:18:55.327107 containerd[1487]: time="2025-02-13T19:18:55.326948491Z" level=info msg="StopContainer for \"d3ece183055bd3c5a47dc9f871061fe8dfb65ee2d79afefb3a3d2a766435d850\" returns successfully" Feb 13 19:18:55.327836 containerd[1487]: time="2025-02-13T19:18:55.327805770Z" level=info msg="StopPodSandbox for \"c452b103969f15533065b064170560cdab2d19ad1d28062d35445193495ce6d8\"" Feb 13 19:18:55.335658 containerd[1487]: time="2025-02-13T19:18:55.335598034Z" level=info msg="Container to stop \"d3ece183055bd3c5a47dc9f871061fe8dfb65ee2d79afefb3a3d2a766435d850\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:18:55.337454 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c452b103969f15533065b064170560cdab2d19ad1d28062d35445193495ce6d8-shm.mount: Deactivated successfully. Feb 13 19:18:55.341575 containerd[1487]: time="2025-02-13T19:18:55.341532342Z" level=info msg="StopContainer for \"b35547256ccf501a837b500b19347ce21ae901e48b9959f3535a99f9db6b5656\" returns successfully" Feb 13 19:18:55.342190 containerd[1487]: time="2025-02-13T19:18:55.342161301Z" level=info msg="StopPodSandbox for \"acc1e86f2213f1c3adf7a6814fa4f7dd8ad59fe350de57040959393132ed582c\"" Feb 13 19:18:55.342252 containerd[1487]: time="2025-02-13T19:18:55.342215701Z" level=info msg="Container to stop \"e2e82370da06910e57ce505ccbe723507297b6936263936081577cd6850b671b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:18:55.342252 containerd[1487]: time="2025-02-13T19:18:55.342229901Z" level=info msg="Container to stop \"26b931c9661b6a4a89e150b09686ab2220d8a7389d3dfdd23d626cb39fe47865\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:18:55.342252 containerd[1487]: time="2025-02-13T19:18:55.342240061Z" level=info msg="Container to stop \"a4e5c8fa462cc7026f70e584e588e46b387ba7379aea8185bb57f6872353ccbe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:18:55.342252 containerd[1487]: time="2025-02-13T19:18:55.342248341Z" level=info msg="Container to stop \"b35547256ccf501a837b500b19347ce21ae901e48b9959f3535a99f9db6b5656\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:18:55.342377 containerd[1487]: time="2025-02-13T19:18:55.342257621Z" level=info msg="Container to stop \"b52329067d499f980f50fdb9fd9b10c3a4583c005edb0462bbc1c4079eece71e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:18:55.344337 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-acc1e86f2213f1c3adf7a6814fa4f7dd8ad59fe350de57040959393132ed582c-shm.mount: Deactivated successfully. Feb 13 19:18:55.345630 systemd[1]: cri-containerd-c452b103969f15533065b064170560cdab2d19ad1d28062d35445193495ce6d8.scope: Deactivated successfully. Feb 13 19:18:55.348928 systemd[1]: cri-containerd-acc1e86f2213f1c3adf7a6814fa4f7dd8ad59fe350de57040959393132ed582c.scope: Deactivated successfully. 
Feb 13 19:18:55.376763 containerd[1487]: time="2025-02-13T19:18:55.376694112Z" level=info msg="shim disconnected" id=c452b103969f15533065b064170560cdab2d19ad1d28062d35445193495ce6d8 namespace=k8s.io Feb 13 19:18:55.376763 containerd[1487]: time="2025-02-13T19:18:55.376750352Z" level=warning msg="cleaning up after shim disconnected" id=c452b103969f15533065b064170560cdab2d19ad1d28062d35445193495ce6d8 namespace=k8s.io Feb 13 19:18:55.376763 containerd[1487]: time="2025-02-13T19:18:55.376759552Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:18:55.377986 containerd[1487]: time="2025-02-13T19:18:55.377798550Z" level=info msg="shim disconnected" id=acc1e86f2213f1c3adf7a6814fa4f7dd8ad59fe350de57040959393132ed582c namespace=k8s.io Feb 13 19:18:55.377986 containerd[1487]: time="2025-02-13T19:18:55.377848350Z" level=warning msg="cleaning up after shim disconnected" id=acc1e86f2213f1c3adf7a6814fa4f7dd8ad59fe350de57040959393132ed582c namespace=k8s.io Feb 13 19:18:55.377986 containerd[1487]: time="2025-02-13T19:18:55.377857190Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:18:55.388538 containerd[1487]: time="2025-02-13T19:18:55.388478889Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:18:55Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:18:55.389789 containerd[1487]: time="2025-02-13T19:18:55.389747406Z" level=info msg="TearDown network for sandbox \"c452b103969f15533065b064170560cdab2d19ad1d28062d35445193495ce6d8\" successfully" Feb 13 19:18:55.389789 containerd[1487]: time="2025-02-13T19:18:55.389781566Z" level=info msg="StopPodSandbox for \"c452b103969f15533065b064170560cdab2d19ad1d28062d35445193495ce6d8\" returns successfully" Feb 13 19:18:55.389899 containerd[1487]: time="2025-02-13T19:18:55.389763486Z" level=info msg="TearDown network for sandbox \"acc1e86f2213f1c3adf7a6814fa4f7dd8ad59fe350de57040959393132ed582c\" successfully" Feb 13 19:18:55.389899 containerd[1487]: time="2025-02-13T19:18:55.389867286Z" level=info msg="StopPodSandbox for \"acc1e86f2213f1c3adf7a6814fa4f7dd8ad59fe350de57040959393132ed582c\" returns successfully" Feb 13 19:18:55.511382 kubelet[2573]: I0213 19:18:55.511330 2573 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-cni-path\") pod \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\" (UID: \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\") " Feb 13 19:18:55.511382 kubelet[2573]: I0213 19:18:55.511377 2573 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-xtables-lock\") pod \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\" (UID: \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\") " Feb 13 19:18:55.511799 kubelet[2573]: I0213 19:18:55.511403 2573 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-hubble-tls\") pod \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\" (UID: \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\") " Feb 13 19:18:55.511799 kubelet[2573]: I0213 19:18:55.511428 2573 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg7j5\" (UniqueName: \"kubernetes.io/projected/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-kube-api-access-bg7j5\") pod 
\"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\" (UID: \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\") " Feb 13 19:18:55.511799 kubelet[2573]: I0213 19:18:55.511448 2573 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-etc-cni-netd\") pod \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\" (UID: \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\") " Feb 13 19:18:55.511799 kubelet[2573]: I0213 19:18:55.511464 2573 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-cilium-cgroup\") pod \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\" (UID: \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\") " Feb 13 19:18:55.511799 kubelet[2573]: I0213 19:18:55.511478 2573 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-host-proc-sys-kernel\") pod \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\" (UID: \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\") " Feb 13 19:18:55.511799 kubelet[2573]: I0213 19:18:55.511495 2573 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0179af5-04ad-4b1f-9791-c02326020bf8-cilium-config-path\") pod \"c0179af5-04ad-4b1f-9791-c02326020bf8\" (UID: \"c0179af5-04ad-4b1f-9791-c02326020bf8\") " Feb 13 19:18:55.511939 kubelet[2573]: I0213 19:18:55.511521 2573 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29v2h\" (UniqueName: \"kubernetes.io/projected/c0179af5-04ad-4b1f-9791-c02326020bf8-kube-api-access-29v2h\") pod \"c0179af5-04ad-4b1f-9791-c02326020bf8\" (UID: \"c0179af5-04ad-4b1f-9791-c02326020bf8\") " Feb 13 19:18:55.511939 kubelet[2573]: I0213 19:18:55.511540 2573 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-cilium-config-path\") pod \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\" (UID: \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\") " Feb 13 19:18:55.511939 kubelet[2573]: I0213 19:18:55.511557 2573 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-bpf-maps\") pod \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\" (UID: \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\") " Feb 13 19:18:55.511939 kubelet[2573]: I0213 19:18:55.511576 2573 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-clustermesh-secrets\") pod \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\" (UID: \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\") " Feb 13 19:18:55.511939 kubelet[2573]: I0213 19:18:55.511590 2573 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-hostproc\") pod \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\" (UID: \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\") " Feb 13 19:18:55.511939 kubelet[2573]: I0213 19:18:55.511603 2573 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-host-proc-sys-net\") pod 
\"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\" (UID: \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\") " Feb 13 19:18:55.512119 kubelet[2573]: I0213 19:18:55.511618 2573 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-cilium-run\") pod \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\" (UID: \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\") " Feb 13 19:18:55.512119 kubelet[2573]: I0213 19:18:55.511632 2573 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-lib-modules\") pod \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\" (UID: \"56d30a0c-c539-41dd-80c1-3dd9cb1a2008\") " Feb 13 19:18:55.515759 kubelet[2573]: I0213 19:18:55.514555 2573 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "56d30a0c-c539-41dd-80c1-3dd9cb1a2008" (UID: "56d30a0c-c539-41dd-80c1-3dd9cb1a2008"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:18:55.515759 kubelet[2573]: I0213 19:18:55.514612 2573 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "56d30a0c-c539-41dd-80c1-3dd9cb1a2008" (UID: "56d30a0c-c539-41dd-80c1-3dd9cb1a2008"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:18:55.516423 kubelet[2573]: I0213 19:18:55.516091 2573 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0179af5-04ad-4b1f-9791-c02326020bf8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c0179af5-04ad-4b1f-9791-c02326020bf8" (UID: "c0179af5-04ad-4b1f-9791-c02326020bf8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:18:55.516423 kubelet[2573]: I0213 19:18:55.516151 2573 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "56d30a0c-c539-41dd-80c1-3dd9cb1a2008" (UID: "56d30a0c-c539-41dd-80c1-3dd9cb1a2008"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:18:55.516423 kubelet[2573]: I0213 19:18:55.516168 2573 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "56d30a0c-c539-41dd-80c1-3dd9cb1a2008" (UID: "56d30a0c-c539-41dd-80c1-3dd9cb1a2008"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:18:55.516423 kubelet[2573]: I0213 19:18:55.516188 2573 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "56d30a0c-c539-41dd-80c1-3dd9cb1a2008" (UID: "56d30a0c-c539-41dd-80c1-3dd9cb1a2008"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:18:55.516423 kubelet[2573]: I0213 19:18:55.516214 2573 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-hostproc" (OuterVolumeSpecName: "hostproc") pod "56d30a0c-c539-41dd-80c1-3dd9cb1a2008" (UID: "56d30a0c-c539-41dd-80c1-3dd9cb1a2008"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:18:55.516588 kubelet[2573]: I0213 19:18:55.516228 2573 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "56d30a0c-c539-41dd-80c1-3dd9cb1a2008" (UID: "56d30a0c-c539-41dd-80c1-3dd9cb1a2008"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:18:55.516588 kubelet[2573]: I0213 19:18:55.516242 2573 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-cni-path" (OuterVolumeSpecName: "cni-path") pod "56d30a0c-c539-41dd-80c1-3dd9cb1a2008" (UID: "56d30a0c-c539-41dd-80c1-3dd9cb1a2008"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:18:55.516741 kubelet[2573]: I0213 19:18:55.516698 2573 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "56d30a0c-c539-41dd-80c1-3dd9cb1a2008" (UID: "56d30a0c-c539-41dd-80c1-3dd9cb1a2008"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 19:18:55.516774 kubelet[2573]: I0213 19:18:55.516745 2573 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "56d30a0c-c539-41dd-80c1-3dd9cb1a2008" (UID: "56d30a0c-c539-41dd-80c1-3dd9cb1a2008"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:18:55.516797 kubelet[2573]: I0213 19:18:55.516756 2573 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "56d30a0c-c539-41dd-80c1-3dd9cb1a2008" (UID: "56d30a0c-c539-41dd-80c1-3dd9cb1a2008"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:18:55.517116 kubelet[2573]: I0213 19:18:55.517078 2573 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-kube-api-access-bg7j5" (OuterVolumeSpecName: "kube-api-access-bg7j5") pod "56d30a0c-c539-41dd-80c1-3dd9cb1a2008" (UID: "56d30a0c-c539-41dd-80c1-3dd9cb1a2008"). InnerVolumeSpecName "kube-api-access-bg7j5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:18:55.518135 kubelet[2573]: I0213 19:18:55.518049 2573 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "56d30a0c-c539-41dd-80c1-3dd9cb1a2008" (UID: "56d30a0c-c539-41dd-80c1-3dd9cb1a2008"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:18:55.518652 kubelet[2573]: I0213 19:18:55.518616 2573 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "56d30a0c-c539-41dd-80c1-3dd9cb1a2008" (UID: "56d30a0c-c539-41dd-80c1-3dd9cb1a2008"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:18:55.519079 kubelet[2573]: I0213 19:18:55.518963 2573 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0179af5-04ad-4b1f-9791-c02326020bf8-kube-api-access-29v2h" (OuterVolumeSpecName: "kube-api-access-29v2h") pod "c0179af5-04ad-4b1f-9791-c02326020bf8" (UID: "c0179af5-04ad-4b1f-9791-c02326020bf8"). InnerVolumeSpecName "kube-api-access-29v2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:18:55.612385 kubelet[2573]: I0213 19:18:55.612345 2573 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 13 19:18:55.612695 kubelet[2573]: I0213 19:18:55.612557 2573 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 13 19:18:55.612695 kubelet[2573]: I0213 19:18:55.612574 2573 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 13 19:18:55.612695 kubelet[2573]: I0213 19:18:55.612584 2573 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 13 19:18:55.612695 kubelet[2573]: I0213 19:18:55.612596 2573 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 13 19:18:55.612695 kubelet[2573]: I0213 19:18:55.612604 2573 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:18:55.612695 kubelet[2573]: I0213 19:18:55.612612 2573 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 13 19:18:55.612695 kubelet[2573]: I0213 19:18:55.612620 2573 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bg7j5\" (UniqueName: \"kubernetes.io/projected/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-kube-api-access-bg7j5\") on node \"localhost\" DevicePath \"\"" Feb 13 19:18:55.612695 kubelet[2573]: I0213 19:18:55.612628 2573 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 13 19:18:55.612890 kubelet[2573]: I0213 19:18:55.612635 2573 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 13 19:18:55.612890 kubelet[2573]: I0213 19:18:55.612643 2573 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 13 19:18:55.612890 kubelet[2573]: I0213 19:18:55.612651 2573 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 13 19:18:55.612890 kubelet[2573]: I0213 19:18:55.612658 2573 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0179af5-04ad-4b1f-9791-c02326020bf8-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:18:55.612890 kubelet[2573]: I0213 19:18:55.612665 2573 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-29v2h\" (UniqueName: \"kubernetes.io/projected/c0179af5-04ad-4b1f-9791-c02326020bf8-kube-api-access-29v2h\") on node \"localhost\" DevicePath \"\"" Feb 13 19:18:55.612890 kubelet[2573]: I0213 19:18:55.612673 2573 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:18:55.612890 kubelet[2573]: I0213 19:18:55.612680 2573 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56d30a0c-c539-41dd-80c1-3dd9cb1a2008-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 13 19:18:56.038317 systemd[1]: Removed slice kubepods-besteffort-podc0179af5_04ad_4b1f_9791_c02326020bf8.slice - libcontainer container kubepods-besteffort-podc0179af5_04ad_4b1f_9791_c02326020bf8.slice. Feb 13 19:18:56.040637 systemd[1]: Removed slice kubepods-burstable-pod56d30a0c_c539_41dd_80c1_3dd9cb1a2008.slice - libcontainer container kubepods-burstable-pod56d30a0c_c539_41dd_80c1_3dd9cb1a2008.slice. Feb 13 19:18:56.040837 systemd[1]: kubepods-burstable-pod56d30a0c_c539_41dd_80c1_3dd9cb1a2008.slice: Consumed 6.729s CPU time, 125.3M memory peak, 168K read from disk, 12.9M written to disk. Feb 13 19:18:56.093438 kubelet[2573]: E0213 19:18:56.093400 2573 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:18:56.222205 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c452b103969f15533065b064170560cdab2d19ad1d28062d35445193495ce6d8-rootfs.mount: Deactivated successfully. Feb 13 19:18:56.222317 systemd[1]: var-lib-kubelet-pods-c0179af5\x2d04ad\x2d4b1f\x2d9791\x2dc02326020bf8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d29v2h.mount: Deactivated successfully. Feb 13 19:18:56.222374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-acc1e86f2213f1c3adf7a6814fa4f7dd8ad59fe350de57040959393132ed582c-rootfs.mount: Deactivated successfully. Feb 13 19:18:56.222429 systemd[1]: var-lib-kubelet-pods-56d30a0c\x2dc539\x2d41dd\x2d80c1\x2d3dd9cb1a2008-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbg7j5.mount: Deactivated successfully. 
Feb 13 19:18:56.222481 systemd[1]: var-lib-kubelet-pods-56d30a0c\x2dc539\x2d41dd\x2d80c1\x2d3dd9cb1a2008-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 19:18:56.222552 systemd[1]: var-lib-kubelet-pods-56d30a0c\x2dc539\x2d41dd\x2d80c1\x2d3dd9cb1a2008-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 19:18:56.234927 kubelet[2573]: I0213 19:18:56.234907 2573 scope.go:117] "RemoveContainer" containerID="b35547256ccf501a837b500b19347ce21ae901e48b9959f3535a99f9db6b5656" Feb 13 19:18:56.238367 containerd[1487]: time="2025-02-13T19:18:56.238322218Z" level=info msg="RemoveContainer for \"b35547256ccf501a837b500b19347ce21ae901e48b9959f3535a99f9db6b5656\"" Feb 13 19:18:56.244802 containerd[1487]: time="2025-02-13T19:18:56.244765650Z" level=info msg="RemoveContainer for \"b35547256ccf501a837b500b19347ce21ae901e48b9959f3535a99f9db6b5656\" returns successfully" Feb 13 19:18:56.245815 kubelet[2573]: I0213 19:18:56.245777 2573 scope.go:117] "RemoveContainer" containerID="a4e5c8fa462cc7026f70e584e588e46b387ba7379aea8185bb57f6872353ccbe" Feb 13 19:18:56.247641 containerd[1487]: time="2025-02-13T19:18:56.247587966Z" level=info msg="RemoveContainer for \"a4e5c8fa462cc7026f70e584e588e46b387ba7379aea8185bb57f6872353ccbe\"" Feb 13 19:18:56.250222 containerd[1487]: time="2025-02-13T19:18:56.250184483Z" level=info msg="RemoveContainer for \"a4e5c8fa462cc7026f70e584e588e46b387ba7379aea8185bb57f6872353ccbe\" returns successfully" Feb 13 19:18:56.250379 kubelet[2573]: I0213 19:18:56.250335 2573 scope.go:117] "RemoveContainer" containerID="b52329067d499f980f50fdb9fd9b10c3a4583c005edb0462bbc1c4079eece71e" Feb 13 19:18:56.251256 containerd[1487]: time="2025-02-13T19:18:56.251224282Z" level=info msg="RemoveContainer for \"b52329067d499f980f50fdb9fd9b10c3a4583c005edb0462bbc1c4079eece71e\"" Feb 13 19:18:56.253658 containerd[1487]: time="2025-02-13T19:18:56.253614279Z" level=info msg="RemoveContainer for \"b52329067d499f980f50fdb9fd9b10c3a4583c005edb0462bbc1c4079eece71e\" returns successfully" Feb 13 19:18:56.253801 kubelet[2573]: I0213 19:18:56.253780 2573 scope.go:117] "RemoveContainer" containerID="26b931c9661b6a4a89e150b09686ab2220d8a7389d3dfdd23d626cb39fe47865" Feb 13 19:18:56.254934 containerd[1487]: time="2025-02-13T19:18:56.254908397Z" level=info msg="RemoveContainer for \"26b931c9661b6a4a89e150b09686ab2220d8a7389d3dfdd23d626cb39fe47865\"" Feb 13 19:18:56.258055 containerd[1487]: time="2025-02-13T19:18:56.258021953Z" level=info msg="RemoveContainer for \"26b931c9661b6a4a89e150b09686ab2220d8a7389d3dfdd23d626cb39fe47865\" returns successfully" Feb 13 19:18:56.259508 kubelet[2573]: I0213 19:18:56.259472 2573 scope.go:117] "RemoveContainer" containerID="e2e82370da06910e57ce505ccbe723507297b6936263936081577cd6850b671b" Feb 13 19:18:56.260656 containerd[1487]: time="2025-02-13T19:18:56.260621910Z" level=info msg="RemoveContainer for \"e2e82370da06910e57ce505ccbe723507297b6936263936081577cd6850b671b\"" Feb 13 19:18:56.263878 containerd[1487]: time="2025-02-13T19:18:56.263833906Z" level=info msg="RemoveContainer for \"e2e82370da06910e57ce505ccbe723507297b6936263936081577cd6850b671b\" returns successfully" Feb 13 19:18:56.264212 kubelet[2573]: I0213 19:18:56.264174 2573 scope.go:117] "RemoveContainer" containerID="b35547256ccf501a837b500b19347ce21ae901e48b9959f3535a99f9db6b5656" Feb 13 19:18:56.264839 containerd[1487]: time="2025-02-13T19:18:56.264480786Z" level=error msg="ContainerStatus for 
\"b35547256ccf501a837b500b19347ce21ae901e48b9959f3535a99f9db6b5656\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b35547256ccf501a837b500b19347ce21ae901e48b9959f3535a99f9db6b5656\": not found" Feb 13 19:18:56.271411 kubelet[2573]: E0213 19:18:56.271365 2573 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b35547256ccf501a837b500b19347ce21ae901e48b9959f3535a99f9db6b5656\": not found" containerID="b35547256ccf501a837b500b19347ce21ae901e48b9959f3535a99f9db6b5656" Feb 13 19:18:56.271523 kubelet[2573]: I0213 19:18:56.271424 2573 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b35547256ccf501a837b500b19347ce21ae901e48b9959f3535a99f9db6b5656"} err="failed to get container status \"b35547256ccf501a837b500b19347ce21ae901e48b9959f3535a99f9db6b5656\": rpc error: code = NotFound desc = an error occurred when try to find container \"b35547256ccf501a837b500b19347ce21ae901e48b9959f3535a99f9db6b5656\": not found" Feb 13 19:18:56.271607 kubelet[2573]: I0213 19:18:56.271527 2573 scope.go:117] "RemoveContainer" containerID="a4e5c8fa462cc7026f70e584e588e46b387ba7379aea8185bb57f6872353ccbe" Feb 13 19:18:56.272601 containerd[1487]: time="2025-02-13T19:18:56.272554616Z" level=error msg="ContainerStatus for \"a4e5c8fa462cc7026f70e584e588e46b387ba7379aea8185bb57f6872353ccbe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a4e5c8fa462cc7026f70e584e588e46b387ba7379aea8185bb57f6872353ccbe\": not found" Feb 13 19:18:56.272770 kubelet[2573]: E0213 19:18:56.272713 2573 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a4e5c8fa462cc7026f70e584e588e46b387ba7379aea8185bb57f6872353ccbe\": not found" containerID="a4e5c8fa462cc7026f70e584e588e46b387ba7379aea8185bb57f6872353ccbe" Feb 13 19:18:56.272770 kubelet[2573]: I0213 19:18:56.272740 2573 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a4e5c8fa462cc7026f70e584e588e46b387ba7379aea8185bb57f6872353ccbe"} err="failed to get container status \"a4e5c8fa462cc7026f70e584e588e46b387ba7379aea8185bb57f6872353ccbe\": rpc error: code = NotFound desc = an error occurred when try to find container \"a4e5c8fa462cc7026f70e584e588e46b387ba7379aea8185bb57f6872353ccbe\": not found" Feb 13 19:18:56.272770 kubelet[2573]: I0213 19:18:56.272757 2573 scope.go:117] "RemoveContainer" containerID="b52329067d499f980f50fdb9fd9b10c3a4583c005edb0462bbc1c4079eece71e" Feb 13 19:18:56.273023 containerd[1487]: time="2025-02-13T19:18:56.272898215Z" level=error msg="ContainerStatus for \"b52329067d499f980f50fdb9fd9b10c3a4583c005edb0462bbc1c4079eece71e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b52329067d499f980f50fdb9fd9b10c3a4583c005edb0462bbc1c4079eece71e\": not found" Feb 13 19:18:56.273091 kubelet[2573]: E0213 19:18:56.273034 2573 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b52329067d499f980f50fdb9fd9b10c3a4583c005edb0462bbc1c4079eece71e\": not found" containerID="b52329067d499f980f50fdb9fd9b10c3a4583c005edb0462bbc1c4079eece71e" Feb 13 19:18:56.273091 kubelet[2573]: I0213 19:18:56.273058 2573 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"b52329067d499f980f50fdb9fd9b10c3a4583c005edb0462bbc1c4079eece71e"} err="failed to get container status \"b52329067d499f980f50fdb9fd9b10c3a4583c005edb0462bbc1c4079eece71e\": rpc error: code = NotFound desc = an error occurred when try to find container \"b52329067d499f980f50fdb9fd9b10c3a4583c005edb0462bbc1c4079eece71e\": not found" Feb 13 19:18:56.273091 kubelet[2573]: I0213 19:18:56.273076 2573 scope.go:117] "RemoveContainer" containerID="26b931c9661b6a4a89e150b09686ab2220d8a7389d3dfdd23d626cb39fe47865" Feb 13 19:18:56.273380 containerd[1487]: time="2025-02-13T19:18:56.273308935Z" level=error msg="ContainerStatus for \"26b931c9661b6a4a89e150b09686ab2220d8a7389d3dfdd23d626cb39fe47865\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"26b931c9661b6a4a89e150b09686ab2220d8a7389d3dfdd23d626cb39fe47865\": not found" Feb 13 19:18:56.273444 kubelet[2573]: E0213 19:18:56.273418 2573 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"26b931c9661b6a4a89e150b09686ab2220d8a7389d3dfdd23d626cb39fe47865\": not found" containerID="26b931c9661b6a4a89e150b09686ab2220d8a7389d3dfdd23d626cb39fe47865" Feb 13 19:18:56.273514 kubelet[2573]: I0213 19:18:56.273437 2573 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"26b931c9661b6a4a89e150b09686ab2220d8a7389d3dfdd23d626cb39fe47865"} err="failed to get container status \"26b931c9661b6a4a89e150b09686ab2220d8a7389d3dfdd23d626cb39fe47865\": rpc error: code = NotFound desc = an error occurred when try to find container \"26b931c9661b6a4a89e150b09686ab2220d8a7389d3dfdd23d626cb39fe47865\": not found" Feb 13 19:18:56.273514 kubelet[2573]: I0213 19:18:56.273511 2573 scope.go:117] "RemoveContainer" containerID="e2e82370da06910e57ce505ccbe723507297b6936263936081577cd6850b671b" Feb 13 19:18:56.273758 containerd[1487]: time="2025-02-13T19:18:56.273729294Z" level=error msg="ContainerStatus for \"e2e82370da06910e57ce505ccbe723507297b6936263936081577cd6850b671b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e2e82370da06910e57ce505ccbe723507297b6936263936081577cd6850b671b\": not found" Feb 13 19:18:56.273840 kubelet[2573]: E0213 19:18:56.273823 2573 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e2e82370da06910e57ce505ccbe723507297b6936263936081577cd6850b671b\": not found" containerID="e2e82370da06910e57ce505ccbe723507297b6936263936081577cd6850b671b" Feb 13 19:18:56.273875 kubelet[2573]: I0213 19:18:56.273847 2573 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e2e82370da06910e57ce505ccbe723507297b6936263936081577cd6850b671b"} err="failed to get container status \"e2e82370da06910e57ce505ccbe723507297b6936263936081577cd6850b671b\": rpc error: code = NotFound desc = an error occurred when try to find container \"e2e82370da06910e57ce505ccbe723507297b6936263936081577cd6850b671b\": not found" Feb 13 19:18:56.273875 kubelet[2573]: I0213 19:18:56.273860 2573 scope.go:117] "RemoveContainer" containerID="d3ece183055bd3c5a47dc9f871061fe8dfb65ee2d79afefb3a3d2a766435d850" Feb 13 19:18:56.274930 containerd[1487]: time="2025-02-13T19:18:56.274903653Z" level=info msg="RemoveContainer for \"d3ece183055bd3c5a47dc9f871061fe8dfb65ee2d79afefb3a3d2a766435d850\"" Feb 13 
19:18:56.284602 containerd[1487]: time="2025-02-13T19:18:56.284559161Z" level=info msg="RemoveContainer for \"d3ece183055bd3c5a47dc9f871061fe8dfb65ee2d79afefb3a3d2a766435d850\" returns successfully" Feb 13 19:18:56.285001 kubelet[2573]: I0213 19:18:56.284962 2573 scope.go:117] "RemoveContainer" containerID="d3ece183055bd3c5a47dc9f871061fe8dfb65ee2d79afefb3a3d2a766435d850" Feb 13 19:18:56.285362 containerd[1487]: time="2025-02-13T19:18:56.285274560Z" level=error msg="ContainerStatus for \"d3ece183055bd3c5a47dc9f871061fe8dfb65ee2d79afefb3a3d2a766435d850\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d3ece183055bd3c5a47dc9f871061fe8dfb65ee2d79afefb3a3d2a766435d850\": not found" Feb 13 19:18:56.285462 kubelet[2573]: E0213 19:18:56.285439 2573 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d3ece183055bd3c5a47dc9f871061fe8dfb65ee2d79afefb3a3d2a766435d850\": not found" containerID="d3ece183055bd3c5a47dc9f871061fe8dfb65ee2d79afefb3a3d2a766435d850" Feb 13 19:18:56.285516 kubelet[2573]: I0213 19:18:56.285491 2573 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d3ece183055bd3c5a47dc9f871061fe8dfb65ee2d79afefb3a3d2a766435d850"} err="failed to get container status \"d3ece183055bd3c5a47dc9f871061fe8dfb65ee2d79afefb3a3d2a766435d850\": rpc error: code = NotFound desc = an error occurred when try to find container \"d3ece183055bd3c5a47dc9f871061fe8dfb65ee2d79afefb3a3d2a766435d850\": not found" Feb 13 19:18:57.173803 sshd[4255]: Connection closed by 10.0.0.1 port 40074 Feb 13 19:18:57.175224 sshd-session[4252]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:57.184405 systemd[1]: sshd@23-10.0.0.108:22-10.0.0.1:40074.service: Deactivated successfully. Feb 13 19:18:57.186513 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 19:18:57.186812 systemd[1]: session-24.scope: Consumed 1.344s CPU time, 27M memory peak. Feb 13 19:18:57.187388 systemd-logind[1464]: Session 24 logged out. Waiting for processes to exit. Feb 13 19:18:57.197593 systemd[1]: Started sshd@24-10.0.0.108:22-10.0.0.1:40076.service - OpenSSH per-connection server daemon (10.0.0.1:40076). Feb 13 19:18:57.198370 kubelet[2573]: I0213 19:18:57.198292 2573 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:18:57Z","lastTransitionTime":"2025-02-13T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 19:18:57.199672 systemd-logind[1464]: Removed session 24. Feb 13 19:18:57.243365 sshd[4414]: Accepted publickey for core from 10.0.0.1 port 40076 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:18:57.244516 sshd-session[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:57.249197 systemd-logind[1464]: New session 25 of user core. Feb 13 19:18:57.258170 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 19:18:57.929095 sshd[4417]: Connection closed by 10.0.0.1 port 40076 Feb 13 19:18:57.930512 sshd-session[4414]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:57.941145 systemd[1]: sshd@24-10.0.0.108:22-10.0.0.1:40076.service: Deactivated successfully. 
Feb 13 19:18:57.947027 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 19:18:57.948944 kubelet[2573]: E0213 19:18:57.948893 2573 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56d30a0c-c539-41dd-80c1-3dd9cb1a2008" containerName="apply-sysctl-overwrites" Feb 13 19:18:57.948944 kubelet[2573]: E0213 19:18:57.948926 2573 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c0179af5-04ad-4b1f-9791-c02326020bf8" containerName="cilium-operator" Feb 13 19:18:57.948944 kubelet[2573]: E0213 19:18:57.948933 2573 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56d30a0c-c539-41dd-80c1-3dd9cb1a2008" containerName="mount-bpf-fs" Feb 13 19:18:57.948944 kubelet[2573]: E0213 19:18:57.948941 2573 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56d30a0c-c539-41dd-80c1-3dd9cb1a2008" containerName="clean-cilium-state" Feb 13 19:18:57.948944 kubelet[2573]: E0213 19:18:57.948946 2573 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56d30a0c-c539-41dd-80c1-3dd9cb1a2008" containerName="cilium-agent" Feb 13 19:18:57.948944 kubelet[2573]: E0213 19:18:57.948953 2573 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56d30a0c-c539-41dd-80c1-3dd9cb1a2008" containerName="mount-cgroup" Feb 13 19:18:57.949155 kubelet[2573]: I0213 19:18:57.948979 2573 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0179af5-04ad-4b1f-9791-c02326020bf8" containerName="cilium-operator" Feb 13 19:18:57.949155 kubelet[2573]: I0213 19:18:57.948986 2573 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d30a0c-c539-41dd-80c1-3dd9cb1a2008" containerName="cilium-agent" Feb 13 19:18:57.950038 systemd-logind[1464]: Session 25 logged out. Waiting for processes to exit. Feb 13 19:18:57.957969 systemd[1]: Started sshd@25-10.0.0.108:22-10.0.0.1:40092.service - OpenSSH per-connection server daemon (10.0.0.1:40092). Feb 13 19:18:57.965895 systemd-logind[1464]: Removed session 25. Feb 13 19:18:57.976845 systemd[1]: Created slice kubepods-burstable-pod4e3f4dec_eb25_446f_9e77_b19b77a222d2.slice - libcontainer container kubepods-burstable-pod4e3f4dec_eb25_446f_9e77_b19b77a222d2.slice. Feb 13 19:18:58.012047 sshd[4428]: Accepted publickey for core from 10.0.0.1 port 40092 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:18:58.012779 sshd-session[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:58.016678 systemd-logind[1464]: New session 26 of user core. 
Feb 13 19:18:58.023117 kubelet[2573]: I0213 19:18:58.023085 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4e3f4dec-eb25-446f-9e77-b19b77a222d2-cni-path\") pod \"cilium-t7zs9\" (UID: \"4e3f4dec-eb25-446f-9e77-b19b77a222d2\") " pod="kube-system/cilium-t7zs9" Feb 13 19:18:58.023527 kubelet[2573]: I0213 19:18:58.023225 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4e3f4dec-eb25-446f-9e77-b19b77a222d2-hostproc\") pod \"cilium-t7zs9\" (UID: \"4e3f4dec-eb25-446f-9e77-b19b77a222d2\") " pod="kube-system/cilium-t7zs9" Feb 13 19:18:58.023527 kubelet[2573]: I0213 19:18:58.023250 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4e3f4dec-eb25-446f-9e77-b19b77a222d2-cilium-cgroup\") pod \"cilium-t7zs9\" (UID: \"4e3f4dec-eb25-446f-9e77-b19b77a222d2\") " pod="kube-system/cilium-t7zs9" Feb 13 19:18:58.023527 kubelet[2573]: I0213 19:18:58.023266 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4e3f4dec-eb25-446f-9e77-b19b77a222d2-etc-cni-netd\") pod \"cilium-t7zs9\" (UID: \"4e3f4dec-eb25-446f-9e77-b19b77a222d2\") " pod="kube-system/cilium-t7zs9" Feb 13 19:18:58.023527 kubelet[2573]: I0213 19:18:58.023280 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4e3f4dec-eb25-446f-9e77-b19b77a222d2-host-proc-sys-net\") pod \"cilium-t7zs9\" (UID: \"4e3f4dec-eb25-446f-9e77-b19b77a222d2\") " pod="kube-system/cilium-t7zs9" Feb 13 19:18:58.023527 kubelet[2573]: I0213 19:18:58.023296 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwdpq\" (UniqueName: \"kubernetes.io/projected/4e3f4dec-eb25-446f-9e77-b19b77a222d2-kube-api-access-vwdpq\") pod \"cilium-t7zs9\" (UID: \"4e3f4dec-eb25-446f-9e77-b19b77a222d2\") " pod="kube-system/cilium-t7zs9" Feb 13 19:18:58.023527 kubelet[2573]: I0213 19:18:58.023312 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4e3f4dec-eb25-446f-9e77-b19b77a222d2-cilium-run\") pod \"cilium-t7zs9\" (UID: \"4e3f4dec-eb25-446f-9e77-b19b77a222d2\") " pod="kube-system/cilium-t7zs9" Feb 13 19:18:58.023713 kubelet[2573]: I0213 19:18:58.023329 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e3f4dec-eb25-446f-9e77-b19b77a222d2-xtables-lock\") pod \"cilium-t7zs9\" (UID: \"4e3f4dec-eb25-446f-9e77-b19b77a222d2\") " pod="kube-system/cilium-t7zs9" Feb 13 19:18:58.023713 kubelet[2573]: I0213 19:18:58.023343 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4e3f4dec-eb25-446f-9e77-b19b77a222d2-clustermesh-secrets\") pod \"cilium-t7zs9\" (UID: \"4e3f4dec-eb25-446f-9e77-b19b77a222d2\") " pod="kube-system/cilium-t7zs9" Feb 13 19:18:58.023713 kubelet[2573]: I0213 19:18:58.023357 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/4e3f4dec-eb25-446f-9e77-b19b77a222d2-cilium-config-path\") pod \"cilium-t7zs9\" (UID: \"4e3f4dec-eb25-446f-9e77-b19b77a222d2\") " pod="kube-system/cilium-t7zs9" Feb 13 19:18:58.023713 kubelet[2573]: I0213 19:18:58.023372 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4e3f4dec-eb25-446f-9e77-b19b77a222d2-cilium-ipsec-secrets\") pod \"cilium-t7zs9\" (UID: \"4e3f4dec-eb25-446f-9e77-b19b77a222d2\") " pod="kube-system/cilium-t7zs9" Feb 13 19:18:58.023713 kubelet[2573]: I0213 19:18:58.023387 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4e3f4dec-eb25-446f-9e77-b19b77a222d2-hubble-tls\") pod \"cilium-t7zs9\" (UID: \"4e3f4dec-eb25-446f-9e77-b19b77a222d2\") " pod="kube-system/cilium-t7zs9" Feb 13 19:18:58.023713 kubelet[2573]: I0213 19:18:58.023400 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4e3f4dec-eb25-446f-9e77-b19b77a222d2-bpf-maps\") pod \"cilium-t7zs9\" (UID: \"4e3f4dec-eb25-446f-9e77-b19b77a222d2\") " pod="kube-system/cilium-t7zs9" Feb 13 19:18:58.023832 kubelet[2573]: I0213 19:18:58.023413 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e3f4dec-eb25-446f-9e77-b19b77a222d2-lib-modules\") pod \"cilium-t7zs9\" (UID: \"4e3f4dec-eb25-446f-9e77-b19b77a222d2\") " pod="kube-system/cilium-t7zs9" Feb 13 19:18:58.023832 kubelet[2573]: I0213 19:18:58.023430 2573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4e3f4dec-eb25-446f-9e77-b19b77a222d2-host-proc-sys-kernel\") pod \"cilium-t7zs9\" (UID: \"4e3f4dec-eb25-446f-9e77-b19b77a222d2\") " pod="kube-system/cilium-t7zs9" Feb 13 19:18:58.028235 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 19:18:58.036848 kubelet[2573]: I0213 19:18:58.036065 2573 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56d30a0c-c539-41dd-80c1-3dd9cb1a2008" path="/var/lib/kubelet/pods/56d30a0c-c539-41dd-80c1-3dd9cb1a2008/volumes" Feb 13 19:18:58.036848 kubelet[2573]: I0213 19:18:58.036615 2573 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0179af5-04ad-4b1f-9791-c02326020bf8" path="/var/lib/kubelet/pods/c0179af5-04ad-4b1f-9791-c02326020bf8/volumes" Feb 13 19:18:58.078971 sshd[4431]: Connection closed by 10.0.0.1 port 40092 Feb 13 19:18:58.079394 sshd-session[4428]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:58.091240 systemd[1]: sshd@25-10.0.0.108:22-10.0.0.1:40092.service: Deactivated successfully. Feb 13 19:18:58.093042 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 19:18:58.094987 systemd-logind[1464]: Session 26 logged out. Waiting for processes to exit. Feb 13 19:18:58.104366 systemd[1]: Started sshd@26-10.0.0.108:22-10.0.0.1:40102.service - OpenSSH per-connection server daemon (10.0.0.1:40102). Feb 13 19:18:58.105907 systemd-logind[1464]: Removed session 26. 
Feb 13 19:18:58.150801 sshd[4437]: Accepted publickey for core from 10.0.0.1 port 40102 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:18:58.152360 sshd-session[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:58.160068 systemd-logind[1464]: New session 27 of user core. Feb 13 19:18:58.172179 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 19:18:58.283835 containerd[1487]: time="2025-02-13T19:18:58.283780892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t7zs9,Uid:4e3f4dec-eb25-446f-9e77-b19b77a222d2,Namespace:kube-system,Attempt:0,}" Feb 13 19:18:58.319166 containerd[1487]: time="2025-02-13T19:18:58.318855179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:18:58.319166 containerd[1487]: time="2025-02-13T19:18:58.318920140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:18:58.319166 containerd[1487]: time="2025-02-13T19:18:58.318935180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:58.319166 containerd[1487]: time="2025-02-13T19:18:58.319049620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:58.342203 systemd[1]: Started cri-containerd-9399ff8520eb7164c87cc357add1521eb01ede5aec09df32c389230d9bc84bf3.scope - libcontainer container 9399ff8520eb7164c87cc357add1521eb01ede5aec09df32c389230d9bc84bf3. Feb 13 19:18:58.363012 containerd[1487]: time="2025-02-13T19:18:58.362891989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t7zs9,Uid:4e3f4dec-eb25-446f-9e77-b19b77a222d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"9399ff8520eb7164c87cc357add1521eb01ede5aec09df32c389230d9bc84bf3\"" Feb 13 19:18:58.373635 containerd[1487]: time="2025-02-13T19:18:58.373578472Z" level=info msg="CreateContainer within sandbox \"9399ff8520eb7164c87cc357add1521eb01ede5aec09df32c389230d9bc84bf3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:18:58.436194 containerd[1487]: time="2025-02-13T19:18:58.436121326Z" level=info msg="CreateContainer within sandbox \"9399ff8520eb7164c87cc357add1521eb01ede5aec09df32c389230d9bc84bf3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"381c1876e1df5c98acf7abb713350302b2ef54ba69b2677743ee5cea5156e398\"" Feb 13 19:18:58.436638 containerd[1487]: time="2025-02-13T19:18:58.436613326Z" level=info msg="StartContainer for \"381c1876e1df5c98acf7abb713350302b2ef54ba69b2677743ee5cea5156e398\"" Feb 13 19:18:58.463190 systemd[1]: Started cri-containerd-381c1876e1df5c98acf7abb713350302b2ef54ba69b2677743ee5cea5156e398.scope - libcontainer container 381c1876e1df5c98acf7abb713350302b2ef54ba69b2677743ee5cea5156e398. Feb 13 19:18:58.487427 containerd[1487]: time="2025-02-13T19:18:58.487377177Z" level=info msg="StartContainer for \"381c1876e1df5c98acf7abb713350302b2ef54ba69b2677743ee5cea5156e398\" returns successfully" Feb 13 19:18:58.494630 systemd[1]: cri-containerd-381c1876e1df5c98acf7abb713350302b2ef54ba69b2677743ee5cea5156e398.scope: Deactivated successfully. 
Feb 13 19:18:58.522365 containerd[1487]: time="2025-02-13T19:18:58.522301745Z" level=info msg="shim disconnected" id=381c1876e1df5c98acf7abb713350302b2ef54ba69b2677743ee5cea5156e398 namespace=k8s.io Feb 13 19:18:58.522365 containerd[1487]: time="2025-02-13T19:18:58.522353025Z" level=warning msg="cleaning up after shim disconnected" id=381c1876e1df5c98acf7abb713350302b2ef54ba69b2677743ee5cea5156e398 namespace=k8s.io Feb 13 19:18:58.522365 containerd[1487]: time="2025-02-13T19:18:58.522361825Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:18:59.252283 containerd[1487]: time="2025-02-13T19:18:59.252228522Z" level=info msg="CreateContainer within sandbox \"9399ff8520eb7164c87cc357add1521eb01ede5aec09df32c389230d9bc84bf3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:18:59.270628 containerd[1487]: time="2025-02-13T19:18:59.270549299Z" level=info msg="CreateContainer within sandbox \"9399ff8520eb7164c87cc357add1521eb01ede5aec09df32c389230d9bc84bf3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"61bf75f39a107df78de78c1090ed21dc0b7afe2b2192f91a8930f02cbf09ad53\"" Feb 13 19:18:59.272273 containerd[1487]: time="2025-02-13T19:18:59.271829140Z" level=info msg="StartContainer for \"61bf75f39a107df78de78c1090ed21dc0b7afe2b2192f91a8930f02cbf09ad53\"" Feb 13 19:18:59.309184 systemd[1]: Started cri-containerd-61bf75f39a107df78de78c1090ed21dc0b7afe2b2192f91a8930f02cbf09ad53.scope - libcontainer container 61bf75f39a107df78de78c1090ed21dc0b7afe2b2192f91a8930f02cbf09ad53. Feb 13 19:18:59.335774 systemd[1]: cri-containerd-61bf75f39a107df78de78c1090ed21dc0b7afe2b2192f91a8930f02cbf09ad53.scope: Deactivated successfully. Feb 13 19:18:59.349620 containerd[1487]: time="2025-02-13T19:18:59.349552691Z" level=info msg="StartContainer for \"61bf75f39a107df78de78c1090ed21dc0b7afe2b2192f91a8930f02cbf09ad53\" returns successfully" Feb 13 19:18:59.408526 containerd[1487]: time="2025-02-13T19:18:59.408455385Z" level=info msg="shim disconnected" id=61bf75f39a107df78de78c1090ed21dc0b7afe2b2192f91a8930f02cbf09ad53 namespace=k8s.io Feb 13 19:18:59.408526 containerd[1487]: time="2025-02-13T19:18:59.408517225Z" level=warning msg="cleaning up after shim disconnected" id=61bf75f39a107df78de78c1090ed21dc0b7afe2b2192f91a8930f02cbf09ad53 namespace=k8s.io Feb 13 19:18:59.408526 containerd[1487]: time="2025-02-13T19:18:59.408525945Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:19:00.129096 systemd[1]: run-containerd-runc-k8s.io-61bf75f39a107df78de78c1090ed21dc0b7afe2b2192f91a8930f02cbf09ad53-runc.VgwMmC.mount: Deactivated successfully. Feb 13 19:19:00.129196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61bf75f39a107df78de78c1090ed21dc0b7afe2b2192f91a8930f02cbf09ad53-rootfs.mount: Deactivated successfully. 
Feb 13 19:19:00.255915 containerd[1487]: time="2025-02-13T19:19:00.255868053Z" level=info msg="CreateContainer within sandbox \"9399ff8520eb7164c87cc357add1521eb01ede5aec09df32c389230d9bc84bf3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:19:00.275980 containerd[1487]: time="2025-02-13T19:19:00.275849884Z" level=info msg="CreateContainer within sandbox \"9399ff8520eb7164c87cc357add1521eb01ede5aec09df32c389230d9bc84bf3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"97a57865d0cdbb02daf79b2895deb2b31aa10d7a785e20b6ccfafcc75990d2f6\"" Feb 13 19:19:00.276562 containerd[1487]: time="2025-02-13T19:19:00.276532525Z" level=info msg="StartContainer for \"97a57865d0cdbb02daf79b2895deb2b31aa10d7a785e20b6ccfafcc75990d2f6\"" Feb 13 19:19:00.300254 systemd[1]: Started cri-containerd-97a57865d0cdbb02daf79b2895deb2b31aa10d7a785e20b6ccfafcc75990d2f6.scope - libcontainer container 97a57865d0cdbb02daf79b2895deb2b31aa10d7a785e20b6ccfafcc75990d2f6. Feb 13 19:19:00.333848 systemd[1]: cri-containerd-97a57865d0cdbb02daf79b2895deb2b31aa10d7a785e20b6ccfafcc75990d2f6.scope: Deactivated successfully. Feb 13 19:19:00.346474 containerd[1487]: time="2025-02-13T19:19:00.343114351Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e3f4dec_eb25_446f_9e77_b19b77a222d2.slice/cri-containerd-97a57865d0cdbb02daf79b2895deb2b31aa10d7a785e20b6ccfafcc75990d2f6.scope/memory.events\": no such file or directory" Feb 13 19:19:00.347937 containerd[1487]: time="2025-02-13T19:19:00.347800998Z" level=info msg="StartContainer for \"97a57865d0cdbb02daf79b2895deb2b31aa10d7a785e20b6ccfafcc75990d2f6\" returns successfully" Feb 13 19:19:00.369257 containerd[1487]: time="2025-02-13T19:19:00.369193472Z" level=info msg="shim disconnected" id=97a57865d0cdbb02daf79b2895deb2b31aa10d7a785e20b6ccfafcc75990d2f6 namespace=k8s.io Feb 13 19:19:00.369257 containerd[1487]: time="2025-02-13T19:19:00.369250433Z" level=warning msg="cleaning up after shim disconnected" id=97a57865d0cdbb02daf79b2895deb2b31aa10d7a785e20b6ccfafcc75990d2f6 namespace=k8s.io Feb 13 19:19:00.369257 containerd[1487]: time="2025-02-13T19:19:00.369259473Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:19:01.094556 kubelet[2573]: E0213 19:19:01.094501 2573 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:19:01.129229 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97a57865d0cdbb02daf79b2895deb2b31aa10d7a785e20b6ccfafcc75990d2f6-rootfs.mount: Deactivated successfully. 
Feb 13 19:19:01.259331 containerd[1487]: time="2025-02-13T19:19:01.258916812Z" level=info msg="CreateContainer within sandbox \"9399ff8520eb7164c87cc357add1521eb01ede5aec09df32c389230d9bc84bf3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:19:01.278519 containerd[1487]: time="2025-02-13T19:19:01.278477655Z" level=info msg="CreateContainer within sandbox \"9399ff8520eb7164c87cc357add1521eb01ede5aec09df32c389230d9bc84bf3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1c99a99e12037700b0c143637b3288d559e7fde84377ebed9b1c601e0534a0a7\"" Feb 13 19:19:01.279346 containerd[1487]: time="2025-02-13T19:19:01.279227497Z" level=info msg="StartContainer for \"1c99a99e12037700b0c143637b3288d559e7fde84377ebed9b1c601e0534a0a7\"" Feb 13 19:19:01.306177 systemd[1]: Started cri-containerd-1c99a99e12037700b0c143637b3288d559e7fde84377ebed9b1c601e0534a0a7.scope - libcontainer container 1c99a99e12037700b0c143637b3288d559e7fde84377ebed9b1c601e0534a0a7. Feb 13 19:19:01.325156 systemd[1]: cri-containerd-1c99a99e12037700b0c143637b3288d559e7fde84377ebed9b1c601e0534a0a7.scope: Deactivated successfully. Feb 13 19:19:01.328583 containerd[1487]: time="2025-02-13T19:19:01.328449607Z" level=info msg="StartContainer for \"1c99a99e12037700b0c143637b3288d559e7fde84377ebed9b1c601e0534a0a7\" returns successfully" Feb 13 19:19:01.347272 containerd[1487]: time="2025-02-13T19:19:01.329616090Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e3f4dec_eb25_446f_9e77_b19b77a222d2.slice/cri-containerd-1c99a99e12037700b0c143637b3288d559e7fde84377ebed9b1c601e0534a0a7.scope/memory.events\": no such file or directory" Feb 13 19:19:01.351195 containerd[1487]: time="2025-02-13T19:19:01.351125378Z" level=info msg="shim disconnected" id=1c99a99e12037700b0c143637b3288d559e7fde84377ebed9b1c601e0534a0a7 namespace=k8s.io Feb 13 19:19:01.351195 containerd[1487]: time="2025-02-13T19:19:01.351183178Z" level=warning msg="cleaning up after shim disconnected" id=1c99a99e12037700b0c143637b3288d559e7fde84377ebed9b1c601e0534a0a7 namespace=k8s.io Feb 13 19:19:01.351195 containerd[1487]: time="2025-02-13T19:19:01.351192178Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:19:02.129323 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c99a99e12037700b0c143637b3288d559e7fde84377ebed9b1c601e0534a0a7-rootfs.mount: Deactivated successfully. Feb 13 19:19:02.264157 containerd[1487]: time="2025-02-13T19:19:02.264099786Z" level=info msg="CreateContainer within sandbox \"9399ff8520eb7164c87cc357add1521eb01ede5aec09df32c389230d9bc84bf3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:19:02.286899 containerd[1487]: time="2025-02-13T19:19:02.286854731Z" level=info msg="CreateContainer within sandbox \"9399ff8520eb7164c87cc357add1521eb01ede5aec09df32c389230d9bc84bf3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"231cad043547bacacfb3d6ee82fad7cf52f05cb4bb86972ddc9adaebb0df8452\"" Feb 13 19:19:02.287683 containerd[1487]: time="2025-02-13T19:19:02.287653973Z" level=info msg="StartContainer for \"231cad043547bacacfb3d6ee82fad7cf52f05cb4bb86972ddc9adaebb0df8452\"" Feb 13 19:19:02.313187 systemd[1]: Started cri-containerd-231cad043547bacacfb3d6ee82fad7cf52f05cb4bb86972ddc9adaebb0df8452.scope - libcontainer container 231cad043547bacacfb3d6ee82fad7cf52f05cb4bb86972ddc9adaebb0df8452. 
Feb 13 19:19:02.346419 containerd[1487]: time="2025-02-13T19:19:02.346349221Z" level=info msg="StartContainer for \"231cad043547bacacfb3d6ee82fad7cf52f05cb4bb86972ddc9adaebb0df8452\" returns successfully" Feb 13 19:19:02.653019 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Feb 13 19:19:03.279015 kubelet[2573]: I0213 19:19:03.278640 2573 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-t7zs9" podStartSLOduration=6.278624623 podStartE2EDuration="6.278624623s" podCreationTimestamp="2025-02-13 19:18:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:19:03.278515943 +0000 UTC m=+87.327610870" watchObservedRunningTime="2025-02-13 19:19:03.278624623 +0000 UTC m=+87.327719550" Feb 13 19:19:05.576766 systemd-networkd[1402]: lxc_health: Link UP Feb 13 19:19:05.581132 systemd-networkd[1402]: lxc_health: Gained carrier Feb 13 19:19:07.530134 systemd-networkd[1402]: lxc_health: Gained IPv6LL Feb 13 19:19:10.918046 sshd[4444]: Connection closed by 10.0.0.1 port 40102 Feb 13 19:19:10.918132 sshd-session[4437]: pam_unix(sshd:session): session closed for user core Feb 13 19:19:10.921242 systemd[1]: sshd@26-10.0.0.108:22-10.0.0.1:40102.service: Deactivated successfully. Feb 13 19:19:10.922962 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 19:19:10.924392 systemd-logind[1464]: Session 27 logged out. Waiting for processes to exit. Feb 13 19:19:10.925292 systemd-logind[1464]: Removed session 27.