Feb 13 15:02:53.916164 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 15:02:53.916185 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Feb 13 13:51:50 -00 2025
Feb 13 15:02:53.916195 kernel: KASLR enabled
Feb 13 15:02:53.916201 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:02:53.916206 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Feb 13 15:02:53.916212 kernel: random: crng init done
Feb 13 15:02:53.916219 kernel: secureboot: Secure boot disabled
Feb 13 15:02:53.916225 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:02:53.916231 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 15:02:53.916238 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 15:02:53.916245 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:02:53.916251 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:02:53.916256 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:02:53.916262 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:02:53.916282 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:02:53.916290 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:02:53.916297 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:02:53.916304 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:02:53.916310 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:02:53.916316 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 15:02:53.916322 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:02:53.916329 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:02:53.916335 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Feb 13 15:02:53.916341 kernel: Zone ranges:
Feb 13 15:02:53.916347 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:02:53.916355 kernel: DMA32 empty
Feb 13 15:02:53.916361 kernel: Normal empty
Feb 13 15:02:53.916367 kernel: Movable zone start for each node
Feb 13 15:02:53.916373 kernel: Early memory node ranges
Feb 13 15:02:53.916379 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Feb 13 15:02:53.916385 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Feb 13 15:02:53.916391 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Feb 13 15:02:53.916398 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 15:02:53.916404 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 15:02:53.916410 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 15:02:53.916416 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 15:02:53.916422 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 15:02:53.916429 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 15:02:53.916435 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:02:53.916441 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 15:02:53.916450 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:02:53.916456 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 15:02:53.916463 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:02:53.916471 kernel: psci: Trusted OS migration not required
Feb 13 15:02:53.916477 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:02:53.916483 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 15:02:53.916490 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:02:53.916497 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:02:53.916503 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 15:02:53.916509 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:02:53.916516 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:02:53.916522 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 15:02:53.916529 kernel: CPU features: detected: Spectre-v4
Feb 13 15:02:53.916536 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:02:53.916543 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 15:02:53.916549 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 15:02:53.916556 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 15:02:53.916562 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 15:02:53.916569 kernel: alternatives: applying boot alternatives
Feb 13 15:02:53.916576 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=26b1bb981574844309559baa9983d7ef1e1e8283aa92ecd6061030daf7cdbbef
Feb 13 15:02:53.916583 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:02:53.916590 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:02:53.916596 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:02:53.916603 kernel: Fallback order for Node 0: 0
Feb 13 15:02:53.916610 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 15:02:53.916617 kernel: Policy zone: DMA
Feb 13 15:02:53.916623 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:02:53.916629 kernel: software IO TLB: area num 4.
Feb 13 15:02:53.916636 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 15:02:53.916642 kernel: Memory: 2387540K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 184748K reserved, 0K cma-reserved)
Feb 13 15:02:53.916649 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 15:02:53.916656 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:02:53.916663 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:02:53.916669 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 15:02:53.916676 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:02:53.916682 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:02:53.916690 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:02:53.916697 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 15:02:53.916703 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:02:53.916709 kernel: GICv3: 256 SPIs implemented
Feb 13 15:02:53.916716 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:02:53.916722 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:02:53.916729 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 15:02:53.916735 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 15:02:53.916742 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 15:02:53.916748 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:02:53.916755 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:02:53.916763 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 15:02:53.916770 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 15:02:53.916777 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:02:53.916783 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:02:53.916790 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 15:02:53.916796 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 15:02:53.916803 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 15:02:53.916809 kernel: arm-pv: using stolen time PV
Feb 13 15:02:53.916816 kernel: Console: colour dummy device 80x25
Feb 13 15:02:53.916823 kernel: ACPI: Core revision 20230628
Feb 13 15:02:53.916830 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 15:02:53.916838 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:02:53.916844 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:02:53.916851 kernel: landlock: Up and running.
Feb 13 15:02:53.916857 kernel: SELinux: Initializing.
Feb 13 15:02:53.916864 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:02:53.916871 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:02:53.916878 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:02:53.916884 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:02:53.916891 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:02:53.916899 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:02:53.916915 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 15:02:53.916922 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 15:02:53.916929 kernel: Remapping and enabling EFI services.
Feb 13 15:02:53.916936 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:02:53.916943 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:02:53.916949 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 15:02:53.916956 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 15:02:53.916963 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:02:53.916977 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 15:02:53.916984 kernel: Detected PIPT I-cache on CPU2
Feb 13 15:02:53.916995 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 15:02:53.917004 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 15:02:53.917011 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:02:53.917018 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 15:02:53.917025 kernel: Detected PIPT I-cache on CPU3
Feb 13 15:02:53.917032 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 15:02:53.917039 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 15:02:53.917047 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:02:53.917054 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 15:02:53.917061 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 15:02:53.917068 kernel: SMP: Total of 4 processors activated.
Feb 13 15:02:53.917075 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:02:53.917082 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 15:02:53.917089 kernel: CPU features: detected: Common not Private translations
Feb 13 15:02:53.917096 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:02:53.917105 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 15:02:53.917112 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 15:02:53.917119 kernel: CPU features: detected: LSE atomic instructions
Feb 13 15:02:53.917132 kernel: CPU features: detected: Privileged Access Never
Feb 13 15:02:53.917139 kernel: CPU features: detected: RAS Extension Support
Feb 13 15:02:53.917146 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 15:02:53.917153 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:02:53.917160 kernel: alternatives: applying system-wide alternatives
Feb 13 15:02:53.917167 kernel: devtmpfs: initialized
Feb 13 15:02:53.917174 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:02:53.917183 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 15:02:53.917190 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:02:53.917197 kernel: SMBIOS 3.0.0 present.
Feb 13 15:02:53.917204 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 15:02:53.917211 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:02:53.917218 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:02:53.917225 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:02:53.917232 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:02:53.917241 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:02:53.917248 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Feb 13 15:02:53.917255 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:02:53.917262 kernel: cpuidle: using governor menu
Feb 13 15:02:53.917269 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:02:53.917276 kernel: ASID allocator initialised with 32768 entries
Feb 13 15:02:53.917282 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:02:53.917289 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:02:53.917296 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 15:02:53.917304 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 15:02:53.917312 kernel: Modules: 509280 pages in range for PLT usage
Feb 13 15:02:53.917319 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:02:53.917325 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:02:53.917333 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:02:53.917340 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:02:53.917346 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:02:53.917354 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:02:53.917360 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:02:53.917367 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:02:53.917375 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:02:53.917382 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:02:53.917389 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:02:53.917396 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:02:53.917403 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:02:53.917410 kernel: ACPI: Interpreter enabled
Feb 13 15:02:53.917417 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:02:53.917424 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:02:53.917431 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 15:02:53.917439 kernel: printk: console [ttyAMA0] enabled
Feb 13 15:02:53.917446 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:02:53.917589 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:02:53.917660 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:02:53.917724 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:02:53.917786 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 15:02:53.917849 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 15:02:53.917860 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 15:02:53.917867 kernel: PCI host bridge to bus 0000:00
Feb 13 15:02:53.917961 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 15:02:53.918022 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 15:02:53.918080 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 15:02:53.918142 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:02:53.918221 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 15:02:53.918300 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:02:53.918366 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 15:02:53.918430 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 15:02:53.918493 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:02:53.918556 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:02:53.918619 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 15:02:53.918685 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 15:02:53.918744 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 15:02:53.918800 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 15:02:53.918856 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 15:02:53.918865 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:02:53.918873 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:02:53.918880 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:02:53.918887 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:02:53.918894 kernel: iommu: Default domain type: Translated
Feb 13 15:02:53.918918 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:02:53.918928 kernel: efivars: Registered efivars operations
Feb 13 15:02:53.918935 kernel: vgaarb: loaded
Feb 13 15:02:53.918942 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:02:53.918964 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:02:53.918972 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:02:53.918979 kernel: pnp: PnP ACPI init
Feb 13 15:02:53.919061 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 15:02:53.919075 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:02:53.919082 kernel: NET: Registered PF_INET protocol family
Feb 13 15:02:53.919089 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:02:53.919096 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:02:53.919103 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:02:53.919110 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:02:53.919118 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:02:53.919132 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:02:53.919139 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:02:53.919148 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:02:53.919156 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:02:53.919163 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:02:53.919169 kernel: kvm [1]: HYP mode not available
Feb 13 15:02:53.919176 kernel: Initialise system trusted keyrings
Feb 13 15:02:53.919183 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:02:53.919190 kernel: Key type asymmetric registered
Feb 13 15:02:53.919197 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:02:53.919204 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:02:53.919212 kernel: io scheduler mq-deadline registered
Feb 13 15:02:53.919219 kernel: io scheduler kyber registered
Feb 13 15:02:53.919226 kernel: io scheduler bfq registered
Feb 13 15:02:53.919233 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 15:02:53.919240 kernel: ACPI: button: Power Button [PWRB]
Feb 13 15:02:53.919248 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 15:02:53.919321 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 15:02:53.919331 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:02:53.919338 kernel: thunder_xcv, ver 1.0
Feb 13 15:02:53.919347 kernel: thunder_bgx, ver 1.0
Feb 13 15:02:53.919354 kernel: nicpf, ver 1.0
Feb 13 15:02:53.919360 kernel: nicvf, ver 1.0
Feb 13 15:02:53.919434 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 15:02:53.919496 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:02:53 UTC (1739458973)
Feb 13 15:02:53.919505 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:02:53.919513 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 15:02:53.919520 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 15:02:53.919529 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 15:02:53.919536 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:02:53.919543 kernel: Segment Routing with IPv6
Feb 13 15:02:53.919550 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:02:53.919557 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:02:53.919564 kernel: Key type dns_resolver registered
Feb 13 15:02:53.919571 kernel: registered taskstats version 1
Feb 13 15:02:53.919578 kernel: Loading compiled-in X.509 certificates
Feb 13 15:02:53.919586 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 03c2ececc548f4ae45f50171451f5c036e2757d4'
Feb 13 15:02:53.919594 kernel: Key type .fscrypt registered
Feb 13 15:02:53.919601 kernel: Key type fscrypt-provisioning registered
Feb 13 15:02:53.919609 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:02:53.919616 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:02:53.919624 kernel: ima: No architecture policies found
Feb 13 15:02:53.919631 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 15:02:53.919638 kernel: clk: Disabling unused clocks
Feb 13 15:02:53.919645 kernel: Freeing unused kernel memory: 38336K
Feb 13 15:02:53.919652 kernel: Run /init as init process
Feb 13 15:02:53.919661 kernel: with arguments:
Feb 13 15:02:53.919668 kernel: /init
Feb 13 15:02:53.919675 kernel: with environment:
Feb 13 15:02:53.919681 kernel: HOME=/
Feb 13 15:02:53.919688 kernel: TERM=linux
Feb 13 15:02:53.919695 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:02:53.919703 systemd[1]: Successfully made /usr/ read-only.
Feb 13 15:02:53.919713 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 15:02:53.919723 systemd[1]: Detected virtualization kvm.
Feb 13 15:02:53.919730 systemd[1]: Detected architecture arm64.
Feb 13 15:02:53.919738 systemd[1]: Running in initrd.
Feb 13 15:02:53.919745 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:02:53.919753 systemd[1]: Hostname set to .
Feb 13 15:02:53.919761 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:02:53.919768 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:02:53.919776 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:02:53.919785 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:02:53.919793 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:02:53.919801 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:02:53.919808 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:02:53.919817 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:02:53.919825 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:02:53.919834 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:02:53.919842 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:02:53.919849 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:02:53.919857 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:02:53.919864 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:02:53.919872 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:02:53.919879 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:02:53.919887 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:02:53.919894 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:02:53.919919 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:02:53.919931 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 15:02:53.919938 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:02:53.919946 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:02:53.919954 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:02:53.919961 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:02:53.919969 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:02:53.919976 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:02:53.919987 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:02:53.919994 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:02:53.920001 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:02:53.920009 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:02:53.920016 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:02:53.920024 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:02:53.920031 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:02:53.920041 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:02:53.920049 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:02:53.920056 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:02:53.920082 systemd-journald[238]: Collecting audit messages is disabled.
Feb 13 15:02:53.920103 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:02:53.920111 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:02:53.920124 systemd-journald[238]: Journal started
Feb 13 15:02:53.920144 systemd-journald[238]: Runtime Journal (/run/log/journal/a2022ec0416b4103a3d4bc108347631a) is 5.9M, max 47.3M, 41.4M free.
Feb 13 15:02:53.911645 systemd-modules-load[239]: Inserted module 'overlay'
Feb 13 15:02:53.924293 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:02:53.926401 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:02:53.928793 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:02:53.928819 kernel: Bridge firewalling registered
Feb 13 15:02:53.929260 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:02:53.929347 systemd-modules-load[239]: Inserted module 'br_netfilter'
Feb 13 15:02:53.930608 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:02:53.935475 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:02:53.937519 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:02:53.939637 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:02:53.941880 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:02:53.944284 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:02:53.958077 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:02:53.960425 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:02:53.971339 dracut-cmdline[276]: dracut-dracut-053
Feb 13 15:02:53.973763 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=26b1bb981574844309559baa9983d7ef1e1e8283aa92ecd6061030daf7cdbbef
Feb 13 15:02:53.994421 systemd-resolved[278]: Positive Trust Anchors:
Feb 13 15:02:53.994439 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:02:53.994469 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:02:53.998998 systemd-resolved[278]: Defaulting to hostname 'linux'.
Feb 13 15:02:53.999981 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:02:54.004362 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:02:54.042941 kernel: SCSI subsystem initialized
Feb 13 15:02:54.047925 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:02:54.054933 kernel: iscsi: registered transport (tcp)
Feb 13 15:02:54.067933 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:02:54.067948 kernel: QLogic iSCSI HBA Driver
Feb 13 15:02:54.108262 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:02:54.121046 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:02:54.138925 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:02:54.138984 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:02:54.138998 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:02:54.189942 kernel: raid6: neonx8 gen() 15675 MB/s
Feb 13 15:02:54.204934 kernel: raid6: neonx4 gen() 15823 MB/s
Feb 13 15:02:54.221928 kernel: raid6: neonx2 gen() 13086 MB/s
Feb 13 15:02:54.238927 kernel: raid6: neonx1 gen() 10388 MB/s
Feb 13 15:02:54.255931 kernel: raid6: int64x8 gen() 6785 MB/s
Feb 13 15:02:54.272926 kernel: raid6: int64x4 gen() 7291 MB/s
Feb 13 15:02:54.289925 kernel: raid6: int64x2 gen() 6104 MB/s
Feb 13 15:02:54.307017 kernel: raid6: int64x1 gen() 5009 MB/s
Feb 13 15:02:54.307041 kernel: raid6: using algorithm neonx4 gen() 15823 MB/s
Feb 13 15:02:54.324995 kernel: raid6: .... xor() 12336 MB/s, rmw enabled
Feb 13 15:02:54.325011 kernel: raid6: using neon recovery algorithm
Feb 13 15:02:54.330245 kernel: xor: measuring software checksum speed
Feb 13 15:02:54.330262 kernel: 8regs : 21653 MB/sec
Feb 13 15:02:54.330927 kernel: 32regs : 21664 MB/sec
Feb 13 15:02:54.332109 kernel: arm64_neon : 24110 MB/sec
Feb 13 15:02:54.332130 kernel: xor: using function: arm64_neon (24110 MB/sec)
Feb 13 15:02:54.380933 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:02:54.391964 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:02:54.402074 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:02:54.416401 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Feb 13 15:02:54.420095 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:02:54.431077 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:02:54.442256 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Feb 13 15:02:54.469785 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:02:54.482060 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:02:54.523985 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:02:54.533068 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:02:54.542817 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:02:54.547754 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:02:54.549013 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:02:54.551757 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:02:54.564058 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:02:54.575255 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:02:54.578718 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 15:02:54.603030 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 15:02:54.603157 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:02:54.603179 kernel: GPT:9289727 != 19775487
Feb 13 15:02:54.603188 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:02:54.603197 kernel: GPT:9289727 != 19775487
Feb 13 15:02:54.603205 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:02:54.603214 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:02:54.589759 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:02:54.589876 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:02:54.592670 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:02:54.593799 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:02:54.593960 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:02:54.595560 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:02:54.604365 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:02:54.618960 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:02:54.627114 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:02:54.633590 kernel: BTRFS: device fsid b3d3c5e7-c505-4391-bb7a-de2a572c0855 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (513)
Feb 13 15:02:54.633666 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (516)
Feb 13 15:02:54.634824 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 15:02:54.649996 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:02:54.657758 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 15:02:54.664167 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 15:02:54.665404 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 15:02:54.674002 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:02:54.689065 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:02:54.699229 disk-uuid[560]: Primary Header is updated.
Feb 13 15:02:54.699229 disk-uuid[560]: Secondary Entries is updated.
Feb 13 15:02:54.699229 disk-uuid[560]: Secondary Header is updated.
Feb 13 15:02:54.709262 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:02:55.718923 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:02:55.719748 disk-uuid[561]: The operation has completed successfully.
Feb 13 15:02:55.750629 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:02:55.750751 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:02:55.785068 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:02:55.787728 sh[574]: Success
Feb 13 15:02:55.802936 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 15:02:55.830075 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:02:55.845357 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:02:55.846816 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:02:55.856620 kernel: BTRFS info (device dm-0): first mount of filesystem b3d3c5e7-c505-4391-bb7a-de2a572c0855
Feb 13 15:02:55.856656 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:02:55.857736 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:02:55.857753 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:02:55.859183 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:02:55.863031 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:02:55.864406 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:02:55.881278 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:02:55.884107 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:02:55.892523 kernel: BTRFS info (device vda6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:02:55.892568 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:02:55.892578 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:02:55.895109 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:02:55.901616 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:02:55.903183 kernel: BTRFS info (device vda6): last unmount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:02:55.909375 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:02:55.919065 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:02:55.975311 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:02:55.986230 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:02:56.013997 ignition[670]: Ignition 2.20.0
Feb 13 15:02:56.014007 ignition[670]: Stage: fetch-offline
Feb 13 15:02:56.014047 ignition[670]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:02:56.014056 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:02:56.014208 ignition[670]: parsed url from cmdline: ""
Feb 13 15:02:56.014212 ignition[670]: no config URL provided
Feb 13 15:02:56.014217 ignition[670]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:02:56.014224 ignition[670]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:02:56.018959 systemd-networkd[759]: lo: Link UP
Feb 13 15:02:56.014263 ignition[670]: op(1): [started] loading QEMU firmware config module
Feb 13 15:02:56.018963 systemd-networkd[759]: lo: Gained carrier
Feb 13 15:02:56.014267 ignition[670]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 15:02:56.019729 systemd-networkd[759]: Enumeration completed
Feb 13 15:02:56.019829 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:02:56.020169 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:02:56.026942 ignition[670]: op(1): [finished] loading QEMU firmware config module
Feb 13 15:02:56.020172 systemd-networkd[759]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:02:56.020874 systemd-networkd[759]: eth0: Link UP
Feb 13 15:02:56.020878 systemd-networkd[759]: eth0: Gained carrier
Feb 13 15:02:56.020884 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:02:56.022165 systemd[1]: Reached target network.target - Network.
Feb 13 15:02:56.039947 systemd-networkd[759]: eth0: DHCPv4 address 10.0.0.8/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:02:56.071226 ignition[670]: parsing config with SHA512: 0a75d31a6646263137e13f18c793cb2b2d2d875304f7f1e6f556c18d520f7cf72fd57d23d4d9d2587154689ad1c2d3ac583fdfd0262b0986fb5abbca7494b5dd
Feb 13 15:02:56.076191 unknown[670]: fetched base config from "system"
Feb 13 15:02:56.076208 unknown[670]: fetched user config from "qemu"
Feb 13 15:02:56.077078 ignition[670]: fetch-offline: fetch-offline passed
Feb 13 15:02:56.078682 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:02:56.077570 ignition[670]: Ignition finished successfully
Feb 13 15:02:56.080742 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 15:02:56.091097 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:02:56.103715 ignition[771]: Ignition 2.20.0
Feb 13 15:02:56.103726 ignition[771]: Stage: kargs
Feb 13 15:02:56.103890 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:02:56.103900 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:02:56.106697 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:02:56.104796 ignition[771]: kargs: kargs passed
Feb 13 15:02:56.104839 ignition[771]: Ignition finished successfully
Feb 13 15:02:56.117128 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:02:56.126243 ignition[779]: Ignition 2.20.0
Feb 13 15:02:56.126253 ignition[779]: Stage: disks
Feb 13 15:02:56.126406 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:02:56.129016 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:02:56.126415 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:02:56.130144 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:02:56.127270 ignition[779]: disks: disks passed
Feb 13 15:02:56.131891 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:02:56.127312 ignition[779]: Ignition finished successfully
Feb 13 15:02:56.133901 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:02:56.135725 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:02:56.137140 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:02:56.146042 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:02:56.156412 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:02:56.159800 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:02:56.161818 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:02:56.206764 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:02:56.208536 kernel: EXT4-fs (vda9): mounted filesystem f78dcc36-7881-4d16-ad8b-28e23dfbdad0 r/w with ordered data mode. Quota mode: none.
Feb 13 15:02:56.208008 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:02:56.220031 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:02:56.221763 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:02:56.222728 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:02:56.222769 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:02:56.222794 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:02:56.228885 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:02:56.231336 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:02:56.235175 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (798)
Feb 13 15:02:56.235205 kernel: BTRFS info (device vda6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:02:56.235215 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:02:56.236920 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:02:56.238933 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:02:56.239621 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:02:56.271868 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:02:56.277942 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:02:56.281184 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:02:56.284976 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:02:56.353128 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:02:56.359997 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:02:56.361627 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:02:56.367928 kernel: BTRFS info (device vda6): last unmount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:02:56.383052 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:02:56.384773 ignition[914]: INFO : Ignition 2.20.0
Feb 13 15:02:56.384773 ignition[914]: INFO : Stage: mount
Feb 13 15:02:56.384773 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:02:56.384773 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:02:56.384773 ignition[914]: INFO : mount: mount passed
Feb 13 15:02:56.384773 ignition[914]: INFO : Ignition finished successfully
Feb 13 15:02:56.386097 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:02:56.402038 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:02:56.902769 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:02:56.913091 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:02:56.921940 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (927)
Feb 13 15:02:56.924704 kernel: BTRFS info (device vda6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:02:56.924751 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:02:56.924763 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:02:56.926923 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:02:56.927934 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:02:56.950146 ignition[944]: INFO : Ignition 2.20.0
Feb 13 15:02:56.950146 ignition[944]: INFO : Stage: files
Feb 13 15:02:56.951772 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:02:56.951772 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:02:56.951772 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:02:56.955429 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:02:56.955429 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:02:56.958290 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:02:56.958290 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:02:56.958290 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:02:56.958290 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:02:56.958290 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 15:02:56.956043 unknown[944]: wrote ssh authorized keys file for user: core
Feb 13 15:02:57.005526 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:02:57.219956 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:02:57.221889 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:02:57.221889 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 15:02:57.623632 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 15:02:57.753125 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:02:57.755024 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:02:57.755024 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:02:57.755024 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:02:57.755024 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:02:57.755024 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:02:57.755024 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:02:57.755024 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:02:57.755024 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:02:57.755024 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:02:57.755024 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:02:57.755024 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:02:57.755024 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:02:57.755024 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:02:57.755024 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 15:02:57.762022 systemd-networkd[759]: eth0: Gained IPv6LL
Feb 13 15:02:58.062407 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 15:02:58.627999 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:02:58.627999 ignition[944]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 15:02:58.631630 ignition[944]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:02:58.631630 ignition[944]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:02:58.631630 ignition[944]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 15:02:58.631630 ignition[944]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Feb 13 15:02:58.631630 ignition[944]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:02:58.631630 ignition[944]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:02:58.631630 ignition[944]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 13 15:02:58.631630 ignition[944]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:02:58.659440 ignition[944]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:02:58.662825 ignition[944]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:02:58.664512 ignition[944]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:02:58.664512 ignition[944]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:02:58.664512 ignition[944]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:02:58.664512 ignition[944]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:02:58.664512 ignition[944]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:02:58.664512 ignition[944]: INFO : files: files passed
Feb 13 15:02:58.664512 ignition[944]: INFO : Ignition finished successfully
Feb 13 15:02:58.664898 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:02:58.679159 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:02:58.681038 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:02:58.683026 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:02:58.683119 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:02:58.689242 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 15:02:58.691277 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:02:58.691277 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:02:58.695969 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:02:58.694677 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:02:58.697508 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:02:58.704048 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:02:58.723570 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:02:58.704048 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:02:58.723570 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:02:58.723686 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:02:58.726027 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:02:58.727901 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:02:58.729890 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:02:58.730811 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:02:58.746392 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:02:58.759088 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:02:58.766953 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:02:58.768174 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:02:58.770163 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:02:58.771843 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:02:58.771992 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:02:58.774487 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:02:58.776506 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:02:58.778120 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:02:58.779800 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:02:58.781717 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:02:58.783649 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:02:58.785454 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:02:58.787396 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:02:58.789380 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:02:58.791123 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:02:58.792676 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:02:58.792811 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:02:58.795131 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:02:58.797120 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:02:58.799014 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:02:58.799955 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:02:58.801247 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:02:58.801368 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:02:58.804259 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:02:58.804377 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:02:58.806350 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:02:58.807939 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:02:58.808044 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:02:58.810000 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:02:58.811789 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:02:58.813403 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:02:58.813481 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:02:58.815177 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:02:58.815255 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:02:58.817432 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:02:58.817553 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:02:58.819274 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:02:58.819376 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:02:58.835090 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:02:58.836765 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:02:58.837686 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:02:58.837935 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:02:58.839785 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:02:58.839886 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:02:58.847081 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:02:58.847184 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:02:58.850385 ignition[1001]: INFO : Ignition 2.20.0
Feb 13 15:02:58.850385 ignition[1001]: INFO : Stage: umount
Feb 13 15:02:58.850385 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:02:58.850385 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:02:58.850385 ignition[1001]: INFO : umount: umount passed
Feb 13 15:02:58.850385 ignition[1001]: INFO : Ignition finished successfully
Feb 13 15:02:58.850382 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:02:58.851498 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:02:58.854217 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:02:58.854923 systemd[1]: Stopped target network.target - Network.
Feb 13 15:02:58.858068 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:02:58.858155 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:02:58.861067 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:02:58.861149 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:02:58.862602 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:02:58.862653 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:02:58.864960 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:02:58.865031 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:02:58.867218 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:02:58.868881 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:02:58.879643 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:02:58.879752 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:02:58.882841 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Feb 13 15:02:58.883141 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:02:58.883172 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:02:58.901004 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:02:58.901880 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:02:58.901975 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:02:58.904008 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:02:58.906154 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:02:58.907549 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:02:58.911879 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Feb 13 15:02:58.913444 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:02:58.913534 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:02:58.915619 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:02:58.915665 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:02:58.917604 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:02:58.917651 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:02:58.921525 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 13 15:02:58.921587 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:02:58.921861 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:02:58.922028 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:02:58.924708 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:02:58.924805 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:02:58.927290 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:02:58.927357 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:02:58.928592 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:02:58.928625 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:02:58.930429 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:02:58.930485 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:02:58.933257 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:02:58.933306 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:02:58.935925 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:02:58.935975 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:02:58.951089 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:02:58.952186 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:02:58.952256 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:02:58.955257 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 15:02:58.955301 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:02:58.957556 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:02:58.957601 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:02:58.959586 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:02:58.959634 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:02:58.963340 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 13 15:02:58.963387 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:02:58.963764 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:02:58.963845 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:02:58.978603 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:02:58.978733 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:02:58.980799 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:02:58.981924 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:02:58.981993 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:02:58.997101 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:02:59.002850 systemd[1]: Switching root.
Feb 13 15:02:59.026718 systemd-journald[238]: Journal stopped
Feb 13 15:02:59.907716 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:02:59.907775 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:02:59.907791 kernel: SELinux: policy capability open_perms=1
Feb 13 15:02:59.907802 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:02:59.907811 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:02:59.907821 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:02:59.907830 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:02:59.907839 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:02:59.907848 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:02:59.907859 kernel: audit: type=1403 audit(1739458979.246:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:02:59.907869 systemd[1]: Successfully loaded SELinux policy in 35.342ms.
Feb 13 15:02:59.907885 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.787ms.
Feb 13 15:02:59.907896 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 15:02:59.907922 systemd[1]: Detected virtualization kvm.
Feb 13 15:02:59.907934 systemd[1]: Detected architecture arm64.
Feb 13 15:02:59.907944 systemd[1]: Detected first boot.
Feb 13 15:02:59.907953 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:02:59.907964 zram_generator::config[1047]: No configuration found.
Feb 13 15:02:59.907976 kernel: NET: Registered PF_VSOCK protocol family
Feb 13 15:02:59.907985 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:02:59.907996 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Feb 13 15:02:59.908005 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:02:59.908015 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:02:59.908029 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:02:59.908045 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:02:59.908056 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:02:59.908068 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:02:59.908078 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:02:59.908088 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:02:59.908098 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:02:59.908115 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:02:59.908127 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:02:59.908137 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:02:59.908147 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:02:59.908158 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:02:59.908169 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:02:59.908180 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:02:59.908190 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:02:59.908200 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 15:02:59.908210 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:02:59.908220 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:02:59.908230 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:02:59.908242 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:02:59.908252 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:02:59.908262 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:02:59.908273 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:02:59.908283 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:02:59.908293 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:02:59.908306 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:02:59.908316 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:02:59.908326 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Feb 13 15:02:59.908337 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:02:59.908347 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:02:59.908357 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:02:59.908386 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:02:59.908396 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:02:59.908405 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:02:59.908416 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:02:59.908426 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:02:59.908436 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:02:59.908447 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:02:59.908458 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:02:59.908469 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:02:59.908479 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:02:59.908489 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:02:59.908499 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:02:59.908510 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:02:59.908521 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:02:59.908532 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:02:59.908544 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:02:59.908554 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:02:59.908565 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:02:59.908575 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:02:59.908585 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:02:59.908595 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:02:59.908605 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:02:59.908616 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:02:59.908628 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:02:59.908638 kernel: fuse: init (API version 7.39)
Feb 13 15:02:59.908648 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:02:59.908658 kernel: loop: module loaded
Feb 13 15:02:59.908668 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:02:59.908677 kernel: ACPI: bus type drm_connector registered
Feb 13 15:02:59.908687 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:02:59.908697 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:02:59.908707 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Feb 13 15:02:59.908719 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:02:59.908731 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:02:59.908741 systemd[1]: Stopped verity-setup.service.
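From the switch-root onward the journal restarts, but every console line keeps the same shape: a timestamp, an emitter (usually `name[pid]`, or `kernel`), and a message. A small sketch of a parser for lines in this shape (the field names are ours, and the pattern deliberately covers only those two common emitter forms):

```python
# Sketch: split journal console output like this log into
# (timestamp, source, message) tuples. Not part of the boot itself.
import re

ENTRY = re.compile(
    r"(?P<ts>[A-Z][a-z]{2} \d{2} \d{2}:\d{2}:\d{2}\.\d{6}) "  # "Feb 13 15:02:59.026718"
    r"(?P<src>[\w.-]+\[\d+\]|kernel): "                        # "systemd[1]" or "kernel"
    r"(?P<msg>.*)"
)

def parse(lines):
    for line in lines:
        m = ENTRY.match(line)
        if m:
            yield m.group("ts"), m.group("src"), m.group("msg")

for ts, src, msg in parse(["Feb 13 15:02:59.026718 systemd-journald[238]: Journal stopped"]):
    print(ts, src, msg)
```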
Feb 13 15:02:59.908777 systemd-journald[1122]: Collecting audit messages is disabled.
Feb 13 15:02:59.908811 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:02:59.908822 systemd-journald[1122]: Journal started
Feb 13 15:02:59.908842 systemd-journald[1122]: Runtime Journal (/run/log/journal/a2022ec0416b4103a3d4bc108347631a) is 5.9M, max 47.3M, 41.4M free.
Feb 13 15:02:59.692319 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:02:59.698829 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 15:02:59.699212 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:02:59.910851 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:02:59.912824 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:02:59.913559 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:02:59.914701 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:02:59.915947 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:02:59.917164 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:02:59.919948 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:02:59.921352 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:02:59.922859 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:02:59.923046 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:02:59.924543 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:02:59.924720 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:02:59.926158 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:02:59.926328 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:02:59.927783 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:02:59.927964 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:02:59.929420 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:02:59.929592 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:02:59.930952 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:02:59.931121 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:02:59.932663 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:02:59.934157 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:02:59.935726 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:02:59.938364 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Feb 13 15:02:59.951384 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:02:59.953445 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:02:59.966020 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:02:59.968116 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:02:59.969226 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:02:59.969286 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:02:59.971200 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Feb 13 15:02:59.973371 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:02:59.975431 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:02:59.976555 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:02:59.977876 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:02:59.980127 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:02:59.981336 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:02:59.983123 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:02:59.987124 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:02:59.990882 systemd-journald[1122]: Time spent on flushing to /var/log/journal/a2022ec0416b4103a3d4bc108347631a is 18.478ms for 873 entries.
Feb 13 15:02:59.990882 systemd-journald[1122]: System Journal (/var/log/journal/a2022ec0416b4103a3d4bc108347631a) is 8M, max 195.6M, 187.6M free.
Feb 13 15:03:00.017417 systemd-journald[1122]: Received client request to flush runtime journal.
Feb 13 15:03:00.017469 kernel: loop0: detected capacity change from 0 to 194096
Feb 13 15:02:59.992127 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:02:59.996171 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:03:00.000592 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:03:00.004803 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:03:00.010524 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:03:00.012578 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:03:00.015181 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:03:00.019739 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:03:00.021313 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:03:00.024598 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:03:00.029550 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:03:00.031972 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:03:00.037696 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Feb 13 15:03:00.037710 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Feb 13 15:03:00.040302 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Feb 13 15:03:00.041877 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:03:00.045620 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:03:00.047312 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 15:03:00.060411 kernel: loop1: detected capacity change from 0 to 113512
Feb 13 15:03:00.070578 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Feb 13 15:03:00.091944 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:03:00.095054 kernel: loop2: detected capacity change from 0 to 123192
Feb 13 15:03:00.102076 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:03:00.116033 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Feb 13 15:03:00.116049 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Feb 13 15:03:00.120341 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:03:00.125950 kernel: loop3: detected capacity change from 0 to 194096
Feb 13 15:03:00.133067 kernel: loop4: detected capacity change from 0 to 113512
Feb 13 15:03:00.140003 kernel: loop5: detected capacity change from 0 to 123192
Feb 13 15:03:00.146972 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 15:03:00.147399 (sd-merge)[1192]: Merged extensions into '/usr'.
Feb 13 15:03:00.151082 systemd[1]: Reload requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:03:00.151109 systemd[1]: Reloading...
Feb 13 15:03:00.201938 zram_generator::config[1216]: No configuration found.
Feb 13 15:03:00.284495 ldconfig[1161]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:03:00.301188 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:03:00.350460 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:03:00.350646 systemd[1]: Reloading finished in 199 ms.
Feb 13 15:03:00.368131 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:03:00.369612 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:03:00.387223 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:03:00.389010 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:03:00.401974 systemd[1]: Reload requested from client PID 1254 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:03:00.401988 systemd[1]: Reloading...
Feb 13 15:03:00.406047 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:03:00.406586 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:03:00.407373 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:03:00.407680 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
Feb 13 15:03:00.407798 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
Feb 13 15:03:00.417421 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:03:00.417520 systemd-tmpfiles[1255]: Skipping /boot
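The `(sd-merge)` lines above record systemd-sysext overlaying the extension images Ignition dropped earlier (containerd, docker, kubernetes) into /usr. Flatcar performs this merge automatically during boot; as a sketch of doing the equivalent by hand, using the real `systemd-sysext` tool and the .raw path from the log (the copy step is ours; on this machine Ignition symlinked the image instead):

```python
# Sketch: activate a system extension the way the (sd-merge) lines describe.
# Uses the real `systemd-sysext` CLI; must run as root.
import shutil
import subprocess

raw = "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"  # path from the log
shutil.copy(raw, "/etc/extensions/kubernetes.raw")               # or symlink, as Ignition did

subprocess.run(["systemd-sysext", "refresh"], check=True)  # (re)merge images under /etc/extensions
subprocess.run(["systemd-sysext", "status"], check=True)   # show what is merged into /usr and /opt
```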
Feb 13 15:03:00.425852 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:03:00.425977 systemd-tmpfiles[1255]: Skipping /boot
Feb 13 15:03:00.447977 zram_generator::config[1284]: No configuration found.
Feb 13 15:03:00.532971 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:03:00.582424 systemd[1]: Reloading finished in 180 ms.
Feb 13 15:03:00.594449 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:03:00.607071 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:03:00.614481 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:03:00.617011 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:03:00.619308 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:03:00.624277 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:03:00.627238 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:03:00.634639 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:03:00.638094 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:03:00.643456 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:03:00.647344 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:03:00.654782 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:03:00.656039 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:03:00.656160 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:03:00.657188 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:03:00.657342 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:03:00.660343 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:03:00.660481 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:03:00.662185 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:03:00.663880 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:03:00.664135 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:03:00.669557 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:03:00.674783 systemd-udevd[1330]: Using default interface naming scheme 'v255'.
Feb 13 15:03:00.675856 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:03:00.681708 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:03:00.694114 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:03:00.698165 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:03:00.701924 augenrules[1367]: No rules
Feb 13 15:03:00.702266 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:03:00.703520 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:03:00.703634 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:03:00.707207 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:03:00.713215 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:03:00.715961 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:03:00.721717 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:03:00.725569 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:03:00.725751 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:03:00.727181 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:03:00.727324 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:03:00.732267 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:03:00.733494 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:03:00.736338 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:03:00.736498 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:03:00.739471 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:03:00.758541 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:03:00.763860 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Feb 13 15:03:00.773939 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1373)
Feb 13 15:03:00.781089 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:03:00.783180 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:03:00.784348 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:03:00.787085 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:03:00.794089 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:03:00.798062 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:03:00.800188 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:03:00.800233 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:03:00.802156 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:03:00.807209 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 15:03:00.808284 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:03:00.808632 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:03:00.810935 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:03:00.811091 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:03:00.814306 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:03:00.814455 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:03:00.815849 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:03:00.816049 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:03:00.819485 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:03:00.819635 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:03:00.821547 augenrules[1395]: /sbin/augenrules: No change
Feb 13 15:03:00.828618 augenrules[1428]: No rules
Feb 13 15:03:00.830447 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:03:00.832042 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:03:00.837887 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:03:00.851115 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:03:00.853002 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:03:00.853063 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:03:00.858708 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:03:00.874332 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:03:00.878143 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:03:00.888249 systemd-resolved[1324]: Positive Trust Anchors:
Feb 13 15:03:00.888267 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:03:00.888299 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:03:00.891160 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:03:00.896415 systemd-resolved[1324]: Defaulting to hostname 'linux'.
Feb 13 15:03:00.899803 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:03:00.901158 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 15:03:00.902441 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:03:00.903571 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:03:00.908538 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:03:00.914335 systemd-networkd[1411]: lo: Link UP
Feb 13 15:03:00.914343 systemd-networkd[1411]: lo: Gained carrier
Feb 13 15:03:00.916928 systemd-networkd[1411]: Enumeration completed
Feb 13 15:03:00.917047 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:03:00.917337 systemd-networkd[1411]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:03:00.917341 systemd-networkd[1411]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:03:00.917838 systemd-networkd[1411]: eth0: Link UP
Feb 13 15:03:00.917842 systemd-networkd[1411]: eth0: Gained carrier
Feb 13 15:03:00.917854 systemd-networkd[1411]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:03:00.918403 systemd[1]: Reached target network.target - Network.
Feb 13 15:03:00.926059 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Feb 13 15:03:00.928218 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:03:00.931984 systemd-networkd[1411]: eth0: DHCPv4 address 10.0.0.8/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:03:00.932437 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:03:00.933149 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection.
Feb 13 15:03:01.357457 systemd-timesyncd[1412]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 15:03:01.357500 systemd-timesyncd[1412]: Initial clock synchronization to Thu 2025-02-13 15:03:01.357373 UTC.
Feb 13 15:03:01.359087 systemd-resolved[1324]: Clock change detected. Flushing caches.
Feb 13 15:03:01.359902 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:03:01.361793 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:03:01.362947 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:03:01.364096 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:03:01.365426 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:03:01.366929 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:03:01.368075 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:03:01.369297 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:03:01.370491 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:03:01.370532 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:03:01.371404 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:03:01.373278 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:03:01.375674 systemd[1]: Starting docker.socket - Docker Socket for the API...
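In the networkd lines above, eth0 was matched by Flatcar's catch-all /usr/lib/systemd/network/zz-default.network and obtained 10.0.0.8/16 via DHCPv4. A minimal systemd.network unit with the same effect, written from Python purely for illustration (the 20-wired.network file name is ours, not from the log):

```python
# Sketch: a minimal systemd.network unit doing what zz-default.network
# did for eth0 above (DHCP on the matched interface). Run as root.
from pathlib import Path

unit = """\
[Match]
Name=eth0

[Network]
DHCP=yes
"""

Path("/etc/systemd/network/20-wired.network").write_text(unit)
# Then apply it with `networkctl reload` (or by restarting systemd-networkd).
```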
Feb 13 15:03:01.378773 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Feb 13 15:03:01.380139 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Feb 13 15:03:01.381395 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Feb 13 15:03:01.387169 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:03:01.388584 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Feb 13 15:03:01.390771 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:03:01.392536 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Feb 13 15:03:01.393949 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:03:01.397064 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:03:01.397442 lvm[1455]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:03:01.398049 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:03:01.399051 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:03:01.399088 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:03:01.400173 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 15:03:01.402182 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 15:03:01.404721 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 15:03:01.406603 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 15:03:01.407641 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 15:03:01.411107 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 15:03:01.412214 jq[1459]: false
Feb 13 15:03:01.416434 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 15:03:01.419875 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 15:03:01.422466 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 15:03:01.424077 dbus-daemon[1458]: [system] SELinux support is enabled
Feb 13 15:03:01.426239 extend-filesystems[1460]: Found loop3
Feb 13 15:03:01.426239 extend-filesystems[1460]: Found loop4
Feb 13 15:03:01.426239 extend-filesystems[1460]: Found loop5
Feb 13 15:03:01.426239 extend-filesystems[1460]: Found vda
Feb 13 15:03:01.426239 extend-filesystems[1460]: Found vda1
Feb 13 15:03:01.426239 extend-filesystems[1460]: Found vda2
Feb 13 15:03:01.426239 extend-filesystems[1460]: Found vda3
Feb 13 15:03:01.426239 extend-filesystems[1460]: Found usr
Feb 13 15:03:01.426239 extend-filesystems[1460]: Found vda4
Feb 13 15:03:01.426239 extend-filesystems[1460]: Found vda6
Feb 13 15:03:01.426239 extend-filesystems[1460]: Found vda7
Feb 13 15:03:01.426239 extend-filesystems[1460]: Found vda9
Feb 13 15:03:01.426239 extend-filesystems[1460]: Checking size of /dev/vda9
Feb 13 15:03:01.428950 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 15:03:01.431262 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 15:03:01.431756 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 15:03:01.433541 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 15:03:01.436460 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 15:03:01.438126 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 15:03:01.442049 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:03:01.443156 extend-filesystems[1460]: Resized partition /dev/vda9
Feb 13 15:03:01.444301 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 15:03:01.446358 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 15:03:01.446644 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 15:03:01.446799 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 15:03:01.449804 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 15:03:01.452450 extend-filesystems[1482]: resize2fs 1.47.1 (20-May-2024)
Feb 13 15:03:01.449987 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 15:03:01.458341 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 13 15:03:01.459007 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 15:03:01.459059 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 15:03:01.461726 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 15:03:01.461756 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 15:03:01.465358 (ntainerd)[1487]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 15:03:01.473510 jq[1476]: true
Feb 13 15:03:01.486030 update_engine[1474]: I20250213 15:03:01.485857 1474 main.cc:92] Flatcar Update Engine starting
Feb 13 15:03:01.489921 tar[1483]: linux-arm64/helm
Feb 13 15:03:01.494124 update_engine[1474]: I20250213 15:03:01.494082 1474 update_check_scheduler.cc:74] Next update check in 4m23s
Feb 13 15:03:01.495195 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 15:03:01.497927 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 15:03:01.500327 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1357)
Feb 13 15:03:01.500358 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 13 15:03:01.500371 jq[1494]: true
Feb 13 15:03:01.517496 extend-filesystems[1482]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 13 15:03:01.517496 extend-filesystems[1482]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 15:03:01.517496 extend-filesystems[1482]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 13 15:03:01.525697 extend-filesystems[1460]: Resized filesystem in /dev/vda9
Feb 13 15:03:01.522001 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 15:03:01.522109 systemd-logind[1472]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 13 15:03:01.522213 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 15:03:01.522895 systemd-logind[1472]: New seat seat0.
Feb 13 15:03:01.528093 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 15:03:01.573492 locksmithd[1495]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 15:03:01.578011 bash[1515]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:03:01.580006 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 15:03:01.581885 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
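The extend-filesystems run above grew the root ext4 online, while mounted on /, from 553472 to 1864699 4 KiB blocks, i.e. from roughly 2.1 GiB to roughly 7.1 GiB. A sketch of the same grow step using the real resize2fs CLI (run as root; with no size argument resize2fs expands the filesystem to fill its already-enlarged device):

```python
# Sketch: an online ext4 grow like the extend-filesystems run above.
import subprocess

dev = "/dev/vda9"  # root filesystem device, as in the log
subprocess.run(["resize2fs", dev], check=True)  # grow to device size, online
# Confirm the new size of the mounted root.
print(subprocess.run(["findmnt", "-no", "SIZE", "/"],
                     capture_output=True, text=True).stdout.strip())
```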
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:03:01.729950 containerd[1487]: time="2025-02-13T15:03:01.729920105Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:03:01.730008 containerd[1487]: time="2025-02-13T15:03:01.729990665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:03:01.730188 containerd[1487]: time="2025-02-13T15:03:01.730169465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:03:01.730307 containerd[1487]: time="2025-02-13T15:03:01.730289065Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:03:01.730307 containerd[1487]: time="2025-02-13T15:03:01.730305545Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:03:01.730413 containerd[1487]: time="2025-02-13T15:03:01.730394505Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:03:01.730454 containerd[1487]: time="2025-02-13T15:03:01.730440825Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:03:01.733436 containerd[1487]: time="2025-02-13T15:03:01.733410585Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:03:01.733481 containerd[1487]: time="2025-02-13T15:03:01.733453865Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:03:01.733481 containerd[1487]: time="2025-02-13T15:03:01.733469385Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:03:01.733539 containerd[1487]: time="2025-02-13T15:03:01.733485665Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:03:01.733539 containerd[1487]: time="2025-02-13T15:03:01.733499105Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:03:01.733660 containerd[1487]: time="2025-02-13T15:03:01.733636185Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:03:01.733866 containerd[1487]: time="2025-02-13T15:03:01.733849145Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:03:01.733959 containerd[1487]: time="2025-02-13T15:03:01.733943305Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:03:01.733988 containerd[1487]: time="2025-02-13T15:03:01.733961745Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:03:01.733988 containerd[1487]: time="2025-02-13T15:03:01.733977385Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:03:01.734030 containerd[1487]: time="2025-02-13T15:03:01.733990785Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Feb 13 15:03:01.734030 containerd[1487]: time="2025-02-13T15:03:01.734004305Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:03:01.734030 containerd[1487]: time="2025-02-13T15:03:01.734016225Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:03:01.734076 containerd[1487]: time="2025-02-13T15:03:01.734030305Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:03:01.734076 containerd[1487]: time="2025-02-13T15:03:01.734048225Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:03:01.734076 containerd[1487]: time="2025-02-13T15:03:01.734060985Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:03:01.734076 containerd[1487]: time="2025-02-13T15:03:01.734071665Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:03:01.734140 containerd[1487]: time="2025-02-13T15:03:01.734082545Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:03:01.734140 containerd[1487]: time="2025-02-13T15:03:01.734103025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:03:01.734140 containerd[1487]: time="2025-02-13T15:03:01.734115385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:03:01.734140 containerd[1487]: time="2025-02-13T15:03:01.734127185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:03:01.734140 containerd[1487]: time="2025-02-13T15:03:01.734138145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:03:01.734222 containerd[1487]: time="2025-02-13T15:03:01.734150705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:03:01.734222 containerd[1487]: time="2025-02-13T15:03:01.734164185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:03:01.734222 containerd[1487]: time="2025-02-13T15:03:01.734174865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:03:01.734222 containerd[1487]: time="2025-02-13T15:03:01.734187705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:03:01.734222 containerd[1487]: time="2025-02-13T15:03:01.734199585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:03:01.734222 containerd[1487]: time="2025-02-13T15:03:01.734213305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:03:01.734316 containerd[1487]: time="2025-02-13T15:03:01.734224505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:03:01.734316 containerd[1487]: time="2025-02-13T15:03:01.734235985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Feb 13 15:03:01.734316 containerd[1487]: time="2025-02-13T15:03:01.734247385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:03:01.734316 containerd[1487]: time="2025-02-13T15:03:01.734260585Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:03:01.734316 containerd[1487]: time="2025-02-13T15:03:01.734279105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:03:01.734316 containerd[1487]: time="2025-02-13T15:03:01.734291785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:03:01.734316 containerd[1487]: time="2025-02-13T15:03:01.734301825Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:03:01.736218 containerd[1487]: time="2025-02-13T15:03:01.734490545Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:03:01.736218 containerd[1487]: time="2025-02-13T15:03:01.734509705Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:03:01.736218 containerd[1487]: time="2025-02-13T15:03:01.734526985Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:03:01.736218 containerd[1487]: time="2025-02-13T15:03:01.734539585Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:03:01.736218 containerd[1487]: time="2025-02-13T15:03:01.734550305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:03:01.736218 containerd[1487]: time="2025-02-13T15:03:01.734561625Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:03:01.736218 containerd[1487]: time="2025-02-13T15:03:01.734570465Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:03:01.736218 containerd[1487]: time="2025-02-13T15:03:01.734579865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:03:01.736391 containerd[1487]: time="2025-02-13T15:03:01.734894865Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:03:01.736391 containerd[1487]: time="2025-02-13T15:03:01.734937665Z" level=info msg="Connect containerd service" Feb 13 15:03:01.736391 containerd[1487]: time="2025-02-13T15:03:01.734962105Z" level=info msg="using legacy CRI server" Feb 13 15:03:01.736391 containerd[1487]: time="2025-02-13T15:03:01.734968665Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:03:01.736391 containerd[1487]: time="2025-02-13T15:03:01.735180865Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:03:01.736391 containerd[1487]: time="2025-02-13T15:03:01.735782945Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:03:01.736391 
containerd[1487]: time="2025-02-13T15:03:01.735936225Z" level=info msg="Start subscribing containerd event" Feb 13 15:03:01.736391 containerd[1487]: time="2025-02-13T15:03:01.735987145Z" level=info msg="Start recovering state" Feb 13 15:03:01.736391 containerd[1487]: time="2025-02-13T15:03:01.736046985Z" level=info msg="Start event monitor" Feb 13 15:03:01.736391 containerd[1487]: time="2025-02-13T15:03:01.736056545Z" level=info msg="Start snapshots syncer" Feb 13 15:03:01.736391 containerd[1487]: time="2025-02-13T15:03:01.736065225Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:03:01.736391 containerd[1487]: time="2025-02-13T15:03:01.736072745Z" level=info msg="Start streaming server" Feb 13 15:03:01.736391 containerd[1487]: time="2025-02-13T15:03:01.736221665Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:03:01.736391 containerd[1487]: time="2025-02-13T15:03:01.736256585Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:03:01.736391 containerd[1487]: time="2025-02-13T15:03:01.736305145Z" level=info msg="containerd successfully booted in 0.038426s" Feb 13 15:03:01.736427 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:03:01.857153 tar[1483]: linux-arm64/LICENSE Feb 13 15:03:01.857153 tar[1483]: linux-arm64/README.md Feb 13 15:03:01.875366 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:03:01.962164 sshd_keygen[1480]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:03:01.981414 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:03:01.988767 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:03:01.993587 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:03:01.993772 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:03:01.996401 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:03:02.009411 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:03:02.026686 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:03:02.028828 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 15:03:02.030097 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:03:02.409484 systemd-networkd[1411]: eth0: Gained IPv6LL Feb 13 15:03:02.411829 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:03:02.413641 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:03:02.429562 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:03:02.431963 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:03:02.433972 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:03:02.446368 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:03:02.446735 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:03:02.448677 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:03:02.453709 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:03:02.905063 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:03:02.906804 systemd[1]: Reached target multi-user.target - Multi-User System. 
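The cni config load failure recorded during containerd startup above is the expected first-boot state: the CRI plugin's NetworkPluginConfDir (/etc/cni/net.d, per the dumped config) is still empty because no network add-on has installed anything yet, and the "Start cni network conf syncer" line shows containerd will pick a config up as soon as one appears. Purely as a hedged sketch of what would satisfy the loader, a minimal bridge conflist might look like the following; the file name, network name, bridge device, and subnet are invented for illustration, and a real cluster's CNI add-on would write its own:

    # hypothetical stand-in; a CNI add-on normally installs this file itself
    cat <<'EOF' | sudo tee /etc/cni/net.d/10-example.conflist
    {
      "cniVersion": "1.0.0",
      "name": "examplenet",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        }
      ]
    }
    EOF

Because the conf syncer watches the directory, no containerd restart should be needed for the plugin status to flip to initialized.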
Feb 13 15:03:02.908411 (kubelet)[1571]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:03:02.911506 systemd[1]: Startup finished in 539ms (kernel) + 5.549s (initrd) + 3.276s (userspace) = 9.365s. Feb 13 15:03:03.351447 kubelet[1571]: E0213 15:03:03.351379 1571 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:03:03.354148 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:03:03.354288 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:03:03.354767 systemd[1]: kubelet.service: Consumed 796ms CPU time, 240.4M memory peak. Feb 13 15:03:06.793899 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:03:06.795040 systemd[1]: Started sshd@0-10.0.0.8:22-10.0.0.1:39058.service - OpenSSH per-connection server daemon (10.0.0.1:39058). Feb 13 15:03:06.869604 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 39058 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE Feb 13 15:03:06.871456 sshd-session[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:03:06.880925 systemd-logind[1472]: New session 1 of user core. Feb 13 15:03:06.881854 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:03:06.890535 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:03:06.899387 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:03:06.901265 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:03:06.907134 (systemd)[1589]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:03:06.909111 systemd-logind[1472]: New session c1 of user core. Feb 13 15:03:07.003384 systemd[1589]: Queued start job for default target default.target. Feb 13 15:03:07.012260 systemd[1589]: Created slice app.slice - User Application Slice. Feb 13 15:03:07.012423 systemd[1589]: Reached target paths.target - Paths. Feb 13 15:03:07.012469 systemd[1589]: Reached target timers.target - Timers. Feb 13 15:03:07.013750 systemd[1589]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:03:07.022094 systemd[1589]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:03:07.022144 systemd[1589]: Reached target sockets.target - Sockets. Feb 13 15:03:07.022177 systemd[1589]: Reached target basic.target - Basic System. Feb 13 15:03:07.022210 systemd[1589]: Reached target default.target - Main User Target. Feb 13 15:03:07.022231 systemd[1589]: Startup finished in 108ms. Feb 13 15:03:07.022442 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:03:07.023738 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:03:07.094753 systemd[1]: Started sshd@1-10.0.0.8:22-10.0.0.1:39074.service - OpenSSH per-connection server daemon (10.0.0.1:39074). 
Feb 13 15:03:07.133559 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 39074 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE Feb 13 15:03:07.134716 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:03:07.138494 systemd-logind[1472]: New session 2 of user core. Feb 13 15:03:07.150507 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:03:07.200155 sshd[1602]: Connection closed by 10.0.0.1 port 39074 Feb 13 15:03:07.200048 sshd-session[1600]: pam_unix(sshd:session): session closed for user core Feb 13 15:03:07.218221 systemd[1]: sshd@1-10.0.0.8:22-10.0.0.1:39074.service: Deactivated successfully. Feb 13 15:03:07.221438 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:03:07.222027 systemd-logind[1472]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:03:07.235626 systemd[1]: Started sshd@2-10.0.0.8:22-10.0.0.1:39076.service - OpenSSH per-connection server daemon (10.0.0.1:39076). Feb 13 15:03:07.236586 systemd-logind[1472]: Removed session 2. Feb 13 15:03:07.271531 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 39076 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE Feb 13 15:03:07.272576 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:03:07.276379 systemd-logind[1472]: New session 3 of user core. Feb 13 15:03:07.286450 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:03:07.332703 sshd[1610]: Connection closed by 10.0.0.1 port 39076 Feb 13 15:03:07.333077 sshd-session[1607]: pam_unix(sshd:session): session closed for user core Feb 13 15:03:07.343132 systemd[1]: sshd@2-10.0.0.8:22-10.0.0.1:39076.service: Deactivated successfully. Feb 13 15:03:07.344453 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:03:07.346272 systemd-logind[1472]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:03:07.347552 systemd[1]: Started sshd@3-10.0.0.8:22-10.0.0.1:39086.service - OpenSSH per-connection server daemon (10.0.0.1:39086). Feb 13 15:03:07.348198 systemd-logind[1472]: Removed session 3. Feb 13 15:03:07.386998 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 39086 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE Feb 13 15:03:07.388066 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:03:07.392273 systemd-logind[1472]: New session 4 of user core. Feb 13 15:03:07.403470 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:03:07.454087 sshd[1618]: Connection closed by 10.0.0.1 port 39086 Feb 13 15:03:07.454740 sshd-session[1615]: pam_unix(sshd:session): session closed for user core Feb 13 15:03:07.467219 systemd[1]: sshd@3-10.0.0.8:22-10.0.0.1:39086.service: Deactivated successfully. Feb 13 15:03:07.468582 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:03:07.470450 systemd-logind[1472]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:03:07.483676 systemd[1]: Started sshd@4-10.0.0.8:22-10.0.0.1:39092.service - OpenSSH per-connection server daemon (10.0.0.1:39092). Feb 13 15:03:07.484674 systemd-logind[1472]: Removed session 4. 
Feb 13 15:03:07.519109 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 39092 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE Feb 13 15:03:07.520225 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:03:07.524278 systemd-logind[1472]: New session 5 of user core. Feb 13 15:03:07.531462 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:03:07.592873 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:03:07.593155 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:03:07.610150 sudo[1627]: pam_unix(sudo:session): session closed for user root Feb 13 15:03:07.611851 sshd[1626]: Connection closed by 10.0.0.1 port 39092 Feb 13 15:03:07.612359 sshd-session[1623]: pam_unix(sshd:session): session closed for user core Feb 13 15:03:07.625381 systemd[1]: sshd@4-10.0.0.8:22-10.0.0.1:39092.service: Deactivated successfully. Feb 13 15:03:07.626864 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:03:07.629470 systemd-logind[1472]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:03:07.640581 systemd[1]: Started sshd@5-10.0.0.8:22-10.0.0.1:39102.service - OpenSSH per-connection server daemon (10.0.0.1:39102). Feb 13 15:03:07.641940 systemd-logind[1472]: Removed session 5. Feb 13 15:03:07.676974 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 39102 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE Feb 13 15:03:07.678194 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:03:07.682423 systemd-logind[1472]: New session 6 of user core. Feb 13 15:03:07.694544 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:03:07.745142 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:03:07.745445 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:03:07.748297 sudo[1637]: pam_unix(sudo:session): session closed for user root Feb 13 15:03:07.752533 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:03:07.752782 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:03:07.772653 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:03:07.794273 augenrules[1659]: No rules Feb 13 15:03:07.795711 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:03:07.797372 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:03:07.798277 sudo[1636]: pam_unix(sudo:session): session closed for user root Feb 13 15:03:07.799393 sshd[1635]: Connection closed by 10.0.0.1 port 39102 Feb 13 15:03:07.799764 sshd-session[1632]: pam_unix(sshd:session): session closed for user core Feb 13 15:03:07.811243 systemd[1]: sshd@5-10.0.0.8:22-10.0.0.1:39102.service: Deactivated successfully. Feb 13 15:03:07.812668 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:03:07.813969 systemd-logind[1472]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:03:07.828604 systemd[1]: Started sshd@6-10.0.0.8:22-10.0.0.1:39106.service - OpenSSH per-connection server daemon (10.0.0.1:39106). Feb 13 15:03:07.829523 systemd-logind[1472]: Removed session 6. 
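The augenrules "No rules" message above follows directly from the two sudo invocations before it: the 80-selinux.rules and 99-default.rules files were removed from /etc/audit/rules.d, so when audit-rules was restarted, augenrules assembled an empty rule set. The resulting state can be checked with the stock audit userspace tools (a sketch; standard auditctl/augenrules usage, nothing host-specific assumed):

    # list the audit rules currently loaded in the kernel (expected here: "No rules")
    sudo auditctl -l
    # rebuild /etc/audit/audit.rules from /etc/audit/rules.d/*.rules and load it
    sudo augenrules --load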
Feb 13 15:03:07.865307 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 39106 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE Feb 13 15:03:07.866528 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:03:07.870616 systemd-logind[1472]: New session 7 of user core. Feb 13 15:03:07.881582 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:03:07.931928 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:03:07.932206 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:03:08.259567 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:03:08.259646 (dockerd)[1691]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:03:08.495619 dockerd[1691]: time="2025-02-13T15:03:08.495558705Z" level=info msg="Starting up" Feb 13 15:03:08.641215 dockerd[1691]: time="2025-02-13T15:03:08.641159225Z" level=info msg="Loading containers: start." Feb 13 15:03:08.786364 kernel: Initializing XFRM netlink socket Feb 13 15:03:08.859685 systemd-networkd[1411]: docker0: Link UP Feb 13 15:03:08.889573 dockerd[1691]: time="2025-02-13T15:03:08.889471865Z" level=info msg="Loading containers: done." Feb 13 15:03:08.906799 dockerd[1691]: time="2025-02-13T15:03:08.906696545Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:03:08.906935 dockerd[1691]: time="2025-02-13T15:03:08.906786785Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 15:03:08.907118 dockerd[1691]: time="2025-02-13T15:03:08.906961945Z" level=info msg="Daemon has completed initialization" Feb 13 15:03:08.932902 dockerd[1691]: time="2025-02-13T15:03:08.932778305Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:03:08.932938 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:03:09.650581 containerd[1487]: time="2025-02-13T15:03:09.650513185Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 15:03:10.414353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1193961647.mount: Deactivated successfully. 
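The PullImage line above kicks off the series of control-plane image pulls that follows. The same pull can be reproduced by hand through containerd's default socket (a sketch; the image tag is copied from the log, and k8s.io is the image namespace the CRI plugin uses):

    # pull into the k8s.io namespace, where CRI-managed images live
    sudo ctr --namespace k8s.io images pull registry.k8s.io/kube-apiserver:v1.30.10
    # confirm the image landed
    sudo ctr --namespace k8s.io images ls | grep kube-apiserver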
Feb 13 15:03:11.562159 containerd[1487]: time="2025-02-13T15:03:11.562113425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:03:11.563117 containerd[1487]: time="2025-02-13T15:03:11.562930985Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865209" Feb 13 15:03:11.563960 containerd[1487]: time="2025-02-13T15:03:11.563720945Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:03:11.566857 containerd[1487]: time="2025-02-13T15:03:11.566815385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:03:11.567917 containerd[1487]: time="2025-02-13T15:03:11.567866945Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 1.917311s" Feb 13 15:03:11.567917 containerd[1487]: time="2025-02-13T15:03:11.567905225Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 15:03:11.586205 containerd[1487]: time="2025-02-13T15:03:11.586173465Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 15:03:13.061904 containerd[1487]: time="2025-02-13T15:03:13.061826345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:03:13.062291 containerd[1487]: time="2025-02-13T15:03:13.062230545Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898596" Feb 13 15:03:13.063216 containerd[1487]: time="2025-02-13T15:03:13.063177225Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:03:13.066695 containerd[1487]: time="2025-02-13T15:03:13.066657145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:03:13.068412 containerd[1487]: time="2025-02-13T15:03:13.068380705Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 1.4821714s" Feb 13 15:03:13.068445 containerd[1487]: time="2025-02-13T15:03:13.068411865Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 
15:03:13.087158 containerd[1487]: time="2025-02-13T15:03:13.087120905Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 15:03:13.595681 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:03:13.605505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:03:13.692184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:03:13.695307 (kubelet)[1973]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:03:13.732466 kubelet[1973]: E0213 15:03:13.732415 1973 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:03:13.735555 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:03:13.735736 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:03:13.736031 systemd[1]: kubelet.service: Consumed 125ms CPU time, 97.5M memory peak. Feb 13 15:03:14.263994 containerd[1487]: time="2025-02-13T15:03:14.263946665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:03:14.265027 containerd[1487]: time="2025-02-13T15:03:14.264768705Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164936" Feb 13 15:03:14.265736 containerd[1487]: time="2025-02-13T15:03:14.265697745Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:03:14.269416 containerd[1487]: time="2025-02-13T15:03:14.269378305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:03:14.271145 containerd[1487]: time="2025-02-13T15:03:14.271020185Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.18386796s" Feb 13 15:03:14.271145 containerd[1487]: time="2025-02-13T15:03:14.271052345Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 15:03:14.290570 containerd[1487]: time="2025-02-13T15:03:14.290535585Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 15:03:15.380471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1910865843.mount: Deactivated successfully. 
Feb 13 15:03:15.571204 containerd[1487]: time="2025-02-13T15:03:15.571155745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:03:15.571861 containerd[1487]: time="2025-02-13T15:03:15.571823825Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663372" Feb 13 15:03:15.572642 containerd[1487]: time="2025-02-13T15:03:15.572597105Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:03:15.575016 containerd[1487]: time="2025-02-13T15:03:15.574958305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:03:15.575627 containerd[1487]: time="2025-02-13T15:03:15.575545545Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.28496812s" Feb 13 15:03:15.575627 containerd[1487]: time="2025-02-13T15:03:15.575579385Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 15:03:15.593400 containerd[1487]: time="2025-02-13T15:03:15.593366705Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:03:16.226215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1608529949.mount: Deactivated successfully. 
Feb 13 15:03:17.004554 containerd[1487]: time="2025-02-13T15:03:17.004493665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:03:17.005121 containerd[1487]: time="2025-02-13T15:03:17.005075905Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 15:03:17.005922 containerd[1487]: time="2025-02-13T15:03:17.005872185Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:03:17.009598 containerd[1487]: time="2025-02-13T15:03:17.009547785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:03:17.010290 containerd[1487]: time="2025-02-13T15:03:17.010260745Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.4168634s" Feb 13 15:03:17.010356 containerd[1487]: time="2025-02-13T15:03:17.010289145Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 15:03:17.028557 containerd[1487]: time="2025-02-13T15:03:17.028523185Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:03:17.451743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount157104469.mount: Deactivated successfully. 
Feb 13 15:03:17.456356 containerd[1487]: time="2025-02-13T15:03:17.456069625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:03:17.457182 containerd[1487]: time="2025-02-13T15:03:17.457131185Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Feb 13 15:03:17.458875 containerd[1487]: time="2025-02-13T15:03:17.458827465Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:03:17.461014 containerd[1487]: time="2025-02-13T15:03:17.460944345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:03:17.461969 containerd[1487]: time="2025-02-13T15:03:17.461808305Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 433.24804ms" Feb 13 15:03:17.461969 containerd[1487]: time="2025-02-13T15:03:17.461833625Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 15:03:17.479679 containerd[1487]: time="2025-02-13T15:03:17.479653665Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 15:03:18.623605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1722311435.mount: Deactivated successfully. Feb 13 15:03:20.410933 containerd[1487]: time="2025-02-13T15:03:20.410881025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:03:20.411417 containerd[1487]: time="2025-02-13T15:03:20.411371425Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Feb 13 15:03:20.412286 containerd[1487]: time="2025-02-13T15:03:20.412261385Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:03:20.415372 containerd[1487]: time="2025-02-13T15:03:20.415345465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:03:20.416602 containerd[1487]: time="2025-02-13T15:03:20.416573385Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.93670928s" Feb 13 15:03:20.416645 containerd[1487]: time="2025-02-13T15:03:20.416602985Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 15:03:23.845790 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
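The restart counter above keeps climbing because every kubelet start so far has died on the same missing file, /var/lib/kubelet/config.yaml. That file is normally generated by kubeadm during init/join, so this crash loop is the expected pre-bootstrap state rather than a packaging fault. Purely to illustrate the file's shape, a minimal hand-written stand-in might look like this (field values are hypothetical, though cgroupDriver: systemd matches the SystemdCgroup:true runc option in the containerd config dumped earlier, and staticPodPath matches the manifest path visible later in this log):

    # hypothetical minimal KubeletConfiguration; kubeadm writes the real one
    sudo tee /var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    EOF

Once the real file exists, the same unit restarts cleanly, which is what the entries below show when kubelet 2304 comes up and begins syncing.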
Feb 13 15:03:23.855684 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:03:23.938087 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:03:23.941431 (kubelet)[2197]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:03:23.979107 kubelet[2197]: E0213 15:03:23.979027 2197 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:03:23.980934 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:03:23.981060 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:03:23.981305 systemd[1]: kubelet.service: Consumed 120ms CPU time, 95.5M memory peak. Feb 13 15:03:24.056653 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:03:24.056791 systemd[1]: kubelet.service: Consumed 120ms CPU time, 95.5M memory peak. Feb 13 15:03:24.067584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:03:24.081673 systemd[1]: Reload requested from client PID 2213 ('systemctl') (unit session-7.scope)... Feb 13 15:03:24.081688 systemd[1]: Reloading... Feb 13 15:03:24.143408 zram_generator::config[2257]: No configuration found. Feb 13 15:03:24.254844 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:03:24.324550 systemd[1]: Reloading finished in 242 ms. Feb 13 15:03:24.360162 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:03:24.362844 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:03:24.363450 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:03:24.363640 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:03:24.363676 systemd[1]: kubelet.service: Consumed 74ms CPU time, 82.4M memory peak. Feb 13 15:03:24.364998 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:03:24.448545 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:03:24.452217 (kubelet)[2304]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:03:24.489624 kubelet[2304]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:03:24.489624 kubelet[2304]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:03:24.489624 kubelet[2304]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:03:24.489898 kubelet[2304]: I0213 15:03:24.489688 2304 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:03:25.172329 kubelet[2304]: I0213 15:03:25.172294 2304 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:03:25.172420 kubelet[2304]: I0213 15:03:25.172340 2304 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:03:25.173699 kubelet[2304]: I0213 15:03:25.173670 2304 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:03:25.203151 kubelet[2304]: I0213 15:03:25.203128 2304 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:03:25.203221 kubelet[2304]: E0213 15:03:25.203152 2304 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.8:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 15:03:25.210717 kubelet[2304]: I0213 15:03:25.210687 2304 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:03:25.211058 kubelet[2304]: I0213 15:03:25.211035 2304 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:03:25.211198 kubelet[2304]: I0213 15:03:25.211060 2304 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:03:25.211281 kubelet[2304]: I0213 15:03:25.211264 2304 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:03:25.211281 kubelet[2304]: I0213 15:03:25.211274 2304 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:03:25.211478 kubelet[2304]: I0213 15:03:25.211465 2304 state_mem.go:36] "Initialized new in-memory state store" Feb 13 
15:03:25.213358 kubelet[2304]: I0213 15:03:25.212450 2304 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:03:25.213358 kubelet[2304]: I0213 15:03:25.212478 2304 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:03:25.213358 kubelet[2304]: I0213 15:03:25.212607 2304 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:03:25.213358 kubelet[2304]: I0213 15:03:25.212816 2304 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:03:25.213967 kubelet[2304]: W0213 15:03:25.213828 2304 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 15:03:25.213967 kubelet[2304]: E0213 15:03:25.213876 2304 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 15:03:25.213967 kubelet[2304]: W0213 15:03:25.213923 2304 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.8:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 15:03:25.213967 kubelet[2304]: E0213 15:03:25.213946 2304 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.8:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 15:03:25.216355 kubelet[2304]: I0213 15:03:25.216282 2304 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:03:25.216689 kubelet[2304]: I0213 15:03:25.216663 2304 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:03:25.216958 kubelet[2304]: W0213 15:03:25.216939 2304 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 15:03:25.217904 kubelet[2304]: I0213 15:03:25.217890 2304 server.go:1264] "Started kubelet" Feb 13 15:03:25.220384 kubelet[2304]: I0213 15:03:25.220334 2304 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:03:25.221987 kubelet[2304]: I0213 15:03:25.220690 2304 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:03:25.221987 kubelet[2304]: I0213 15:03:25.220725 2304 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:03:25.221987 kubelet[2304]: I0213 15:03:25.221103 2304 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:03:25.221987 kubelet[2304]: I0213 15:03:25.221955 2304 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:03:25.226351 kubelet[2304]: I0213 15:03:25.226292 2304 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:03:25.227023 kubelet[2304]: E0213 15:03:25.226847 2304 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.8:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.8:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823ccbfcb5d9f51 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:03:25.217873745 +0000 UTC m=+0.762721161,LastTimestamp:2025-02-13 15:03:25.217873745 +0000 UTC m=+0.762721161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:03:25.227944 kubelet[2304]: I0213 15:03:25.227848 2304 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:03:25.228811 kubelet[2304]: I0213 15:03:25.228796 2304 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:03:25.229196 kubelet[2304]: W0213 15:03:25.229149 2304 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 15:03:25.229298 kubelet[2304]: E0213 15:03:25.229286 2304 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 15:03:25.229965 kubelet[2304]: I0213 15:03:25.229929 2304 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:03:25.230037 kubelet[2304]: I0213 15:03:25.230027 2304 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:03:25.234876 kubelet[2304]: E0213 15:03:25.234839 2304 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="200ms" Feb 13 15:03:25.234949 kubelet[2304]: I0213 15:03:25.234898 2304 factory.go:221] Registration of the containerd container factory 
successfully Feb 13 15:03:25.244671 kubelet[2304]: I0213 15:03:25.244472 2304 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:03:25.244671 kubelet[2304]: I0213 15:03:25.244488 2304 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:03:25.244671 kubelet[2304]: I0213 15:03:25.244503 2304 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:03:25.245041 kubelet[2304]: I0213 15:03:25.245017 2304 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:03:25.245945 kubelet[2304]: I0213 15:03:25.245916 2304 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:03:25.246077 kubelet[2304]: I0213 15:03:25.246066 2304 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:03:25.246100 kubelet[2304]: I0213 15:03:25.246083 2304 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:03:25.246129 kubelet[2304]: E0213 15:03:25.246115 2304 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:03:25.248084 kubelet[2304]: W0213 15:03:25.247992 2304 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 15:03:25.248084 kubelet[2304]: E0213 15:03:25.248045 2304 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 15:03:25.328569 kubelet[2304]: I0213 15:03:25.328515 2304 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:03:25.328907 kubelet[2304]: E0213 15:03:25.328865 2304 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" Feb 13 15:03:25.346532 kubelet[2304]: E0213 15:03:25.346496 2304 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:03:25.372777 kubelet[2304]: I0213 15:03:25.372749 2304 policy_none.go:49] "None policy: Start" Feb 13 15:03:25.373583 kubelet[2304]: I0213 15:03:25.373566 2304 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:03:25.373624 kubelet[2304]: I0213 15:03:25.373593 2304 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:03:25.384276 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:03:25.398597 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:03:25.401903 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 15:03:25.413216 kubelet[2304]: I0213 15:03:25.413186 2304 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:03:25.413475 kubelet[2304]: I0213 15:03:25.413430 2304 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:03:25.413581 kubelet[2304]: I0213 15:03:25.413564 2304 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:03:25.416035 kubelet[2304]: E0213 15:03:25.415745 2304 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 15:03:25.436338 kubelet[2304]: E0213 15:03:25.436220 2304 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="400ms" Feb 13 15:03:25.530629 kubelet[2304]: I0213 15:03:25.530606 2304 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:03:25.530937 kubelet[2304]: E0213 15:03:25.530882 2304 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" Feb 13 15:03:25.547086 kubelet[2304]: I0213 15:03:25.547022 2304 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:03:25.547838 kubelet[2304]: I0213 15:03:25.547816 2304 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:03:25.548704 kubelet[2304]: I0213 15:03:25.548671 2304 topology_manager.go:215] "Topology Admit Handler" podUID="95337c2e9e2cf38cdce1e5746f5941a9" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:03:25.553976 systemd[1]: Created slice kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice - libcontainer container kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice. Feb 13 15:03:25.576421 systemd[1]: Created slice kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice - libcontainer container kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice. Feb 13 15:03:25.590021 systemd[1]: Created slice kubepods-burstable-pod95337c2e9e2cf38cdce1e5746f5941a9.slice - libcontainer container kubepods-burstable-pod95337c2e9e2cf38cdce1e5746f5941a9.slice. 
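The pod UIDs in the Topology Admit Handler entries above reappear verbatim in the slice names systemd just created (kubepods-burstable-pod<uid>.slice), so each Created slice line maps one-to-one to a static control-plane pod. On a live host the resulting cgroup units can be listed with ordinary systemctl pattern matching (a sketch, nothing beyond stock systemd):

    # show the pod-level slices the kubelet requested
    systemctl list-units --type=slice 'kubepods*'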
Feb 13 15:03:25.630774 kubelet[2304]: I0213 15:03:25.630750 2304 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:03:25.630859 kubelet[2304]: I0213 15:03:25.630780 2304 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:03:25.630859 kubelet[2304]: I0213 15:03:25.630802 2304 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:03:25.630859 kubelet[2304]: I0213 15:03:25.630825 2304 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:03:25.630859 kubelet[2304]: I0213 15:03:25.630846 2304 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:03:25.630941 kubelet[2304]: I0213 15:03:25.630865 2304 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:03:25.630941 kubelet[2304]: I0213 15:03:25.630882 2304 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/95337c2e9e2cf38cdce1e5746f5941a9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"95337c2e9e2cf38cdce1e5746f5941a9\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:03:25.630941 kubelet[2304]: I0213 15:03:25.630898 2304 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/95337c2e9e2cf38cdce1e5746f5941a9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"95337c2e9e2cf38cdce1e5746f5941a9\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:03:25.630941 kubelet[2304]: I0213 15:03:25.630913 2304 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/95337c2e9e2cf38cdce1e5746f5941a9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"95337c2e9e2cf38cdce1e5746f5941a9\") " 
pod="kube-system/kube-apiserver-localhost" Feb 13 15:03:25.837347 kubelet[2304]: E0213 15:03:25.837219 2304 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="800ms" Feb 13 15:03:25.875259 containerd[1487]: time="2025-02-13T15:03:25.875195305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 15:03:25.889005 containerd[1487]: time="2025-02-13T15:03:25.888859745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 15:03:25.892739 containerd[1487]: time="2025-02-13T15:03:25.892689425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:95337c2e9e2cf38cdce1e5746f5941a9,Namespace:kube-system,Attempt:0,}" Feb 13 15:03:25.932141 kubelet[2304]: I0213 15:03:25.932072 2304 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:03:25.932457 kubelet[2304]: E0213 15:03:25.932411 2304 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" Feb 13 15:03:26.040411 kubelet[2304]: W0213 15:03:26.040347 2304 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 15:03:26.040411 kubelet[2304]: E0213 15:03:26.040413 2304 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 15:03:26.267569 kubelet[2304]: E0213 15:03:26.267441 2304 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.8:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.8:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823ccbfcb5d9f51 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:03:25.217873745 +0000 UTC m=+0.762721161,LastTimestamp:2025-02-13 15:03:25.217873745 +0000 UTC m=+0.762721161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 15:03:26.452812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3724727922.mount: Deactivated successfully. 
Feb 13 15:03:26.458942 containerd[1487]: time="2025-02-13T15:03:26.458890065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:03:26.459946 containerd[1487]: time="2025-02-13T15:03:26.459903505Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 15:03:26.461002 containerd[1487]: time="2025-02-13T15:03:26.460968745Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:03:26.463500 containerd[1487]: time="2025-02-13T15:03:26.463465225Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:03:26.464193 containerd[1487]: time="2025-02-13T15:03:26.464133865Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:03:26.464861 containerd[1487]: time="2025-02-13T15:03:26.464823385Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:03:26.465571 containerd[1487]: time="2025-02-13T15:03:26.465537225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:03:26.466553 containerd[1487]: time="2025-02-13T15:03:26.466518985Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 573.7696ms" Feb 13 15:03:26.466858 containerd[1487]: time="2025-02-13T15:03:26.466813665Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:03:26.470045 containerd[1487]: time="2025-02-13T15:03:26.470014985Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 594.74192ms" Feb 13 15:03:26.472055 containerd[1487]: time="2025-02-13T15:03:26.472008745Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 583.07368ms" Feb 13 15:03:26.615934 containerd[1487]: time="2025-02-13T15:03:26.615822745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:03:26.615934 containerd[1487]: time="2025-02-13T15:03:26.615901025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:03:26.615934 containerd[1487]: time="2025-02-13T15:03:26.615918065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:03:26.616095 containerd[1487]: time="2025-02-13T15:03:26.615992105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:03:26.617052 containerd[1487]: time="2025-02-13T15:03:26.616651785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:03:26.617052 containerd[1487]: time="2025-02-13T15:03:26.616756665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:03:26.617052 containerd[1487]: time="2025-02-13T15:03:26.616767185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:03:26.617052 containerd[1487]: time="2025-02-13T15:03:26.616822665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:03:26.620612 containerd[1487]: time="2025-02-13T15:03:26.620300705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:03:26.620612 containerd[1487]: time="2025-02-13T15:03:26.620375985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:03:26.620612 containerd[1487]: time="2025-02-13T15:03:26.620391265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:03:26.620612 containerd[1487]: time="2025-02-13T15:03:26.620468105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:03:26.635498 systemd[1]: Started cri-containerd-f590b311bd62432f7b3add52df8bf0a7497edac29d0f996d938520ab97017a34.scope - libcontainer container f590b311bd62432f7b3add52df8bf0a7497edac29d0f996d938520ab97017a34. Feb 13 15:03:26.637709 kubelet[2304]: E0213 15:03:26.637675 2304 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="1.6s" Feb 13 15:03:26.639618 systemd[1]: Started cri-containerd-0cf21566926fce2ccd36b56288c11a68d04bf2961f0d7adac5e812d8ffaad6c3.scope - libcontainer container 0cf21566926fce2ccd36b56288c11a68d04bf2961f0d7adac5e812d8ffaad6c3. Feb 13 15:03:26.640732 systemd[1]: Started cri-containerd-fe5b5523a0ec44a02f60a85cddada041e58143b7fc542dac3813e2e84790f53b.scope - libcontainer container fe5b5523a0ec44a02f60a85cddada041e58143b7fc542dac3813e2e84790f53b. 
Feb 13 15:03:26.670129 containerd[1487]: time="2025-02-13T15:03:26.670086705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"f590b311bd62432f7b3add52df8bf0a7497edac29d0f996d938520ab97017a34\"" Feb 13 15:03:26.670905 containerd[1487]: time="2025-02-13T15:03:26.670876825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:95337c2e9e2cf38cdce1e5746f5941a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cf21566926fce2ccd36b56288c11a68d04bf2961f0d7adac5e812d8ffaad6c3\"" Feb 13 15:03:26.674546 containerd[1487]: time="2025-02-13T15:03:26.674513225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe5b5523a0ec44a02f60a85cddada041e58143b7fc542dac3813e2e84790f53b\"" Feb 13 15:03:26.677793 containerd[1487]: time="2025-02-13T15:03:26.677763465Z" level=info msg="CreateContainer within sandbox \"f590b311bd62432f7b3add52df8bf0a7497edac29d0f996d938520ab97017a34\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:03:26.677933 containerd[1487]: time="2025-02-13T15:03:26.677906745Z" level=info msg="CreateContainer within sandbox \"0cf21566926fce2ccd36b56288c11a68d04bf2961f0d7adac5e812d8ffaad6c3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:03:26.679011 kubelet[2304]: W0213 15:03:26.678975 2304 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 15:03:26.679011 kubelet[2304]: E0213 15:03:26.679011 2304 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 15:03:26.680362 containerd[1487]: time="2025-02-13T15:03:26.680212345Z" level=info msg="CreateContainer within sandbox \"fe5b5523a0ec44a02f60a85cddada041e58143b7fc542dac3813e2e84790f53b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:03:26.694314 containerd[1487]: time="2025-02-13T15:03:26.694268825Z" level=info msg="CreateContainer within sandbox \"f590b311bd62432f7b3add52df8bf0a7497edac29d0f996d938520ab97017a34\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f04351ecfed8b1138fa6584175bc0dba754df84af5f9f76d1768a82c62e8b87d\"" Feb 13 15:03:26.695143 containerd[1487]: time="2025-02-13T15:03:26.695118825Z" level=info msg="StartContainer for \"f04351ecfed8b1138fa6584175bc0dba754df84af5f9f76d1768a82c62e8b87d\"" Feb 13 15:03:26.697661 kubelet[2304]: W0213 15:03:26.697579 2304 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.8:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 15:03:26.697661 kubelet[2304]: E0213 15:03:26.697640 2304 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.8:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 15:03:26.701673 
containerd[1487]: time="2025-02-13T15:03:26.701598065Z" level=info msg="CreateContainer within sandbox \"fe5b5523a0ec44a02f60a85cddada041e58143b7fc542dac3813e2e84790f53b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"03e91c667a3e492b14a80697d1a336299916ec043e6c5b0d2e548bd345b9a54c\"" Feb 13 15:03:26.702594 containerd[1487]: time="2025-02-13T15:03:26.702152665Z" level=info msg="StartContainer for \"03e91c667a3e492b14a80697d1a336299916ec043e6c5b0d2e548bd345b9a54c\"" Feb 13 15:03:26.702594 containerd[1487]: time="2025-02-13T15:03:26.702247105Z" level=info msg="CreateContainer within sandbox \"0cf21566926fce2ccd36b56288c11a68d04bf2961f0d7adac5e812d8ffaad6c3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"eb9c39a6b4869833080826747fd6407d0a54e84fce6540f33e1d880a524c79ca\"" Feb 13 15:03:26.702685 containerd[1487]: time="2025-02-13T15:03:26.702615905Z" level=info msg="StartContainer for \"eb9c39a6b4869833080826747fd6407d0a54e84fce6540f33e1d880a524c79ca\"" Feb 13 15:03:26.724516 systemd[1]: Started cri-containerd-f04351ecfed8b1138fa6584175bc0dba754df84af5f9f76d1768a82c62e8b87d.scope - libcontainer container f04351ecfed8b1138fa6584175bc0dba754df84af5f9f76d1768a82c62e8b87d. Feb 13 15:03:26.728056 systemd[1]: Started cri-containerd-03e91c667a3e492b14a80697d1a336299916ec043e6c5b0d2e548bd345b9a54c.scope - libcontainer container 03e91c667a3e492b14a80697d1a336299916ec043e6c5b0d2e548bd345b9a54c. Feb 13 15:03:26.729263 systemd[1]: Started cri-containerd-eb9c39a6b4869833080826747fd6407d0a54e84fce6540f33e1d880a524c79ca.scope - libcontainer container eb9c39a6b4869833080826747fd6407d0a54e84fce6540f33e1d880a524c79ca. Feb 13 15:03:26.734184 kubelet[2304]: I0213 15:03:26.734152 2304 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:03:26.735107 kubelet[2304]: E0213 15:03:26.735053 2304 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" Feb 13 15:03:26.761700 containerd[1487]: time="2025-02-13T15:03:26.761310745Z" level=info msg="StartContainer for \"f04351ecfed8b1138fa6584175bc0dba754df84af5f9f76d1768a82c62e8b87d\" returns successfully" Feb 13 15:03:26.771623 containerd[1487]: time="2025-02-13T15:03:26.771237025Z" level=info msg="StartContainer for \"eb9c39a6b4869833080826747fd6407d0a54e84fce6540f33e1d880a524c79ca\" returns successfully" Feb 13 15:03:26.797361 containerd[1487]: time="2025-02-13T15:03:26.797313785Z" level=info msg="StartContainer for \"03e91c667a3e492b14a80697d1a336299916ec043e6c5b0d2e548bd345b9a54c\" returns successfully" Feb 13 15:03:26.814505 kubelet[2304]: W0213 15:03:26.814455 2304 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 15:03:26.814596 kubelet[2304]: E0213 15:03:26.814517 2304 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.8:6443: connect: connection refused Feb 13 15:03:28.241150 kubelet[2304]: E0213 15:03:28.241099 2304 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 
15:03:28.337046 kubelet[2304]: I0213 15:03:28.336804 2304 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:03:28.349158 kubelet[2304]: I0213 15:03:28.349117 2304 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:03:28.358638 kubelet[2304]: E0213 15:03:28.358599 2304 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:03:28.459495 kubelet[2304]: E0213 15:03:28.459459 2304 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:03:28.559886 kubelet[2304]: E0213 15:03:28.559785 2304 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:03:29.216233 kubelet[2304]: I0213 15:03:29.216157 2304 apiserver.go:52] "Watching apiserver" Feb 13 15:03:29.228551 kubelet[2304]: I0213 15:03:29.228496 2304 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:03:29.899009 systemd[1]: Reload requested from client PID 2585 ('systemctl') (unit session-7.scope)... Feb 13 15:03:29.899023 systemd[1]: Reloading... Feb 13 15:03:29.977355 zram_generator::config[2632]: No configuration found. Feb 13 15:03:30.053133 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:03:30.136352 systemd[1]: Reloading finished in 237 ms. Feb 13 15:03:30.157699 kubelet[2304]: I0213 15:03:30.157282 2304 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:03:30.157527 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:03:30.165061 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:03:30.165296 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:03:30.165368 systemd[1]: kubelet.service: Consumed 1.094s CPU time, 116.4M memory peak. Feb 13 15:03:30.180607 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:03:30.269707 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:03:30.273887 (kubelet)[2671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:03:30.316180 kubelet[2671]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:03:30.316180 kubelet[2671]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:03:30.316180 kubelet[2671]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
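The restarted kubelet (PID 2671) repeats the deprecation warnings: --container-runtime-endpoint and --volume-plugin-dir now belong in the KubeletConfiguration file, while --pod-infra-container-image has no config counterpart because the sandbox image is reported by the CRI runtime itself. A sketch using the published v1beta1 types; both values are assumptions matching common containerd/Flatcar defaults, and containerRuntimeEndpoint is only a config field on kubelets of this vintage (v1.27+; this one is v1.30.1):

package main

import (
	"fmt"

	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
)

func main() {
	// Config-file equivalents of the deprecated flags warned about above.
	cfg := kubeletv1beta1.KubeletConfiguration{
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock", // assumed socket path
		VolumePluginDir:          "/var/lib/kubelet/volumeplugins",         // assumed plugin dir
	}
	fmt.Println("containerRuntimeEndpoint:", cfg.ContainerRuntimeEndpoint)
	fmt.Println("volumePluginDir:", cfg.VolumePluginDir)
}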
Feb 13 15:03:30.316538 kubelet[2671]: I0213 15:03:30.316218 2671 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:03:30.320369 kubelet[2671]: I0213 15:03:30.320024 2671 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:03:30.320369 kubelet[2671]: I0213 15:03:30.320056 2671 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:03:30.320369 kubelet[2671]: I0213 15:03:30.320207 2671 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:03:30.321579 kubelet[2671]: I0213 15:03:30.321555 2671 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:03:30.323903 kubelet[2671]: I0213 15:03:30.323868 2671 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:03:30.328789 kubelet[2671]: I0213 15:03:30.328771 2671 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:03:30.328959 kubelet[2671]: I0213 15:03:30.328939 2671 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:03:30.329111 kubelet[2671]: I0213 15:03:30.328963 2671 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:03:30.329187 kubelet[2671]: I0213 15:03:30.329117 2671 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:03:30.329187 kubelet[2671]: I0213 15:03:30.329127 2671 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:03:30.329187 kubelet[2671]: I0213 15:03:30.329156 2671 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:03:30.329265 kubelet[2671]: I0213 15:03:30.329251 2671 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:03:30.329265 kubelet[2671]: I0213 15:03:30.329265 2671 kubelet.go:301] "Adding static pod path" 
path="/etc/kubernetes/manifests" Feb 13 15:03:30.329308 kubelet[2671]: I0213 15:03:30.329289 2671 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:03:30.329308 kubelet[2671]: I0213 15:03:30.329305 2671 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:03:30.330187 kubelet[2671]: I0213 15:03:30.329999 2671 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:03:30.330187 kubelet[2671]: I0213 15:03:30.330168 2671 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:03:30.332876 kubelet[2671]: I0213 15:03:30.330542 2671 server.go:1264] "Started kubelet" Feb 13 15:03:30.332876 kubelet[2671]: I0213 15:03:30.330653 2671 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:03:30.332876 kubelet[2671]: I0213 15:03:30.330795 2671 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:03:30.332876 kubelet[2671]: I0213 15:03:30.330955 2671 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:03:30.332876 kubelet[2671]: I0213 15:03:30.332413 2671 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:03:30.332876 kubelet[2671]: I0213 15:03:30.332552 2671 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:03:30.339291 kubelet[2671]: I0213 15:03:30.334882 2671 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:03:30.339291 kubelet[2671]: I0213 15:03:30.335020 2671 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:03:30.339291 kubelet[2671]: I0213 15:03:30.335149 2671 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:03:30.341641 kubelet[2671]: I0213 15:03:30.341106 2671 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:03:30.341641 kubelet[2671]: I0213 15:03:30.341229 2671 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:03:30.354819 kubelet[2671]: I0213 15:03:30.354723 2671 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:03:30.355362 kubelet[2671]: I0213 15:03:30.355315 2671 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:03:30.357160 kubelet[2671]: E0213 15:03:30.357134 2671 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:03:30.358658 kubelet[2671]: I0213 15:03:30.357746 2671 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:03:30.358658 kubelet[2671]: I0213 15:03:30.357984 2671 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:03:30.358658 kubelet[2671]: I0213 15:03:30.358005 2671 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:03:30.358658 kubelet[2671]: E0213 15:03:30.358049 2671 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:03:30.388567 kubelet[2671]: I0213 15:03:30.388535 2671 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:03:30.388567 kubelet[2671]: I0213 15:03:30.388556 2671 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:03:30.388567 kubelet[2671]: I0213 15:03:30.388575 2671 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:03:30.388732 kubelet[2671]: I0213 15:03:30.388710 2671 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:03:30.388732 kubelet[2671]: I0213 15:03:30.388721 2671 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:03:30.388770 kubelet[2671]: I0213 15:03:30.388739 2671 policy_none.go:49] "None policy: Start" Feb 13 15:03:30.389795 kubelet[2671]: I0213 15:03:30.389550 2671 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:03:30.389795 kubelet[2671]: I0213 15:03:30.389577 2671 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:03:30.389795 kubelet[2671]: I0213 15:03:30.389705 2671 state_mem.go:75] "Updated machine memory state" Feb 13 15:03:30.395587 kubelet[2671]: I0213 15:03:30.395559 2671 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:03:30.395885 kubelet[2671]: I0213 15:03:30.395712 2671 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:03:30.395885 kubelet[2671]: I0213 15:03:30.395813 2671 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:03:30.437358 kubelet[2671]: I0213 15:03:30.436569 2671 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:03:30.443144 kubelet[2671]: I0213 15:03:30.443006 2671 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 15:03:30.443144 kubelet[2671]: I0213 15:03:30.443073 2671 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:03:30.459034 kubelet[2671]: I0213 15:03:30.458989 2671 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:03:30.459138 kubelet[2671]: I0213 15:03:30.459121 2671 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:03:30.459170 kubelet[2671]: I0213 15:03:30.459159 2671 topology_manager.go:215] "Topology Admit Handler" podUID="95337c2e9e2cf38cdce1e5746f5941a9" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:03:30.464396 kubelet[2671]: E0213 15:03:30.464342 2671 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 15:03:30.535769 kubelet[2671]: I0213 15:03:30.535674 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:03:30.535769 kubelet[2671]: I0213 15:03:30.535706 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/95337c2e9e2cf38cdce1e5746f5941a9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"95337c2e9e2cf38cdce1e5746f5941a9\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:03:30.535769 kubelet[2671]: I0213 15:03:30.535740 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:03:30.535916 kubelet[2671]: I0213 15:03:30.535781 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:03:30.535916 kubelet[2671]: I0213 15:03:30.535827 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:03:30.535916 kubelet[2671]: I0213 15:03:30.535859 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:03:30.535916 kubelet[2671]: I0213 15:03:30.535890 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:03:30.535916 kubelet[2671]: I0213 15:03:30.535908 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/95337c2e9e2cf38cdce1e5746f5941a9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"95337c2e9e2cf38cdce1e5746f5941a9\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:03:30.536017 kubelet[2671]: I0213 15:03:30.535923 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/95337c2e9e2cf38cdce1e5746f5941a9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"95337c2e9e2cf38cdce1e5746f5941a9\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:03:30.905148 sudo[2704]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 
15:03:30.905483 sudo[2704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 15:03:31.325556 sudo[2704]: pam_unix(sudo:session): session closed for user root Feb 13 15:03:31.330423 kubelet[2671]: I0213 15:03:31.330379 2671 apiserver.go:52] "Watching apiserver" Feb 13 15:03:31.335501 kubelet[2671]: I0213 15:03:31.335467 2671 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:03:31.380819 kubelet[2671]: E0213 15:03:31.380774 2671 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 15:03:31.395557 kubelet[2671]: I0213 15:03:31.395411 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.395383225 podStartE2EDuration="1.395383225s" podCreationTimestamp="2025-02-13 15:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:03:31.393427505 +0000 UTC m=+1.116116921" watchObservedRunningTime="2025-02-13 15:03:31.395383225 +0000 UTC m=+1.118072641" Feb 13 15:03:31.400835 kubelet[2671]: I0213 15:03:31.400684 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.400671945 podStartE2EDuration="2.400671945s" podCreationTimestamp="2025-02-13 15:03:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:03:31.400301425 +0000 UTC m=+1.122990881" watchObservedRunningTime="2025-02-13 15:03:31.400671945 +0000 UTC m=+1.123361401" Feb 13 15:03:31.415700 kubelet[2671]: I0213 15:03:31.415601 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.415589225 podStartE2EDuration="1.415589225s" podCreationTimestamp="2025-02-13 15:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:03:31.406568465 +0000 UTC m=+1.129257881" watchObservedRunningTime="2025-02-13 15:03:31.415589225 +0000 UTC m=+1.138278641" Feb 13 15:03:33.496356 sudo[1671]: pam_unix(sudo:session): session closed for user root Feb 13 15:03:33.497787 sshd[1670]: Connection closed by 10.0.0.1 port 39106 Feb 13 15:03:33.498377 sshd-session[1667]: pam_unix(sshd:session): session closed for user core Feb 13 15:03:33.502162 systemd[1]: sshd@6-10.0.0.8:22-10.0.0.1:39106.service: Deactivated successfully. Feb 13 15:03:33.503919 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:03:33.504668 systemd[1]: session-7.scope: Consumed 6.507s CPU time, 288.4M memory peak. Feb 13 15:03:33.505855 systemd-logind[1472]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:03:33.506857 systemd-logind[1472]: Removed session 7. Feb 13 15:03:45.002269 kubelet[2671]: I0213 15:03:45.002231 2671 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:03:45.002704 containerd[1487]: time="2025-02-13T15:03:45.002621851Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 15:03:45.002898 kubelet[2671]: I0213 15:03:45.002815 2671 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:03:45.801307 kubelet[2671]: I0213 15:03:45.801222 2671 topology_manager.go:215] "Topology Admit Handler" podUID="d3be1793-60f4-47c2-92e3-0c5da8434949" podNamespace="kube-system" podName="kube-proxy-stp49" Feb 13 15:03:45.806331 kubelet[2671]: I0213 15:03:45.806119 2671 topology_manager.go:215] "Topology Admit Handler" podUID="7e56a6ab-64bc-4095-af2a-0373950228a4" podNamespace="kube-system" podName="cilium-gmz6d" Feb 13 15:03:45.815448 systemd[1]: Created slice kubepods-besteffort-podd3be1793_60f4_47c2_92e3_0c5da8434949.slice - libcontainer container kubepods-besteffort-podd3be1793_60f4_47c2_92e3_0c5da8434949.slice. Feb 13 15:03:45.830827 systemd[1]: Created slice kubepods-burstable-pod7e56a6ab_64bc_4095_af2a_0373950228a4.slice - libcontainer container kubepods-burstable-pod7e56a6ab_64bc_4095_af2a_0373950228a4.slice. Feb 13 15:03:45.858592 kubelet[2671]: I0213 15:03:45.858543 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-cilium-run\") pod \"cilium-gmz6d\" (UID: \"7e56a6ab-64bc-4095-af2a-0373950228a4\") " pod="kube-system/cilium-gmz6d" Feb 13 15:03:45.858592 kubelet[2671]: I0213 15:03:45.858589 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-cni-path\") pod \"cilium-gmz6d\" (UID: \"7e56a6ab-64bc-4095-af2a-0373950228a4\") " pod="kube-system/cilium-gmz6d" Feb 13 15:03:45.858746 kubelet[2671]: I0213 15:03:45.858607 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3be1793-60f4-47c2-92e3-0c5da8434949-lib-modules\") pod \"kube-proxy-stp49\" (UID: \"d3be1793-60f4-47c2-92e3-0c5da8434949\") " pod="kube-system/kube-proxy-stp49" Feb 13 15:03:45.858746 kubelet[2671]: I0213 15:03:45.858626 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-bpf-maps\") pod \"cilium-gmz6d\" (UID: \"7e56a6ab-64bc-4095-af2a-0373950228a4\") " pod="kube-system/cilium-gmz6d" Feb 13 15:03:45.858746 kubelet[2671]: I0213 15:03:45.858642 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7e56a6ab-64bc-4095-af2a-0373950228a4-hubble-tls\") pod \"cilium-gmz6d\" (UID: \"7e56a6ab-64bc-4095-af2a-0373950228a4\") " pod="kube-system/cilium-gmz6d" Feb 13 15:03:45.858746 kubelet[2671]: I0213 15:03:45.858682 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d3be1793-60f4-47c2-92e3-0c5da8434949-kube-proxy\") pod \"kube-proxy-stp49\" (UID: \"d3be1793-60f4-47c2-92e3-0c5da8434949\") " pod="kube-system/kube-proxy-stp49" Feb 13 15:03:45.858746 kubelet[2671]: I0213 15:03:45.858715 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-xtables-lock\") pod \"cilium-gmz6d\" (UID: \"7e56a6ab-64bc-4095-af2a-0373950228a4\") " 
pod="kube-system/cilium-gmz6d" Feb 13 15:03:45.858746 kubelet[2671]: I0213 15:03:45.858734 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7e56a6ab-64bc-4095-af2a-0373950228a4-cilium-config-path\") pod \"cilium-gmz6d\" (UID: \"7e56a6ab-64bc-4095-af2a-0373950228a4\") " pod="kube-system/cilium-gmz6d" Feb 13 15:03:45.858876 kubelet[2671]: I0213 15:03:45.858750 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7e56a6ab-64bc-4095-af2a-0373950228a4-clustermesh-secrets\") pod \"cilium-gmz6d\" (UID: \"7e56a6ab-64bc-4095-af2a-0373950228a4\") " pod="kube-system/cilium-gmz6d" Feb 13 15:03:45.858876 kubelet[2671]: I0213 15:03:45.858767 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-host-proc-sys-kernel\") pod \"cilium-gmz6d\" (UID: \"7e56a6ab-64bc-4095-af2a-0373950228a4\") " pod="kube-system/cilium-gmz6d" Feb 13 15:03:45.858876 kubelet[2671]: I0213 15:03:45.858787 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbmdt\" (UniqueName: \"kubernetes.io/projected/d3be1793-60f4-47c2-92e3-0c5da8434949-kube-api-access-dbmdt\") pod \"kube-proxy-stp49\" (UID: \"d3be1793-60f4-47c2-92e3-0c5da8434949\") " pod="kube-system/kube-proxy-stp49" Feb 13 15:03:45.858876 kubelet[2671]: I0213 15:03:45.858805 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3be1793-60f4-47c2-92e3-0c5da8434949-xtables-lock\") pod \"kube-proxy-stp49\" (UID: \"d3be1793-60f4-47c2-92e3-0c5da8434949\") " pod="kube-system/kube-proxy-stp49" Feb 13 15:03:45.858876 kubelet[2671]: I0213 15:03:45.858820 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-hostproc\") pod \"cilium-gmz6d\" (UID: \"7e56a6ab-64bc-4095-af2a-0373950228a4\") " pod="kube-system/cilium-gmz6d" Feb 13 15:03:45.858972 kubelet[2671]: I0213 15:03:45.858836 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-etc-cni-netd\") pod \"cilium-gmz6d\" (UID: \"7e56a6ab-64bc-4095-af2a-0373950228a4\") " pod="kube-system/cilium-gmz6d" Feb 13 15:03:45.858972 kubelet[2671]: I0213 15:03:45.858856 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smn2x\" (UniqueName: \"kubernetes.io/projected/7e56a6ab-64bc-4095-af2a-0373950228a4-kube-api-access-smn2x\") pod \"cilium-gmz6d\" (UID: \"7e56a6ab-64bc-4095-af2a-0373950228a4\") " pod="kube-system/cilium-gmz6d" Feb 13 15:03:45.858972 kubelet[2671]: I0213 15:03:45.858879 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-cilium-cgroup\") pod \"cilium-gmz6d\" (UID: \"7e56a6ab-64bc-4095-af2a-0373950228a4\") " pod="kube-system/cilium-gmz6d" Feb 13 15:03:45.858972 kubelet[2671]: I0213 15:03:45.858895 2671 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-lib-modules\") pod \"cilium-gmz6d\" (UID: \"7e56a6ab-64bc-4095-af2a-0373950228a4\") " pod="kube-system/cilium-gmz6d" Feb 13 15:03:45.858972 kubelet[2671]: I0213 15:03:45.858909 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-host-proc-sys-net\") pod \"cilium-gmz6d\" (UID: \"7e56a6ab-64bc-4095-af2a-0373950228a4\") " pod="kube-system/cilium-gmz6d" Feb 13 15:03:46.044202 kubelet[2671]: I0213 15:03:46.044148 2671 topology_manager.go:215] "Topology Admit Handler" podUID="175ad923-a5cd-4d71-830c-9cb0bb41983b" podNamespace="kube-system" podName="cilium-operator-599987898-227jx" Feb 13 15:03:46.055982 systemd[1]: Created slice kubepods-besteffort-pod175ad923_a5cd_4d71_830c_9cb0bb41983b.slice - libcontainer container kubepods-besteffort-pod175ad923_a5cd_4d71_830c_9cb0bb41983b.slice. Feb 13 15:03:46.133179 containerd[1487]: time="2025-02-13T15:03:46.132826694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-stp49,Uid:d3be1793-60f4-47c2-92e3-0c5da8434949,Namespace:kube-system,Attempt:0,}" Feb 13 15:03:46.134499 containerd[1487]: time="2025-02-13T15:03:46.134335735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gmz6d,Uid:7e56a6ab-64bc-4095-af2a-0373950228a4,Namespace:kube-system,Attempt:0,}" Feb 13 15:03:46.158227 containerd[1487]: time="2025-02-13T15:03:46.158118555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:03:46.158227 containerd[1487]: time="2025-02-13T15:03:46.158195795Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:03:46.158392 containerd[1487]: time="2025-02-13T15:03:46.158210395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:03:46.158392 containerd[1487]: time="2025-02-13T15:03:46.158281195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:03:46.161540 containerd[1487]: time="2025-02-13T15:03:46.161305558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:03:46.161540 containerd[1487]: time="2025-02-13T15:03:46.161381238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:03:46.161540 containerd[1487]: time="2025-02-13T15:03:46.161396558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:03:46.161540 containerd[1487]: time="2025-02-13T15:03:46.161490118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:03:46.162012 kubelet[2671]: I0213 15:03:46.161971 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/175ad923-a5cd-4d71-830c-9cb0bb41983b-cilium-config-path\") pod \"cilium-operator-599987898-227jx\" (UID: \"175ad923-a5cd-4d71-830c-9cb0bb41983b\") " pod="kube-system/cilium-operator-599987898-227jx" Feb 13 15:03:46.162012 kubelet[2671]: I0213 15:03:46.162011 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvsk5\" (UniqueName: \"kubernetes.io/projected/175ad923-a5cd-4d71-830c-9cb0bb41983b-kube-api-access-jvsk5\") pod \"cilium-operator-599987898-227jx\" (UID: \"175ad923-a5cd-4d71-830c-9cb0bb41983b\") " pod="kube-system/cilium-operator-599987898-227jx" Feb 13 15:03:46.181513 systemd[1]: Started cri-containerd-c46eed43e7bd9fa3196ff09134c8f697b3ff4f2cf2ac34e3201f30033b51c73c.scope - libcontainer container c46eed43e7bd9fa3196ff09134c8f697b3ff4f2cf2ac34e3201f30033b51c73c. Feb 13 15:03:46.185121 systemd[1]: Started cri-containerd-fe724a4931914c26eecf9edd51e4b2a662ed1cea140269e028991c1af154226a.scope - libcontainer container fe724a4931914c26eecf9edd51e4b2a662ed1cea140269e028991c1af154226a. Feb 13 15:03:46.205381 containerd[1487]: time="2025-02-13T15:03:46.205275435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-stp49,Uid:d3be1793-60f4-47c2-92e3-0c5da8434949,Namespace:kube-system,Attempt:0,} returns sandbox id \"c46eed43e7bd9fa3196ff09134c8f697b3ff4f2cf2ac34e3201f30033b51c73c\"" Feb 13 15:03:46.208642 containerd[1487]: time="2025-02-13T15:03:46.208614918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gmz6d,Uid:7e56a6ab-64bc-4095-af2a-0373950228a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe724a4931914c26eecf9edd51e4b2a662ed1cea140269e028991c1af154226a\"" Feb 13 15:03:46.214264 containerd[1487]: time="2025-02-13T15:03:46.214230482Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:03:46.223458 containerd[1487]: time="2025-02-13T15:03:46.223403930Z" level=info msg="CreateContainer within sandbox \"c46eed43e7bd9fa3196ff09134c8f697b3ff4f2cf2ac34e3201f30033b51c73c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:03:46.258435 containerd[1487]: time="2025-02-13T15:03:46.258385799Z" level=info msg="CreateContainer within sandbox \"c46eed43e7bd9fa3196ff09134c8f697b3ff4f2cf2ac34e3201f30033b51c73c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"46c24f258d0c45a983a7d3b6f4ef15c182fc55815ba44a4bca8ec6cdfd7eb7c7\"" Feb 13 15:03:46.259116 containerd[1487]: time="2025-02-13T15:03:46.259048600Z" level=info msg="StartContainer for \"46c24f258d0c45a983a7d3b6f4ef15c182fc55815ba44a4bca8ec6cdfd7eb7c7\"" Feb 13 15:03:46.290487 systemd[1]: Started cri-containerd-46c24f258d0c45a983a7d3b6f4ef15c182fc55815ba44a4bca8ec6cdfd7eb7c7.scope - libcontainer container 46c24f258d0c45a983a7d3b6f4ef15c182fc55815ba44a4bca8ec6cdfd7eb7c7. 
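The RunPodSandbox → CreateContainer → StartContainer sequence above is the kubelet driving containerd over the CRI gRPC API. A minimal sketch of the same RunPodSandbox call using the published cri-api types, with the metadata taken from the kube-proxy-stp49 entry; the real kubelet fills in far more of the config (log directory, Linux security context, port mappings) than shown here:

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's CRI socket on this host; the kubelet dials the same endpoint.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	// Mirrors the RunPodSandbox entry above; Attempt 0 is the first try.
	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-proxy-stp49",
				Uid:       "d3be1793-60f4-47c2-92e3-0c5da8434949",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	// containerd answers with a sandbox id like the c46eed43... id logged above.
	fmt.Println("sandbox id:", resp.PodSandboxId)
}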
Feb 13 15:03:46.317336 containerd[1487]: time="2025-02-13T15:03:46.317212289Z" level=info msg="StartContainer for \"46c24f258d0c45a983a7d3b6f4ef15c182fc55815ba44a4bca8ec6cdfd7eb7c7\" returns successfully" Feb 13 15:03:46.360837 containerd[1487]: time="2025-02-13T15:03:46.360505925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-227jx,Uid:175ad923-a5cd-4d71-830c-9cb0bb41983b,Namespace:kube-system,Attempt:0,}" Feb 13 15:03:46.381427 containerd[1487]: time="2025-02-13T15:03:46.381295502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:03:46.381561 containerd[1487]: time="2025-02-13T15:03:46.381389582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:03:46.381561 containerd[1487]: time="2025-02-13T15:03:46.381412342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:03:46.382789 containerd[1487]: time="2025-02-13T15:03:46.382717703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:03:46.406517 systemd[1]: Started cri-containerd-1f710596bf892de31cf5dd15a616c9b6b1bd46cbbe6c06fab70a5534f8d5052b.scope - libcontainer container 1f710596bf892de31cf5dd15a616c9b6b1bd46cbbe6c06fab70a5534f8d5052b. Feb 13 15:03:46.411678 kubelet[2671]: I0213 15:03:46.411621 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-stp49" podStartSLOduration=1.411604128 podStartE2EDuration="1.411604128s" podCreationTimestamp="2025-02-13 15:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:03:46.411498208 +0000 UTC m=+16.134187624" watchObservedRunningTime="2025-02-13 15:03:46.411604128 +0000 UTC m=+16.134293544" Feb 13 15:03:46.450622 containerd[1487]: time="2025-02-13T15:03:46.450585680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-227jx,Uid:175ad923-a5cd-4d71-830c-9cb0bb41983b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f710596bf892de31cf5dd15a616c9b6b1bd46cbbe6c06fab70a5534f8d5052b\"" Feb 13 15:03:46.547155 update_engine[1474]: I20250213 15:03:46.547087 1474 update_attempter.cc:509] Updating boot flags... Feb 13 15:03:46.581386 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2954) Feb 13 15:03:46.615402 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2948) Feb 13 15:03:55.511061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount257873903.mount: Deactivated successfully. Feb 13 15:03:56.730818 systemd[1]: Started sshd@7-10.0.0.8:22-10.0.0.1:53156.service - OpenSSH per-connection server daemon (10.0.0.1:53156). 
Feb 13 15:03:56.922049 containerd[1487]: time="2025-02-13T15:03:56.921523604Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:03:56.922049 containerd[1487]: time="2025-02-13T15:03:56.922022644Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Feb 13 15:03:56.923041 containerd[1487]: time="2025-02-13T15:03:56.923010805Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:03:56.925181 containerd[1487]: time="2025-02-13T15:03:56.925040526Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.710764444s"
Feb 13 15:03:56.925181 containerd[1487]: time="2025-02-13T15:03:56.925086166Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Feb 13 15:03:56.928701 containerd[1487]: time="2025-02-13T15:03:56.928502927Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 13 15:03:56.929286 containerd[1487]: time="2025-02-13T15:03:56.929245528Z" level=info msg="CreateContainer within sandbox \"fe724a4931914c26eecf9edd51e4b2a662ed1cea140269e028991c1af154226a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:03:56.949684 sshd[3066]: Accepted publickey for core from 10.0.0.1 port 53156 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:03:56.951079 sshd-session[3066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:03:56.958154 containerd[1487]: time="2025-02-13T15:03:56.958112660Z" level=info msg="CreateContainer within sandbox \"fe724a4931914c26eecf9edd51e4b2a662ed1cea140269e028991c1af154226a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fd9f7ac94a354cc3ea09b50046785a93982958cefcbde7711081274c7239870b\""
Feb 13 15:03:56.959018 containerd[1487]: time="2025-02-13T15:03:56.958985981Z" level=info msg="StartContainer for \"fd9f7ac94a354cc3ea09b50046785a93982958cefcbde7711081274c7239870b\""
Feb 13 15:03:56.959949 systemd-logind[1472]: New session 8 of user core.
Feb 13 15:03:56.965597 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 15:03:56.989481 systemd[1]: Started cri-containerd-fd9f7ac94a354cc3ea09b50046785a93982958cefcbde7711081274c7239870b.scope - libcontainer container fd9f7ac94a354cc3ea09b50046785a93982958cefcbde7711081274c7239870b.
Feb 13 15:03:57.085675 containerd[1487]: time="2025-02-13T15:03:57.085621114Z" level=info msg="StartContainer for \"fd9f7ac94a354cc3ea09b50046785a93982958cefcbde7711081274c7239870b\" returns successfully"
Feb 13 15:03:57.106702 systemd[1]: cri-containerd-fd9f7ac94a354cc3ea09b50046785a93982958cefcbde7711081274c7239870b.scope: Deactivated successfully.
Feb 13 15:03:57.126346 sshd[3080]: Connection closed by 10.0.0.1 port 53156
Feb 13 15:03:57.126632 sshd-session[3066]: pam_unix(sshd:session): session closed for user core
Feb 13 15:03:57.130500 systemd[1]: sshd@7-10.0.0.8:22-10.0.0.1:53156.service: Deactivated successfully.
Feb 13 15:03:57.132195 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 15:03:57.133579 systemd-logind[1472]: Session 8 logged out. Waiting for processes to exit.
Feb 13 15:03:57.134737 systemd-logind[1472]: Removed session 8.
Feb 13 15:03:57.268501 containerd[1487]: time="2025-02-13T15:03:57.260929706Z" level=info msg="shim disconnected" id=fd9f7ac94a354cc3ea09b50046785a93982958cefcbde7711081274c7239870b namespace=k8s.io
Feb 13 15:03:57.268501 containerd[1487]: time="2025-02-13T15:03:57.268430029Z" level=warning msg="cleaning up after shim disconnected" id=fd9f7ac94a354cc3ea09b50046785a93982958cefcbde7711081274c7239870b namespace=k8s.io
Feb 13 15:03:57.268501 containerd[1487]: time="2025-02-13T15:03:57.268442749Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:03:57.447049 containerd[1487]: time="2025-02-13T15:03:57.446902503Z" level=info msg="CreateContainer within sandbox \"fe724a4931914c26eecf9edd51e4b2a662ed1cea140269e028991c1af154226a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:03:57.468297 containerd[1487]: time="2025-02-13T15:03:57.468245272Z" level=info msg="CreateContainer within sandbox \"fe724a4931914c26eecf9edd51e4b2a662ed1cea140269e028991c1af154226a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"384301efc634d536da8b681c5bc04d81e3678a9fd28daa3c67ba7ea86a7fc4be\""
Feb 13 15:03:57.469985 containerd[1487]: time="2025-02-13T15:03:57.469944832Z" level=info msg="StartContainer for \"384301efc634d536da8b681c5bc04d81e3678a9fd28daa3c67ba7ea86a7fc4be\""
Feb 13 15:03:57.495477 systemd[1]: Started cri-containerd-384301efc634d536da8b681c5bc04d81e3678a9fd28daa3c67ba7ea86a7fc4be.scope - libcontainer container 384301efc634d536da8b681c5bc04d81e3678a9fd28daa3c67ba7ea86a7fc4be.
Feb 13 15:03:57.515480 containerd[1487]: time="2025-02-13T15:03:57.515441011Z" level=info msg="StartContainer for \"384301efc634d536da8b681c5bc04d81e3678a9fd28daa3c67ba7ea86a7fc4be\" returns successfully"
Feb 13 15:03:57.533555 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:03:57.533779 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:03:57.534444 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:03:57.544933 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:03:57.545677 systemd[1]: cri-containerd-384301efc634d536da8b681c5bc04d81e3678a9fd28daa3c67ba7ea86a7fc4be.scope: Deactivated successfully.
Feb 13 15:03:57.557509 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:03:57.573960 containerd[1487]: time="2025-02-13T15:03:57.573905555Z" level=info msg="shim disconnected" id=384301efc634d536da8b681c5bc04d81e3678a9fd28daa3c67ba7ea86a7fc4be namespace=k8s.io
Feb 13 15:03:57.573960 containerd[1487]: time="2025-02-13T15:03:57.573958875Z" level=warning msg="cleaning up after shim disconnected" id=384301efc634d536da8b681c5bc04d81e3678a9fd28daa3c67ba7ea86a7fc4be namespace=k8s.io
Feb 13 15:03:57.573960 containerd[1487]: time="2025-02-13T15:03:57.573968195Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:03:57.953509 systemd[1]: run-containerd-runc-k8s.io-fd9f7ac94a354cc3ea09b50046785a93982958cefcbde7711081274c7239870b-runc.yNz5iK.mount: Deactivated successfully.
Feb 13 15:03:57.953604 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd9f7ac94a354cc3ea09b50046785a93982958cefcbde7711081274c7239870b-rootfs.mount: Deactivated successfully.
Feb 13 15:03:58.197075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3963552729.mount: Deactivated successfully.
Feb 13 15:03:58.446958 containerd[1487]: time="2025-02-13T15:03:58.446909223Z" level=info msg="CreateContainer within sandbox \"fe724a4931914c26eecf9edd51e4b2a662ed1cea140269e028991c1af154226a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:03:58.473286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount400383377.mount: Deactivated successfully.
Feb 13 15:03:58.475935 containerd[1487]: time="2025-02-13T15:03:58.475885835Z" level=info msg="CreateContainer within sandbox \"fe724a4931914c26eecf9edd51e4b2a662ed1cea140269e028991c1af154226a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"43b3ffda9ff4cca8bd1c560d85a8d0a18680d1acd6452724ff6352932d57329a\""
Feb 13 15:03:58.476481 containerd[1487]: time="2025-02-13T15:03:58.476445995Z" level=info msg="StartContainer for \"43b3ffda9ff4cca8bd1c560d85a8d0a18680d1acd6452724ff6352932d57329a\""
Feb 13 15:03:58.504484 systemd[1]: Started cri-containerd-43b3ffda9ff4cca8bd1c560d85a8d0a18680d1acd6452724ff6352932d57329a.scope - libcontainer container 43b3ffda9ff4cca8bd1c560d85a8d0a18680d1acd6452724ff6352932d57329a.
Feb 13 15:03:58.531442 containerd[1487]: time="2025-02-13T15:03:58.531230976Z" level=info msg="StartContainer for \"43b3ffda9ff4cca8bd1c560d85a8d0a18680d1acd6452724ff6352932d57329a\" returns successfully"
Feb 13 15:03:58.560908 systemd[1]: cri-containerd-43b3ffda9ff4cca8bd1c560d85a8d0a18680d1acd6452724ff6352932d57329a.scope: Deactivated successfully.
Feb 13 15:03:58.598055 containerd[1487]: time="2025-02-13T15:03:58.597778562Z" level=info msg="shim disconnected" id=43b3ffda9ff4cca8bd1c560d85a8d0a18680d1acd6452724ff6352932d57329a namespace=k8s.io
Feb 13 15:03:58.598055 containerd[1487]: time="2025-02-13T15:03:58.597842682Z" level=warning msg="cleaning up after shim disconnected" id=43b3ffda9ff4cca8bd1c560d85a8d0a18680d1acd6452724ff6352932d57329a namespace=k8s.io
Feb 13 15:03:58.598055 containerd[1487]: time="2025-02-13T15:03:58.597853042Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:03:59.034649 containerd[1487]: time="2025-02-13T15:03:59.034605410Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:03:59.035061 containerd[1487]: time="2025-02-13T15:03:59.035017610Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Feb 13 15:03:59.036004 containerd[1487]: time="2025-02-13T15:03:59.035970130Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:03:59.037367 containerd[1487]: time="2025-02-13T15:03:59.037335291Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.108783644s"
Feb 13 15:03:59.037418 containerd[1487]: time="2025-02-13T15:03:59.037366931Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Feb 13 15:03:59.039375 containerd[1487]: time="2025-02-13T15:03:59.039344611Z" level=info msg="CreateContainer within sandbox \"1f710596bf892de31cf5dd15a616c9b6b1bd46cbbe6c06fab70a5534f8d5052b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 13 15:03:59.049793 containerd[1487]: time="2025-02-13T15:03:59.049749135Z" level=info msg="CreateContainer within sandbox \"1f710596bf892de31cf5dd15a616c9b6b1bd46cbbe6c06fab70a5534f8d5052b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2715d4111141df3abf205e4368dfbee37723e7f379f4fc04707042fd04ab3862\""
Feb 13 15:03:59.051765 containerd[1487]: time="2025-02-13T15:03:59.050402095Z" level=info msg="StartContainer for \"2715d4111141df3abf205e4368dfbee37723e7f379f4fc04707042fd04ab3862\""
Feb 13 15:03:59.085552 systemd[1]: Started cri-containerd-2715d4111141df3abf205e4368dfbee37723e7f379f4fc04707042fd04ab3862.scope - libcontainer container 2715d4111141df3abf205e4368dfbee37723e7f379f4fc04707042fd04ab3862.
Feb 13 15:03:59.112134 containerd[1487]: time="2025-02-13T15:03:59.112082598Z" level=info msg="StartContainer for \"2715d4111141df3abf205e4368dfbee37723e7f379f4fc04707042fd04ab3862\" returns successfully"
Feb 13 15:03:59.455404 containerd[1487]: time="2025-02-13T15:03:59.455356362Z" level=info msg="CreateContainer within sandbox \"fe724a4931914c26eecf9edd51e4b2a662ed1cea140269e028991c1af154226a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:03:59.481559 containerd[1487]: time="2025-02-13T15:03:59.481366291Z" level=info msg="CreateContainer within sandbox \"fe724a4931914c26eecf9edd51e4b2a662ed1cea140269e028991c1af154226a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"908ae3fb47df9fad29f4953c7223bd1dc409f32ca6a41afb26ebc5644990e4f5\""
Feb 13 15:03:59.482430 containerd[1487]: time="2025-02-13T15:03:59.482396612Z" level=info msg="StartContainer for \"908ae3fb47df9fad29f4953c7223bd1dc409f32ca6a41afb26ebc5644990e4f5\""
Feb 13 15:03:59.514561 systemd[1]: Started cri-containerd-908ae3fb47df9fad29f4953c7223bd1dc409f32ca6a41afb26ebc5644990e4f5.scope - libcontainer container 908ae3fb47df9fad29f4953c7223bd1dc409f32ca6a41afb26ebc5644990e4f5.
Feb 13 15:03:59.534550 systemd[1]: cri-containerd-908ae3fb47df9fad29f4953c7223bd1dc409f32ca6a41afb26ebc5644990e4f5.scope: Deactivated successfully.
Feb 13 15:03:59.538466 containerd[1487]: time="2025-02-13T15:03:59.538241272Z" level=info msg="StartContainer for \"908ae3fb47df9fad29f4953c7223bd1dc409f32ca6a41afb26ebc5644990e4f5\" returns successfully"
Feb 13 15:03:59.632587 containerd[1487]: time="2025-02-13T15:03:59.632511026Z" level=info msg="shim disconnected" id=908ae3fb47df9fad29f4953c7223bd1dc409f32ca6a41afb26ebc5644990e4f5 namespace=k8s.io
Feb 13 15:03:59.632587 containerd[1487]: time="2025-02-13T15:03:59.632582506Z" level=warning msg="cleaning up after shim disconnected" id=908ae3fb47df9fad29f4953c7223bd1dc409f32ca6a41afb26ebc5644990e4f5 namespace=k8s.io
Feb 13 15:03:59.632587 containerd[1487]: time="2025-02-13T15:03:59.632593626Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:04:00.469089 containerd[1487]: time="2025-02-13T15:04:00.468998038Z" level=info msg="CreateContainer within sandbox \"fe724a4931914c26eecf9edd51e4b2a662ed1cea140269e028991c1af154226a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:04:00.482585 kubelet[2671]: I0213 15:04:00.479003 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-227jx" podStartSLOduration=1.893484793 podStartE2EDuration="14.478987002s" podCreationTimestamp="2025-02-13 15:03:46 +0000 UTC" firstStartedPulling="2025-02-13 15:03:46.452642842 +0000 UTC m=+16.175332258" lastFinishedPulling="2025-02-13 15:03:59.038145051 +0000 UTC m=+28.760834467" observedRunningTime="2025-02-13 15:03:59.482015092 +0000 UTC m=+29.204704508" watchObservedRunningTime="2025-02-13 15:04:00.478987002 +0000 UTC m=+30.201676418"
Feb 13 15:04:00.487285 containerd[1487]: time="2025-02-13T15:04:00.487237885Z" level=info msg="CreateContainer within sandbox \"fe724a4931914c26eecf9edd51e4b2a662ed1cea140269e028991c1af154226a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4b974e59dbaa1ffb9297a18e02e930a7ef2ea3c9b2b55bcb906c380ed0b7c574\""
Feb 13 15:04:00.488022 containerd[1487]: time="2025-02-13T15:04:00.487996925Z" level=info msg="StartContainer for \"4b974e59dbaa1ffb9297a18e02e930a7ef2ea3c9b2b55bcb906c380ed0b7c574\""
Feb 13 15:04:00.510482 systemd[1]: Started cri-containerd-4b974e59dbaa1ffb9297a18e02e930a7ef2ea3c9b2b55bcb906c380ed0b7c574.scope - libcontainer container 4b974e59dbaa1ffb9297a18e02e930a7ef2ea3c9b2b55bcb906c380ed0b7c574.
Feb 13 15:04:00.537500 containerd[1487]: time="2025-02-13T15:04:00.535939101Z" level=info msg="StartContainer for \"4b974e59dbaa1ffb9297a18e02e930a7ef2ea3c9b2b55bcb906c380ed0b7c574\" returns successfully"
Feb 13 15:04:00.675763 kubelet[2671]: I0213 15:04:00.675728 2671 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 15:04:00.699005 kubelet[2671]: I0213 15:04:00.698799 2671 topology_manager.go:215] "Topology Admit Handler" podUID="1fbfb117-35a6-4dde-8142-43f6b1101dec" podNamespace="kube-system" podName="coredns-7db6d8ff4d-g8hdl"
Feb 13 15:04:00.699659 kubelet[2671]: I0213 15:04:00.699417 2671 topology_manager.go:215] "Topology Admit Handler" podUID="86d348fb-f320-479c-bdfb-143093f227ad" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qg5q4"
Feb 13 15:04:00.709905 systemd[1]: Created slice kubepods-burstable-pod1fbfb117_35a6_4dde_8142_43f6b1101dec.slice - libcontainer container kubepods-burstable-pod1fbfb117_35a6_4dde_8142_43f6b1101dec.slice.
Feb 13 15:04:00.717429 systemd[1]: Created slice kubepods-burstable-pod86d348fb_f320_479c_bdfb_143093f227ad.slice - libcontainer container kubepods-burstable-pod86d348fb_f320_479c_bdfb_143093f227ad.slice.
Feb 13 15:04:00.865481 kubelet[2671]: I0213 15:04:00.865443 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86d348fb-f320-479c-bdfb-143093f227ad-config-volume\") pod \"coredns-7db6d8ff4d-qg5q4\" (UID: \"86d348fb-f320-479c-bdfb-143093f227ad\") " pod="kube-system/coredns-7db6d8ff4d-qg5q4"
Feb 13 15:04:00.865603 kubelet[2671]: I0213 15:04:00.865487 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcp7x\" (UniqueName: \"kubernetes.io/projected/1fbfb117-35a6-4dde-8142-43f6b1101dec-kube-api-access-gcp7x\") pod \"coredns-7db6d8ff4d-g8hdl\" (UID: \"1fbfb117-35a6-4dde-8142-43f6b1101dec\") " pod="kube-system/coredns-7db6d8ff4d-g8hdl"
Feb 13 15:04:00.865603 kubelet[2671]: I0213 15:04:00.865513 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1fbfb117-35a6-4dde-8142-43f6b1101dec-config-volume\") pod \"coredns-7db6d8ff4d-g8hdl\" (UID: \"1fbfb117-35a6-4dde-8142-43f6b1101dec\") " pod="kube-system/coredns-7db6d8ff4d-g8hdl"
Feb 13 15:04:00.865603 kubelet[2671]: I0213 15:04:00.865533 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnb5p\" (UniqueName: \"kubernetes.io/projected/86d348fb-f320-479c-bdfb-143093f227ad-kube-api-access-dnb5p\") pod \"coredns-7db6d8ff4d-qg5q4\" (UID: \"86d348fb-f320-479c-bdfb-143093f227ad\") " pod="kube-system/coredns-7db6d8ff4d-qg5q4"
Feb 13 15:04:01.016077 containerd[1487]: time="2025-02-13T15:04:01.016036824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-g8hdl,Uid:1fbfb117-35a6-4dde-8142-43f6b1101dec,Namespace:kube-system,Attempt:0,}"
Feb 13 15:04:01.021356 containerd[1487]: time="2025-02-13T15:04:01.020908185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qg5q4,Uid:86d348fb-f320-479c-bdfb-143093f227ad,Namespace:kube-system,Attempt:0,}"
Feb 13 15:04:01.478859 kubelet[2671]: I0213 15:04:01.478803 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gmz6d" podStartSLOduration=5.765152846 podStartE2EDuration="16.478765611s" podCreationTimestamp="2025-02-13 15:03:45 +0000 UTC" firstStartedPulling="2025-02-13 15:03:46.213729562 +0000 UTC m=+15.936418978" lastFinishedPulling="2025-02-13 15:03:56.927342327 +0000 UTC m=+26.650031743" observedRunningTime="2025-02-13 15:04:01.477305491 +0000 UTC m=+31.199994907" watchObservedRunningTime="2025-02-13 15:04:01.478765611 +0000 UTC m=+31.201455027"
Feb 13 15:04:02.141903 systemd[1]: Started sshd@8-10.0.0.8:22-10.0.0.1:53162.service - OpenSSH per-connection server daemon (10.0.0.1:53162).
Feb 13 15:04:02.187125 sshd[3533]: Accepted publickey for core from 10.0.0.1 port 53162 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:04:02.187640 sshd-session[3533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:04:02.191762 systemd-logind[1472]: New session 9 of user core.
Feb 13 15:04:02.202480 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 15:04:02.334935 sshd[3535]: Connection closed by 10.0.0.1 port 53162
Feb 13 15:04:02.335484 sshd-session[3533]: pam_unix(sshd:session): session closed for user core
Feb 13 15:04:02.341495 systemd[1]: sshd@8-10.0.0.8:22-10.0.0.1:53162.service: Deactivated successfully.
Feb 13 15:04:02.344924 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 15:04:02.349818 systemd-logind[1472]: Session 9 logged out. Waiting for processes to exit.
Feb 13 15:04:02.354721 systemd-logind[1472]: Removed session 9.
Feb 13 15:04:02.767402 systemd-networkd[1411]: cilium_host: Link UP
Feb 13 15:04:02.767544 systemd-networkd[1411]: cilium_net: Link UP
Feb 13 15:04:02.767546 systemd-networkd[1411]: cilium_net: Gained carrier
Feb 13 15:04:02.767709 systemd-networkd[1411]: cilium_host: Gained carrier
Feb 13 15:04:02.855572 systemd-networkd[1411]: cilium_vxlan: Link UP
Feb 13 15:04:02.855579 systemd-networkd[1411]: cilium_vxlan: Gained carrier
Feb 13 15:04:02.993495 systemd-networkd[1411]: cilium_net: Gained IPv6LL
Feb 13 15:04:03.147373 kernel: NET: Registered PF_ALG protocol family
Feb 13 15:04:03.530812 systemd-networkd[1411]: cilium_host: Gained IPv6LL
Feb 13 15:04:03.723231 systemd-networkd[1411]: lxc_health: Link UP
Feb 13 15:04:03.724270 systemd-networkd[1411]: lxc_health: Gained carrier
Feb 13 15:04:04.041507 systemd-networkd[1411]: cilium_vxlan: Gained IPv6LL
Feb 13 15:04:04.183784 systemd-networkd[1411]: lxcfad2764293d9: Link UP
Feb 13 15:04:04.192354 kernel: eth0: renamed from tmp8f802
Feb 13 15:04:04.202100 systemd-networkd[1411]: lxc8bcf69a102a0: Link UP
Feb 13 15:04:04.208680 systemd-networkd[1411]: lxcfad2764293d9: Gained carrier
Feb 13 15:04:04.210446 kernel: eth0: renamed from tmpda4ab
Feb 13 15:04:04.220292 systemd-networkd[1411]: lxc8bcf69a102a0: Gained carrier
Feb 13 15:04:04.746457 systemd-networkd[1411]: lxc_health: Gained IPv6LL
Feb 13 15:04:05.258469 systemd-networkd[1411]: lxcfad2764293d9: Gained IPv6LL
Feb 13 15:04:05.578464 systemd-networkd[1411]: lxc8bcf69a102a0: Gained IPv6LL
Feb 13 15:04:07.348790 systemd[1]: Started sshd@9-10.0.0.8:22-10.0.0.1:44166.service - OpenSSH per-connection server daemon (10.0.0.1:44166).
Feb 13 15:04:07.394483 sshd[3928]: Accepted publickey for core from 10.0.0.1 port 44166 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:04:07.395737 sshd-session[3928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:04:07.402021 systemd-logind[1472]: New session 10 of user core.
Feb 13 15:04:07.412512 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 15:04:07.535929 sshd[3930]: Connection closed by 10.0.0.1 port 44166
Feb 13 15:04:07.536677 sshd-session[3928]: pam_unix(sshd:session): session closed for user core
Feb 13 15:04:07.540224 systemd[1]: sshd@9-10.0.0.8:22-10.0.0.1:44166.service: Deactivated successfully.
Feb 13 15:04:07.542239 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 15:04:07.543081 systemd-logind[1472]: Session 10 logged out. Waiting for processes to exit.
Feb 13 15:04:07.543919 systemd-logind[1472]: Removed session 10.
Feb 13 15:04:07.730769 containerd[1487]: time="2025-02-13T15:04:07.729755772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:04:07.730769 containerd[1487]: time="2025-02-13T15:04:07.730287212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:04:07.730769 containerd[1487]: time="2025-02-13T15:04:07.730299412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:04:07.733052 containerd[1487]: time="2025-02-13T15:04:07.732707572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:04:07.739106 containerd[1487]: time="2025-02-13T15:04:07.735478253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:04:07.739106 containerd[1487]: time="2025-02-13T15:04:07.736811773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:04:07.739106 containerd[1487]: time="2025-02-13T15:04:07.736832493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:04:07.739106 containerd[1487]: time="2025-02-13T15:04:07.736900813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:04:07.763510 systemd[1]: Started cri-containerd-8f8020214759f2b0d9e49341bb2f58c2e54b8f3c04302aa28453a20eec5d550b.scope - libcontainer container 8f8020214759f2b0d9e49341bb2f58c2e54b8f3c04302aa28453a20eec5d550b.
Feb 13 15:04:07.764644 systemd[1]: Started cri-containerd-da4abb36382a593b1d91015d31d384e3b2de3e2637f1d31499f6c7734b74f70f.scope - libcontainer container da4abb36382a593b1d91015d31d384e3b2de3e2637f1d31499f6c7734b74f70f.
Feb 13 15:04:07.775585 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 15:04:07.778569 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 15:04:07.797052 containerd[1487]: time="2025-02-13T15:04:07.797009226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-g8hdl,Uid:1fbfb117-35a6-4dde-8142-43f6b1101dec,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f8020214759f2b0d9e49341bb2f58c2e54b8f3c04302aa28453a20eec5d550b\""
Feb 13 15:04:07.797209 containerd[1487]: time="2025-02-13T15:04:07.797021066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qg5q4,Uid:86d348fb-f320-479c-bdfb-143093f227ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"da4abb36382a593b1d91015d31d384e3b2de3e2637f1d31499f6c7734b74f70f\""
Feb 13 15:04:07.799892 containerd[1487]: time="2025-02-13T15:04:07.799860547Z" level=info msg="CreateContainer within sandbox \"8f8020214759f2b0d9e49341bb2f58c2e54b8f3c04302aa28453a20eec5d550b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 15:04:07.800800 containerd[1487]: time="2025-02-13T15:04:07.800761467Z" level=info msg="CreateContainer within sandbox \"da4abb36382a593b1d91015d31d384e3b2de3e2637f1d31499f6c7734b74f70f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 15:04:07.812910 containerd[1487]: time="2025-02-13T15:04:07.812877430Z" level=info msg="CreateContainer within sandbox \"8f8020214759f2b0d9e49341bb2f58c2e54b8f3c04302aa28453a20eec5d550b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"828ae232daa0315b4b89994c8e02f1ac45306e0fef0924850c5f63baa06073b9\""
Feb 13 15:04:07.813551 containerd[1487]: time="2025-02-13T15:04:07.813525590Z" level=info msg="StartContainer for \"828ae232daa0315b4b89994c8e02f1ac45306e0fef0924850c5f63baa06073b9\""
Feb 13 15:04:07.815534 containerd[1487]: time="2025-02-13T15:04:07.815483070Z" level=info msg="CreateContainer within sandbox \"da4abb36382a593b1d91015d31d384e3b2de3e2637f1d31499f6c7734b74f70f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6a3010e8a39ce814b2b81b2caa5b6b45291d723d84ccd101de2d56d64b5f73fc\""
Feb 13 15:04:07.816351 containerd[1487]: time="2025-02-13T15:04:07.816329150Z" level=info msg="StartContainer for \"6a3010e8a39ce814b2b81b2caa5b6b45291d723d84ccd101de2d56d64b5f73fc\""
Feb 13 15:04:07.841491 systemd[1]: Started cri-containerd-828ae232daa0315b4b89994c8e02f1ac45306e0fef0924850c5f63baa06073b9.scope - libcontainer container 828ae232daa0315b4b89994c8e02f1ac45306e0fef0924850c5f63baa06073b9.
Feb 13 15:04:07.843685 systemd[1]: Started cri-containerd-6a3010e8a39ce814b2b81b2caa5b6b45291d723d84ccd101de2d56d64b5f73fc.scope - libcontainer container 6a3010e8a39ce814b2b81b2caa5b6b45291d723d84ccd101de2d56d64b5f73fc.
Feb 13 15:04:07.876950 containerd[1487]: time="2025-02-13T15:04:07.876846483Z" level=info msg="StartContainer for \"6a3010e8a39ce814b2b81b2caa5b6b45291d723d84ccd101de2d56d64b5f73fc\" returns successfully"
Feb 13 15:04:07.876950 containerd[1487]: time="2025-02-13T15:04:07.876846763Z" level=info msg="StartContainer for \"828ae232daa0315b4b89994c8e02f1ac45306e0fef0924850c5f63baa06073b9\" returns successfully"
Feb 13 15:04:08.497096 kubelet[2671]: I0213 15:04:08.497039 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-g8hdl" podStartSLOduration=22.497022691 podStartE2EDuration="22.497022691s" podCreationTimestamp="2025-02-13 15:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:04:08.488691369 +0000 UTC m=+38.211380785" watchObservedRunningTime="2025-02-13 15:04:08.497022691 +0000 UTC m=+38.219712107"
Feb 13 15:04:11.383312 kubelet[2671]: I0213 15:04:11.383175 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 15:04:11.396350 kubelet[2671]: I0213 15:04:11.395952 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qg5q4" podStartSLOduration=25.395928787 podStartE2EDuration="25.395928787s" podCreationTimestamp="2025-02-13 15:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:04:08.514514294 +0000 UTC m=+38.237203670" watchObservedRunningTime="2025-02-13 15:04:11.395928787 +0000 UTC m=+41.118618203"
Feb 13 15:04:12.550345 systemd[1]: Started sshd@10-10.0.0.8:22-10.0.0.1:55912.service - OpenSSH per-connection server daemon (10.0.0.1:55912).
Feb 13 15:04:12.597343 sshd[4122]: Accepted publickey for core from 10.0.0.1 port 55912 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:04:12.598231 sshd-session[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:04:12.604391 systemd-logind[1472]: New session 11 of user core.
Feb 13 15:04:12.613487 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 15:04:12.734166 sshd[4124]: Connection closed by 10.0.0.1 port 55912
Feb 13 15:04:12.734651 sshd-session[4122]: pam_unix(sshd:session): session closed for user core
Feb 13 15:04:12.747313 systemd[1]: sshd@10-10.0.0.8:22-10.0.0.1:55912.service: Deactivated successfully.
Feb 13 15:04:12.750540 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 15:04:12.751713 systemd-logind[1472]: Session 11 logged out. Waiting for processes to exit.
Feb 13 15:04:12.752731 systemd[1]: Started sshd@11-10.0.0.8:22-10.0.0.1:55926.service - OpenSSH per-connection server daemon (10.0.0.1:55926).
Feb 13 15:04:12.753267 systemd-logind[1472]: Removed session 11.
Feb 13 15:04:12.792093 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 55926 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:04:12.793215 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:04:12.796842 systemd-logind[1472]: New session 12 of user core.
Feb 13 15:04:12.804444 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 15:04:12.942813 sshd[4140]: Connection closed by 10.0.0.1 port 55926
Feb 13 15:04:12.943383 sshd-session[4137]: pam_unix(sshd:session): session closed for user core
Feb 13 15:04:12.952375 systemd[1]: sshd@11-10.0.0.8:22-10.0.0.1:55926.service: Deactivated successfully.
Feb 13 15:04:12.955184 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 15:04:12.957861 systemd-logind[1472]: Session 12 logged out. Waiting for processes to exit.
Feb 13 15:04:12.968945 systemd[1]: Started sshd@12-10.0.0.8:22-10.0.0.1:55928.service - OpenSSH per-connection server daemon (10.0.0.1:55928).
Feb 13 15:04:12.970581 systemd-logind[1472]: Removed session 12.
Feb 13 15:04:13.011054 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 55928 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:04:13.012380 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:04:13.016536 systemd-logind[1472]: New session 13 of user core.
Feb 13 15:04:13.023457 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 15:04:13.133222 sshd[4154]: Connection closed by 10.0.0.1 port 55928
Feb 13 15:04:13.133572 sshd-session[4151]: pam_unix(sshd:session): session closed for user core
Feb 13 15:04:13.136265 systemd[1]: sshd@12-10.0.0.8:22-10.0.0.1:55928.service: Deactivated successfully.
Feb 13 15:04:13.138163 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 15:04:13.139412 systemd-logind[1472]: Session 13 logged out. Waiting for processes to exit.
Feb 13 15:04:13.140198 systemd-logind[1472]: Removed session 13.
Feb 13 15:04:18.144206 systemd[1]: Started sshd@13-10.0.0.8:22-10.0.0.1:55930.service - OpenSSH per-connection server daemon (10.0.0.1:55930).
Feb 13 15:04:18.184646 sshd[4172]: Accepted publickey for core from 10.0.0.1 port 55930 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:04:18.186059 sshd-session[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:04:18.189625 systemd-logind[1472]: New session 14 of user core.
Feb 13 15:04:18.196482 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 15:04:18.303410 sshd[4174]: Connection closed by 10.0.0.1 port 55930
Feb 13 15:04:18.304116 sshd-session[4172]: pam_unix(sshd:session): session closed for user core
Feb 13 15:04:18.310136 systemd[1]: sshd@13-10.0.0.8:22-10.0.0.1:55930.service: Deactivated successfully.
Feb 13 15:04:18.311679 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 15:04:18.313502 systemd-logind[1472]: Session 14 logged out. Waiting for processes to exit.
Feb 13 15:04:18.314477 systemd-logind[1472]: Removed session 14.
Feb 13 15:04:23.318888 systemd[1]: Started sshd@14-10.0.0.8:22-10.0.0.1:45958.service - OpenSSH per-connection server daemon (10.0.0.1:45958).
Feb 13 15:04:23.358900 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 45958 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:04:23.360136 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:04:23.364715 systemd-logind[1472]: New session 15 of user core.
Feb 13 15:04:23.371499 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 15:04:23.478645 sshd[4189]: Connection closed by 10.0.0.1 port 45958
Feb 13 15:04:23.479135 sshd-session[4187]: pam_unix(sshd:session): session closed for user core
Feb 13 15:04:23.489415 systemd[1]: sshd@14-10.0.0.8:22-10.0.0.1:45958.service: Deactivated successfully.
Feb 13 15:04:23.491027 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 15:04:23.492467 systemd-logind[1472]: Session 15 logged out. Waiting for processes to exit.
Feb 13 15:04:23.494267 systemd[1]: Started sshd@15-10.0.0.8:22-10.0.0.1:45962.service - OpenSSH per-connection server daemon (10.0.0.1:45962).
Feb 13 15:04:23.495250 systemd-logind[1472]: Removed session 15.
Feb 13 15:04:23.533571 sshd[4202]: Accepted publickey for core from 10.0.0.1 port 45962 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:04:23.534759 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:04:23.539179 systemd-logind[1472]: New session 16 of user core.
Feb 13 15:04:23.547494 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 15:04:23.790190 sshd[4205]: Connection closed by 10.0.0.1 port 45962
Feb 13 15:04:23.790776 sshd-session[4202]: pam_unix(sshd:session): session closed for user core
Feb 13 15:04:23.808474 systemd[1]: sshd@15-10.0.0.8:22-10.0.0.1:45962.service: Deactivated successfully.
Feb 13 15:04:23.809997 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 15:04:23.810650 systemd-logind[1472]: Session 16 logged out. Waiting for processes to exit.
Feb 13 15:04:23.813546 systemd[1]: Started sshd@16-10.0.0.8:22-10.0.0.1:45976.service - OpenSSH per-connection server daemon (10.0.0.1:45976).
Feb 13 15:04:23.814892 systemd-logind[1472]: Removed session 16.
Feb 13 15:04:23.863030 sshd[4215]: Accepted publickey for core from 10.0.0.1 port 45976 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:04:23.864256 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:04:23.868371 systemd-logind[1472]: New session 17 of user core.
Feb 13 15:04:23.880562 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 15:04:25.127440 sshd[4218]: Connection closed by 10.0.0.1 port 45976
Feb 13 15:04:25.127943 sshd-session[4215]: pam_unix(sshd:session): session closed for user core
Feb 13 15:04:25.136951 systemd[1]: sshd@16-10.0.0.8:22-10.0.0.1:45976.service: Deactivated successfully.
Feb 13 15:04:25.139513 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 15:04:25.141795 systemd-logind[1472]: Session 17 logged out. Waiting for processes to exit.
Feb 13 15:04:25.155640 systemd[1]: Started sshd@17-10.0.0.8:22-10.0.0.1:45978.service - OpenSSH per-connection server daemon (10.0.0.1:45978).
Feb 13 15:04:25.158091 systemd-logind[1472]: Removed session 17.
Feb 13 15:04:25.193858 sshd[4239]: Accepted publickey for core from 10.0.0.1 port 45978 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:04:25.194954 sshd-session[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:04:25.199374 systemd-logind[1472]: New session 18 of user core.
Feb 13 15:04:25.218508 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 15:04:25.426474 sshd[4242]: Connection closed by 10.0.0.1 port 45978
Feb 13 15:04:25.426736 sshd-session[4239]: pam_unix(sshd:session): session closed for user core
Feb 13 15:04:25.439982 systemd[1]: sshd@17-10.0.0.8:22-10.0.0.1:45978.service: Deactivated successfully.
Feb 13 15:04:25.441684 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 15:04:25.442767 systemd-logind[1472]: Session 18 logged out. Waiting for processes to exit.
Feb 13 15:04:25.444054 systemd-logind[1472]: Removed session 18.
Feb 13 15:04:25.459167 systemd[1]: Started sshd@18-10.0.0.8:22-10.0.0.1:45982.service - OpenSSH per-connection server daemon (10.0.0.1:45982).
Feb 13 15:04:25.495238 sshd[4254]: Accepted publickey for core from 10.0.0.1 port 45982 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:04:25.499464 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:04:25.504476 systemd-logind[1472]: New session 19 of user core.
Feb 13 15:04:25.514478 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 15:04:25.620267 sshd[4256]: Connection closed by 10.0.0.1 port 45982
Feb 13 15:04:25.620621 sshd-session[4254]: pam_unix(sshd:session): session closed for user core
Feb 13 15:04:25.623725 systemd[1]: sshd@18-10.0.0.8:22-10.0.0.1:45982.service: Deactivated successfully.
Feb 13 15:04:25.625737 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 15:04:25.627981 systemd-logind[1472]: Session 19 logged out. Waiting for processes to exit.
Feb 13 15:04:25.628924 systemd-logind[1472]: Removed session 19.
Feb 13 15:04:30.632626 systemd[1]: Started sshd@19-10.0.0.8:22-10.0.0.1:45996.service - OpenSSH per-connection server daemon (10.0.0.1:45996).
Feb 13 15:04:30.684350 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 45996 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:04:30.685499 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:04:30.689858 systemd-logind[1472]: New session 20 of user core.
Feb 13 15:04:30.701544 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 15:04:30.818861 sshd[4277]: Connection closed by 10.0.0.1 port 45996
Feb 13 15:04:30.819417 sshd-session[4275]: pam_unix(sshd:session): session closed for user core
Feb 13 15:04:30.822744 systemd[1]: sshd@19-10.0.0.8:22-10.0.0.1:45996.service: Deactivated successfully.
Feb 13 15:04:30.824375 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 15:04:30.825532 systemd-logind[1472]: Session 20 logged out. Waiting for processes to exit.
Feb 13 15:04:30.826705 systemd-logind[1472]: Removed session 20.
Feb 13 15:04:35.836167 systemd[1]: Started sshd@20-10.0.0.8:22-10.0.0.1:57996.service - OpenSSH per-connection server daemon (10.0.0.1:57996).
Feb 13 15:04:35.909567 sshd[4290]: Accepted publickey for core from 10.0.0.1 port 57996 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:04:35.910784 sshd-session[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:04:35.915430 systemd-logind[1472]: New session 21 of user core.
Feb 13 15:04:35.926510 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 15:04:36.036222 sshd[4292]: Connection closed by 10.0.0.1 port 57996
Feb 13 15:04:36.036692 sshd-session[4290]: pam_unix(sshd:session): session closed for user core
Feb 13 15:04:36.040283 systemd[1]: sshd@20-10.0.0.8:22-10.0.0.1:57996.service: Deactivated successfully.
Feb 13 15:04:36.041983 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 15:04:36.043355 systemd-logind[1472]: Session 21 logged out. Waiting for processes to exit.
Feb 13 15:04:36.044194 systemd-logind[1472]: Removed session 21.
Feb 13 15:04:41.048498 systemd[1]: Started sshd@21-10.0.0.8:22-10.0.0.1:58002.service - OpenSSH per-connection server daemon (10.0.0.1:58002).
Feb 13 15:04:41.088686 sshd[4306]: Accepted publickey for core from 10.0.0.1 port 58002 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:04:41.089919 sshd-session[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:04:41.093536 systemd-logind[1472]: New session 22 of user core.
Feb 13 15:04:41.103487 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 15:04:41.212343 sshd[4308]: Connection closed by 10.0.0.1 port 58002
Feb 13 15:04:41.212275 sshd-session[4306]: pam_unix(sshd:session): session closed for user core
Feb 13 15:04:41.222864 systemd[1]: sshd@21-10.0.0.8:22-10.0.0.1:58002.service: Deactivated successfully.
Feb 13 15:04:41.224350 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 15:04:41.224944 systemd-logind[1472]: Session 22 logged out. Waiting for processes to exit.
Feb 13 15:04:41.237580 systemd[1]: Started sshd@22-10.0.0.8:22-10.0.0.1:58010.service - OpenSSH per-connection server daemon (10.0.0.1:58010).
Feb 13 15:04:41.238624 systemd-logind[1472]: Removed session 22.
Feb 13 15:04:41.276842 sshd[4320]: Accepted publickey for core from 10.0.0.1 port 58010 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:04:41.277958 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:04:41.282406 systemd-logind[1472]: New session 23 of user core.
Feb 13 15:04:41.291527 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 15:04:43.426350 containerd[1487]: time="2025-02-13T15:04:43.424554972Z" level=info msg="StopContainer for \"2715d4111141df3abf205e4368dfbee37723e7f379f4fc04707042fd04ab3862\" with timeout 30 (s)"
Feb 13 15:04:43.427672 containerd[1487]: time="2025-02-13T15:04:43.427522788Z" level=info msg="Stop container \"2715d4111141df3abf205e4368dfbee37723e7f379f4fc04707042fd04ab3862\" with signal terminated"
Feb 13 15:04:43.457211 systemd[1]: run-containerd-runc-k8s.io-4b974e59dbaa1ffb9297a18e02e930a7ef2ea3c9b2b55bcb906c380ed0b7c574-runc.nJqkzz.mount: Deactivated successfully.
Feb 13 15:04:43.458249 systemd[1]: cri-containerd-2715d4111141df3abf205e4368dfbee37723e7f379f4fc04707042fd04ab3862.scope: Deactivated successfully.
Feb 13 15:04:43.475778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2715d4111141df3abf205e4368dfbee37723e7f379f4fc04707042fd04ab3862-rootfs.mount: Deactivated successfully.
Feb 13 15:04:43.485592 containerd[1487]: time="2025-02-13T15:04:43.485378294Z" level=info msg="shim disconnected" id=2715d4111141df3abf205e4368dfbee37723e7f379f4fc04707042fd04ab3862 namespace=k8s.io
Feb 13 15:04:43.485592 containerd[1487]: time="2025-02-13T15:04:43.485439534Z" level=warning msg="cleaning up after shim disconnected" id=2715d4111141df3abf205e4368dfbee37723e7f379f4fc04707042fd04ab3862 namespace=k8s.io
Feb 13 15:04:43.485592 containerd[1487]: time="2025-02-13T15:04:43.485447814Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:04:43.485992 containerd[1487]: time="2025-02-13T15:04:43.485950657Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:04:43.490711 containerd[1487]: time="2025-02-13T15:04:43.490570801Z" level=info msg="StopContainer for \"4b974e59dbaa1ffb9297a18e02e930a7ef2ea3c9b2b55bcb906c380ed0b7c574\" with timeout 2 (s)"
Feb 13 15:04:43.490889 containerd[1487]: time="2025-02-13T15:04:43.490848442Z" level=info msg="Stop container \"4b974e59dbaa1ffb9297a18e02e930a7ef2ea3c9b2b55bcb906c380ed0b7c574\" with signal terminated"
Feb 13 15:04:43.499007 systemd-networkd[1411]: lxc_health: Link DOWN
Feb 13 15:04:43.499012 systemd-networkd[1411]: lxc_health: Lost carrier
Feb 13 15:04:43.514808 systemd[1]: cri-containerd-4b974e59dbaa1ffb9297a18e02e930a7ef2ea3c9b2b55bcb906c380ed0b7c574.scope: Deactivated successfully.
Feb 13 15:04:43.515107 systemd[1]: cri-containerd-4b974e59dbaa1ffb9297a18e02e930a7ef2ea3c9b2b55bcb906c380ed0b7c574.scope: Consumed 6.408s CPU time, 122.1M memory peak, 144K read from disk, 12.9M written to disk.
Feb 13 15:04:43.531059 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b974e59dbaa1ffb9297a18e02e930a7ef2ea3c9b2b55bcb906c380ed0b7c574-rootfs.mount: Deactivated successfully.
Feb 13 15:04:43.538400 containerd[1487]: time="2025-02-13T15:04:43.538360693Z" level=info msg="StopContainer for \"2715d4111141df3abf205e4368dfbee37723e7f379f4fc04707042fd04ab3862\" returns successfully"
Feb 13 15:04:43.539245 containerd[1487]: time="2025-02-13T15:04:43.539209378Z" level=info msg="StopPodSandbox for \"1f710596bf892de31cf5dd15a616c9b6b1bd46cbbe6c06fab70a5534f8d5052b\""
Feb 13 15:04:43.539335 containerd[1487]: time="2025-02-13T15:04:43.539263778Z" level=info msg="Container to stop \"2715d4111141df3abf205e4368dfbee37723e7f379f4fc04707042fd04ab3862\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:04:43.540023 containerd[1487]: time="2025-02-13T15:04:43.539758501Z" level=info msg="shim disconnected" id=4b974e59dbaa1ffb9297a18e02e930a7ef2ea3c9b2b55bcb906c380ed0b7c574 namespace=k8s.io
Feb 13 15:04:43.540023 containerd[1487]: time="2025-02-13T15:04:43.539820461Z" level=warning msg="cleaning up after shim disconnected" id=4b974e59dbaa1ffb9297a18e02e930a7ef2ea3c9b2b55bcb906c380ed0b7c574 namespace=k8s.io
Feb 13 15:04:43.540023 containerd[1487]: time="2025-02-13T15:04:43.539840781Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:04:43.541208 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f710596bf892de31cf5dd15a616c9b6b1bd46cbbe6c06fab70a5534f8d5052b-shm.mount: Deactivated successfully.
Feb 13 15:04:43.547106 systemd[1]: cri-containerd-1f710596bf892de31cf5dd15a616c9b6b1bd46cbbe6c06fab70a5534f8d5052b.scope: Deactivated successfully.
Feb 13 15:04:43.560160 containerd[1487]: time="2025-02-13T15:04:43.560111368Z" level=info msg="StopContainer for \"4b974e59dbaa1ffb9297a18e02e930a7ef2ea3c9b2b55bcb906c380ed0b7c574\" returns successfully"
Feb 13 15:04:43.560699 containerd[1487]: time="2025-02-13T15:04:43.560674691Z" level=info msg="StopPodSandbox for \"fe724a4931914c26eecf9edd51e4b2a662ed1cea140269e028991c1af154226a\""
Feb 13 15:04:43.560760 containerd[1487]: time="2025-02-13T15:04:43.560720372Z" level=info msg="Container to stop \"384301efc634d536da8b681c5bc04d81e3678a9fd28daa3c67ba7ea86a7fc4be\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:04:43.560760 containerd[1487]: time="2025-02-13T15:04:43.560730972Z" level=info msg="Container to stop \"4b974e59dbaa1ffb9297a18e02e930a7ef2ea3c9b2b55bcb906c380ed0b7c574\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:04:43.560760 containerd[1487]: time="2025-02-13T15:04:43.560740332Z" level=info msg="Container to stop \"43b3ffda9ff4cca8bd1c560d85a8d0a18680d1acd6452724ff6352932d57329a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:04:43.560760 containerd[1487]: time="2025-02-13T15:04:43.560748452Z" level=info msg="Container to stop \"908ae3fb47df9fad29f4953c7223bd1dc409f32ca6a41afb26ebc5644990e4f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:04:43.560760 containerd[1487]: time="2025-02-13T15:04:43.560756532Z" level=info msg="Container to stop \"fd9f7ac94a354cc3ea09b50046785a93982958cefcbde7711081274c7239870b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:04:43.567000 systemd[1]: cri-containerd-fe724a4931914c26eecf9edd51e4b2a662ed1cea140269e028991c1af154226a.scope: Deactivated successfully.
Feb 13 15:04:43.591997 containerd[1487]: time="2025-02-13T15:04:43.591837096Z" level=info msg="shim disconnected" id=fe724a4931914c26eecf9edd51e4b2a662ed1cea140269e028991c1af154226a namespace=k8s.io
Feb 13 15:04:43.591997 containerd[1487]: time="2025-02-13T15:04:43.591918816Z" level=warning msg="cleaning up after shim disconnected" id=fe724a4931914c26eecf9edd51e4b2a662ed1cea140269e028991c1af154226a namespace=k8s.io
Feb 13 15:04:43.591997 containerd[1487]: time="2025-02-13T15:04:43.591927497Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:04:43.592194 containerd[1487]: time="2025-02-13T15:04:43.592058617Z" level=info msg="shim disconnected" id=1f710596bf892de31cf5dd15a616c9b6b1bd46cbbe6c06fab70a5534f8d5052b namespace=k8s.io
Feb 13 15:04:43.592194 containerd[1487]: time="2025-02-13T15:04:43.592097617Z" level=warning msg="cleaning up after shim disconnected" id=1f710596bf892de31cf5dd15a616c9b6b1bd46cbbe6c06fab70a5534f8d5052b namespace=k8s.io
Feb 13 15:04:43.592194 containerd[1487]: time="2025-02-13T15:04:43.592105177Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:04:43.604196 containerd[1487]: time="2025-02-13T15:04:43.604149601Z" level=info msg="TearDown network for sandbox \"1f710596bf892de31cf5dd15a616c9b6b1bd46cbbe6c06fab70a5534f8d5052b\" successfully"
Feb 13 15:04:43.604196 containerd[1487]: time="2025-02-13T15:04:43.604185481Z" level=info msg="StopPodSandbox for \"1f710596bf892de31cf5dd15a616c9b6b1bd46cbbe6c06fab70a5534f8d5052b\" returns successfully"
Feb 13 15:04:43.614736 containerd[1487]: time="2025-02-13T15:04:43.614682937Z" level=info msg="TearDown network for sandbox \"fe724a4931914c26eecf9edd51e4b2a662ed1cea140269e028991c1af154226a\" successfully"
Feb 13 15:04:43.614736 containerd[1487]: time="2025-02-13T15:04:43.614718017Z" level=info msg="StopPodSandbox for \"fe724a4931914c26eecf9edd51e4b2a662ed1cea140269e028991c1af154226a\" returns successfully"
Feb 13 15:04:43.714438 kubelet[2671]: I0213 15:04:43.714300 2671 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-cni-path\") pod \"7e56a6ab-64bc-4095-af2a-0373950228a4\" (UID: \"7e56a6ab-64bc-4095-af2a-0373950228a4\") "
Feb 13 15:04:43.714438 kubelet[2671]: I0213 15:04:43.714380 2671 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-etc-cni-netd\") pod \"7e56a6ab-64bc-4095-af2a-0373950228a4\" (UID: \"7e56a6ab-64bc-4095-af2a-0373950228a4\") "
Feb 13 15:04:43.714438 kubelet[2671]: I0213 15:04:43.714409 2671 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-cilium-run\") pod \"7e56a6ab-64bc-4095-af2a-0373950228a4\" (UID: \"7e56a6ab-64bc-4095-af2a-0373950228a4\") "
Feb 13 15:04:43.714438 kubelet[2671]: I0213 15:04:43.714437 2671 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-bpf-maps\") pod \"7e56a6ab-64bc-4095-af2a-0373950228a4\" (UID: \"7e56a6ab-64bc-4095-af2a-0373950228a4\") "
Feb 13 15:04:43.714875 kubelet[2671]: I0213 15:04:43.714466 2671 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/175ad923-a5cd-4d71-830c-9cb0bb41983b-cilium-config-path\") pod \"175ad923-a5cd-4d71-830c-9cb0bb41983b\" (UID: \"175ad923-a5cd-4d71-830c-9cb0bb41983b\") "
Feb 13 15:04:43.714875 kubelet[2671]: I0213 15:04:43.714484 2671 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-xtables-lock\") pod \"7e56a6ab-64bc-4095-af2a-0373950228a4\" (UID: \"7e56a6ab-64bc-4095-af2a-0373950228a4\") "
Feb 13 15:04:43.714875 kubelet[2671]: I0213 15:04:43.714502 2671 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7e56a6ab-64bc-4095-af2a-0373950228a4-clustermesh-secrets\") pod \"7e56a6ab-64bc-4095-af2a-0373950228a4\" (UID: \"7e56a6ab-64bc-4095-af2a-0373950228a4\") "
Feb 13 15:04:43.714875 kubelet[2671]: I0213 15:04:43.714515 2671 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-hostproc\") pod \"7e56a6ab-64bc-4095-af2a-0373950228a4\" (UID: \"7e56a6ab-64bc-4095-af2a-0373950228a4\") "
Feb 13 15:04:43.714875 kubelet[2671]: I0213 15:04:43.714528 2671 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-lib-modules\") pod \"7e56a6ab-64bc-4095-af2a-0373950228a4\" (UID: \"7e56a6ab-64bc-4095-af2a-0373950228a4\") "
Feb 13 15:04:43.714875 kubelet[2671]: I0213 15:04:43.714548 2671 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7e56a6ab-64bc-4095-af2a-0373950228a4-hubble-tls\") pod \"7e56a6ab-64bc-4095-af2a-0373950228a4\" (UID: \"7e56a6ab-64bc-4095-af2a-0373950228a4\") "
Feb 13 15:04:43.715002 kubelet[2671]: I0213 15:04:43.714564 2671 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvsk5\" (UniqueName: \"kubernetes.io/projected/175ad923-a5cd-4d71-830c-9cb0bb41983b-kube-api-access-jvsk5\") pod \"175ad923-a5cd-4d71-830c-9cb0bb41983b\" (UID: \"175ad923-a5cd-4d71-830c-9cb0bb41983b\") "
Feb 13 15:04:43.715002 kubelet[2671]: I0213 15:04:43.714578 2671 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-host-proc-sys-net\") pod \"7e56a6ab-64bc-4095-af2a-0373950228a4\" (UID: \"7e56a6ab-64bc-4095-af2a-0373950228a4\") "
Feb 13 15:04:43.715002 kubelet[2671]: I0213 15:04:43.714595 2671 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smn2x\" (UniqueName: \"kubernetes.io/projected/7e56a6ab-64bc-4095-af2a-0373950228a4-kube-api-access-smn2x\") pod \"7e56a6ab-64bc-4095-af2a-0373950228a4\" (UID: \"7e56a6ab-64bc-4095-af2a-0373950228a4\") "
Feb 13 15:04:43.715002 kubelet[2671]: I0213 15:04:43.714609 2671 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-cilium-cgroup\") pod \"7e56a6ab-64bc-4095-af2a-0373950228a4\" (UID: \"7e56a6ab-64bc-4095-af2a-0373950228a4\") "
Feb 13 15:04:43.715002 kubelet[2671]: I0213 15:04:43.714626 2671 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-host-proc-sys-kernel\") pod \"7e56a6ab-64bc-4095-af2a-0373950228a4\" (UID: \"7e56a6ab-64bc-4095-af2a-0373950228a4\") "
Feb 13 15:04:43.715002 kubelet[2671]: I0213 15:04:43.714644 2671 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7e56a6ab-64bc-4095-af2a-0373950228a4-cilium-config-path\") pod \"7e56a6ab-64bc-4095-af2a-0373950228a4\" (UID: \"7e56a6ab-64bc-4095-af2a-0373950228a4\") "
Feb 13 15:04:43.724923 kubelet[2671]: I0213 15:04:43.724684 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-hostproc" (OuterVolumeSpecName: "hostproc") pod "7e56a6ab-64bc-4095-af2a-0373950228a4" (UID: "7e56a6ab-64bc-4095-af2a-0373950228a4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:04:43.724923 kubelet[2671]: I0213 15:04:43.724717 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7e56a6ab-64bc-4095-af2a-0373950228a4" (UID: "7e56a6ab-64bc-4095-af2a-0373950228a4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:04:43.724923 kubelet[2671]: I0213 15:04:43.724685 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7e56a6ab-64bc-4095-af2a-0373950228a4" (UID: "7e56a6ab-64bc-4095-af2a-0373950228a4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:04:43.725831 kubelet[2671]: I0213 15:04:43.725792 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-cni-path" (OuterVolumeSpecName: "cni-path") pod "7e56a6ab-64bc-4095-af2a-0373950228a4" (UID: "7e56a6ab-64bc-4095-af2a-0373950228a4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:04:43.725866 kubelet[2671]: I0213 15:04:43.725833 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7e56a6ab-64bc-4095-af2a-0373950228a4" (UID: "7e56a6ab-64bc-4095-af2a-0373950228a4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:04:43.725866 kubelet[2671]: I0213 15:04:43.725848 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7e56a6ab-64bc-4095-af2a-0373950228a4" (UID: "7e56a6ab-64bc-4095-af2a-0373950228a4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:04:43.725866 kubelet[2671]: I0213 15:04:43.725856 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7e56a6ab-64bc-4095-af2a-0373950228a4" (UID: "7e56a6ab-64bc-4095-af2a-0373950228a4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:04:43.725935 kubelet[2671]: I0213 15:04:43.725871 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7e56a6ab-64bc-4095-af2a-0373950228a4" (UID: "7e56a6ab-64bc-4095-af2a-0373950228a4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:04:43.730180 kubelet[2671]: I0213 15:04:43.728958 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/175ad923-a5cd-4d71-830c-9cb0bb41983b-kube-api-access-jvsk5" (OuterVolumeSpecName: "kube-api-access-jvsk5") pod "175ad923-a5cd-4d71-830c-9cb0bb41983b" (UID: "175ad923-a5cd-4d71-830c-9cb0bb41983b"). InnerVolumeSpecName "kube-api-access-jvsk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:04:43.730180 kubelet[2671]: I0213 15:04:43.729008 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7e56a6ab-64bc-4095-af2a-0373950228a4" (UID: "7e56a6ab-64bc-4095-af2a-0373950228a4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:04:43.730180 kubelet[2671]: I0213 15:04:43.729089 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7e56a6ab-64bc-4095-af2a-0373950228a4" (UID: "7e56a6ab-64bc-4095-af2a-0373950228a4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:04:43.730180 kubelet[2671]: I0213 15:04:43.730090 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/175ad923-a5cd-4d71-830c-9cb0bb41983b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "175ad923-a5cd-4d71-830c-9cb0bb41983b" (UID: "175ad923-a5cd-4d71-830c-9cb0bb41983b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 15:04:43.731065 kubelet[2671]: I0213 15:04:43.731021 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e56a6ab-64bc-4095-af2a-0373950228a4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7e56a6ab-64bc-4095-af2a-0373950228a4" (UID: "7e56a6ab-64bc-4095-af2a-0373950228a4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 15:04:43.731598 kubelet[2671]: I0213 15:04:43.731562 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e56a6ab-64bc-4095-af2a-0373950228a4-kube-api-access-smn2x" (OuterVolumeSpecName: "kube-api-access-smn2x") pod "7e56a6ab-64bc-4095-af2a-0373950228a4" (UID: "7e56a6ab-64bc-4095-af2a-0373950228a4"). InnerVolumeSpecName "kube-api-access-smn2x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:04:43.731894 kubelet[2671]: I0213 15:04:43.731857 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e56a6ab-64bc-4095-af2a-0373950228a4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7e56a6ab-64bc-4095-af2a-0373950228a4" (UID: "7e56a6ab-64bc-4095-af2a-0373950228a4").
InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:04:43.732559 kubelet[2671]: I0213 15:04:43.732533 2671 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e56a6ab-64bc-4095-af2a-0373950228a4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7e56a6ab-64bc-4095-af2a-0373950228a4" (UID: "7e56a6ab-64bc-4095-af2a-0373950228a4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 15:04:43.814941 kubelet[2671]: I0213 15:04:43.814893 2671 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 13 15:04:43.815133 kubelet[2671]: I0213 15:04:43.815121 2671 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7e56a6ab-64bc-4095-af2a-0373950228a4-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:04:43.815190 kubelet[2671]: I0213 15:04:43.815181 2671 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:04:43.815245 kubelet[2671]: I0213 15:04:43.815235 2671 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 13 15:04:43.815292 kubelet[2671]: I0213 15:04:43.815283 2671 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 13 15:04:43.815373 kubelet[2671]: I0213 15:04:43.815362 2671 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 13 15:04:43.815424 kubelet[2671]: I0213 15:04:43.815414 2671 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7e56a6ab-64bc-4095-af2a-0373950228a4-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 13 15:04:43.815479 kubelet[2671]: I0213 15:04:43.815470 2671 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 13 15:04:43.815529 kubelet[2671]: I0213 15:04:43.815520 2671 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 13 15:04:43.815608 kubelet[2671]: I0213 15:04:43.815598 2671 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/175ad923-a5cd-4d71-830c-9cb0bb41983b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 15:04:43.815655 kubelet[2671]: I0213 15:04:43.815647 2671 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 13 15:04:43.815706 kubelet[2671]: I0213 
15:04:43.815697 2671 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-jvsk5\" (UniqueName: \"kubernetes.io/projected/175ad923-a5cd-4d71-830c-9cb0bb41983b-kube-api-access-jvsk5\") on node \"localhost\" DevicePath \"\"" Feb 13 15:04:43.815760 kubelet[2671]: I0213 15:04:43.815750 2671 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7e56a6ab-64bc-4095-af2a-0373950228a4-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 13 15:04:43.815815 kubelet[2671]: I0213 15:04:43.815805 2671 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 13 15:04:43.815863 kubelet[2671]: I0213 15:04:43.815855 2671 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7e56a6ab-64bc-4095-af2a-0373950228a4-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 13 15:04:43.815913 kubelet[2671]: I0213 15:04:43.815905 2671 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-smn2x\" (UniqueName: \"kubernetes.io/projected/7e56a6ab-64bc-4095-af2a-0373950228a4-kube-api-access-smn2x\") on node \"localhost\" DevicePath \"\"" Feb 13 15:04:44.366298 systemd[1]: Removed slice kubepods-burstable-pod7e56a6ab_64bc_4095_af2a_0373950228a4.slice - libcontainer container kubepods-burstable-pod7e56a6ab_64bc_4095_af2a_0373950228a4.slice. Feb 13 15:04:44.366754 systemd[1]: kubepods-burstable-pod7e56a6ab_64bc_4095_af2a_0373950228a4.slice: Consumed 6.537s CPU time, 122.4M memory peak, 200K read from disk, 12.9M written to disk. Feb 13 15:04:44.367819 systemd[1]: Removed slice kubepods-besteffort-pod175ad923_a5cd_4d71_830c_9cb0bb41983b.slice - libcontainer container kubepods-besteffort-pod175ad923_a5cd_4d71_830c_9cb0bb41983b.slice. Feb 13 15:04:44.451703 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f710596bf892de31cf5dd15a616c9b6b1bd46cbbe6c06fab70a5534f8d5052b-rootfs.mount: Deactivated successfully. Feb 13 15:04:44.451800 systemd[1]: var-lib-kubelet-pods-175ad923\x2da5cd\x2d4d71\x2d830c\x2d9cb0bb41983b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djvsk5.mount: Deactivated successfully. Feb 13 15:04:44.451854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe724a4931914c26eecf9edd51e4b2a662ed1cea140269e028991c1af154226a-rootfs.mount: Deactivated successfully. Feb 13 15:04:44.451905 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fe724a4931914c26eecf9edd51e4b2a662ed1cea140269e028991c1af154226a-shm.mount: Deactivated successfully. Feb 13 15:04:44.451962 systemd[1]: var-lib-kubelet-pods-7e56a6ab\x2d64bc\x2d4095\x2daf2a\x2d0373950228a4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsmn2x.mount: Deactivated successfully. Feb 13 15:04:44.452008 systemd[1]: var-lib-kubelet-pods-7e56a6ab\x2d64bc\x2d4095\x2daf2a\x2d0373950228a4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 15:04:44.452061 systemd[1]: var-lib-kubelet-pods-7e56a6ab\x2d64bc\x2d4095\x2daf2a\x2d0373950228a4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 13 15:04:44.554284 kubelet[2671]: I0213 15:04:44.554238 2671 scope.go:117] "RemoveContainer" containerID="2715d4111141df3abf205e4368dfbee37723e7f379f4fc04707042fd04ab3862"
Feb 13 15:04:44.556037 containerd[1487]: time="2025-02-13T15:04:44.555948151Z" level=info msg="RemoveContainer for \"2715d4111141df3abf205e4368dfbee37723e7f379f4fc04707042fd04ab3862\""
Feb 13 15:04:44.559686 containerd[1487]: time="2025-02-13T15:04:44.559642569Z" level=info msg="RemoveContainer for \"2715d4111141df3abf205e4368dfbee37723e7f379f4fc04707042fd04ab3862\" returns successfully"
Feb 13 15:04:44.559923 kubelet[2671]: I0213 15:04:44.559900 2671 scope.go:117] "RemoveContainer" containerID="4b974e59dbaa1ffb9297a18e02e930a7ef2ea3c9b2b55bcb906c380ed0b7c574"
Feb 13 15:04:44.562172 containerd[1487]: time="2025-02-13T15:04:44.562126982Z" level=info msg="RemoveContainer for \"4b974e59dbaa1ffb9297a18e02e930a7ef2ea3c9b2b55bcb906c380ed0b7c574\""
Feb 13 15:04:44.570942 containerd[1487]: time="2025-02-13T15:04:44.570889147Z" level=info msg="RemoveContainer for \"4b974e59dbaa1ffb9297a18e02e930a7ef2ea3c9b2b55bcb906c380ed0b7c574\" returns successfully"
Feb 13 15:04:44.571290 kubelet[2671]: I0213 15:04:44.571267 2671 scope.go:117] "RemoveContainer" containerID="908ae3fb47df9fad29f4953c7223bd1dc409f32ca6a41afb26ebc5644990e4f5"
Feb 13 15:04:44.573634 containerd[1487]: time="2025-02-13T15:04:44.573544081Z" level=info msg="RemoveContainer for \"908ae3fb47df9fad29f4953c7223bd1dc409f32ca6a41afb26ebc5644990e4f5\""
Feb 13 15:04:44.581161 containerd[1487]: time="2025-02-13T15:04:44.581110360Z" level=info msg="RemoveContainer for \"908ae3fb47df9fad29f4953c7223bd1dc409f32ca6a41afb26ebc5644990e4f5\" returns successfully"
Feb 13 15:04:44.581592 kubelet[2671]: I0213 15:04:44.581368 2671 scope.go:117] "RemoveContainer" containerID="43b3ffda9ff4cca8bd1c560d85a8d0a18680d1acd6452724ff6352932d57329a"
Feb 13 15:04:44.583296 containerd[1487]: time="2025-02-13T15:04:44.582949009Z" level=info msg="RemoveContainer for \"43b3ffda9ff4cca8bd1c560d85a8d0a18680d1acd6452724ff6352932d57329a\""
Feb 13 15:04:44.586782 containerd[1487]: time="2025-02-13T15:04:44.586745669Z" level=info msg="RemoveContainer for \"43b3ffda9ff4cca8bd1c560d85a8d0a18680d1acd6452724ff6352932d57329a\" returns successfully"
Feb 13 15:04:44.587248 kubelet[2671]: I0213 15:04:44.587226 2671 scope.go:117] "RemoveContainer" containerID="384301efc634d536da8b681c5bc04d81e3678a9fd28daa3c67ba7ea86a7fc4be"
Feb 13 15:04:44.589630 containerd[1487]: time="2025-02-13T15:04:44.589554283Z" level=info msg="RemoveContainer for \"384301efc634d536da8b681c5bc04d81e3678a9fd28daa3c67ba7ea86a7fc4be\""
Feb 13 15:04:44.592478 containerd[1487]: time="2025-02-13T15:04:44.592437458Z" level=info msg="RemoveContainer for \"384301efc634d536da8b681c5bc04d81e3678a9fd28daa3c67ba7ea86a7fc4be\" returns successfully"
Feb 13 15:04:44.592723 kubelet[2671]: I0213 15:04:44.592699 2671 scope.go:117] "RemoveContainer" containerID="fd9f7ac94a354cc3ea09b50046785a93982958cefcbde7711081274c7239870b"
Feb 13 15:04:44.593741 containerd[1487]: time="2025-02-13T15:04:44.593701185Z" level=info msg="RemoveContainer for \"fd9f7ac94a354cc3ea09b50046785a93982958cefcbde7711081274c7239870b\""
Feb 13 15:04:44.596296 containerd[1487]: time="2025-02-13T15:04:44.596265398Z" level=info msg="RemoveContainer for \"fd9f7ac94a354cc3ea09b50046785a93982958cefcbde7711081274c7239870b\" returns successfully"
Feb 13 15:04:44.596523 kubelet[2671]: I0213 15:04:44.596499 2671 scope.go:117] "RemoveContainer" containerID="4b974e59dbaa1ffb9297a18e02e930a7ef2ea3c9b2b55bcb906c380ed0b7c574"
Feb 13 15:04:44.596977 containerd[1487]: time="2025-02-13T15:04:44.596916961Z" level=error msg="ContainerStatus for \"4b974e59dbaa1ffb9297a18e02e930a7ef2ea3c9b2b55bcb906c380ed0b7c574\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4b974e59dbaa1ffb9297a18e02e930a7ef2ea3c9b2b55bcb906c380ed0b7c574\": not found"
Feb 13 15:04:44.597187 kubelet[2671]: E0213 15:04:44.597156 2671 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4b974e59dbaa1ffb9297a18e02e930a7ef2ea3c9b2b55bcb906c380ed0b7c574\": not found" containerID="4b974e59dbaa1ffb9297a18e02e930a7ef2ea3c9b2b55bcb906c380ed0b7c574"
Feb 13 15:04:44.597269 kubelet[2671]: I0213 15:04:44.597195 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4b974e59dbaa1ffb9297a18e02e930a7ef2ea3c9b2b55bcb906c380ed0b7c574"} err="failed to get container status \"4b974e59dbaa1ffb9297a18e02e930a7ef2ea3c9b2b55bcb906c380ed0b7c574\": rpc error: code = NotFound desc = an error occurred when try to find container \"4b974e59dbaa1ffb9297a18e02e930a7ef2ea3c9b2b55bcb906c380ed0b7c574\": not found"
Feb 13 15:04:44.597298 kubelet[2671]: I0213 15:04:44.597275 2671 scope.go:117] "RemoveContainer" containerID="908ae3fb47df9fad29f4953c7223bd1dc409f32ca6a41afb26ebc5644990e4f5"
Feb 13 15:04:44.597671 containerd[1487]: time="2025-02-13T15:04:44.597532404Z" level=error msg="ContainerStatus for \"908ae3fb47df9fad29f4953c7223bd1dc409f32ca6a41afb26ebc5644990e4f5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"908ae3fb47df9fad29f4953c7223bd1dc409f32ca6a41afb26ebc5644990e4f5\": not found"
Feb 13 15:04:44.597944 kubelet[2671]: E0213 15:04:44.597805 2671 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"908ae3fb47df9fad29f4953c7223bd1dc409f32ca6a41afb26ebc5644990e4f5\": not found" containerID="908ae3fb47df9fad29f4953c7223bd1dc409f32ca6a41afb26ebc5644990e4f5"
Feb 13 15:04:44.597944 kubelet[2671]: I0213 15:04:44.597855 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"908ae3fb47df9fad29f4953c7223bd1dc409f32ca6a41afb26ebc5644990e4f5"} err="failed to get container status \"908ae3fb47df9fad29f4953c7223bd1dc409f32ca6a41afb26ebc5644990e4f5\": rpc error: code = NotFound desc = an error occurred when try to find container \"908ae3fb47df9fad29f4953c7223bd1dc409f32ca6a41afb26ebc5644990e4f5\": not found"
Feb 13 15:04:44.597944 kubelet[2671]: I0213 15:04:44.597875 2671 scope.go:117] "RemoveContainer" containerID="43b3ffda9ff4cca8bd1c560d85a8d0a18680d1acd6452724ff6352932d57329a"
Feb 13 15:04:44.598060 containerd[1487]: time="2025-02-13T15:04:44.598013407Z" level=error msg="ContainerStatus for \"43b3ffda9ff4cca8bd1c560d85a8d0a18680d1acd6452724ff6352932d57329a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"43b3ffda9ff4cca8bd1c560d85a8d0a18680d1acd6452724ff6352932d57329a\": not found"
Feb 13 15:04:44.598252 kubelet[2671]: E0213 15:04:44.598149 2671 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"43b3ffda9ff4cca8bd1c560d85a8d0a18680d1acd6452724ff6352932d57329a\": not found" containerID="43b3ffda9ff4cca8bd1c560d85a8d0a18680d1acd6452724ff6352932d57329a"
Feb 13 15:04:44.598252 kubelet[2671]: I0213 15:04:44.598175 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"43b3ffda9ff4cca8bd1c560d85a8d0a18680d1acd6452724ff6352932d57329a"} err="failed to get container status \"43b3ffda9ff4cca8bd1c560d85a8d0a18680d1acd6452724ff6352932d57329a\": rpc error: code = NotFound desc = an error occurred when try to find container \"43b3ffda9ff4cca8bd1c560d85a8d0a18680d1acd6452724ff6352932d57329a\": not found"
Feb 13 15:04:44.598252 kubelet[2671]: I0213 15:04:44.598196 2671 scope.go:117] "RemoveContainer" containerID="384301efc634d536da8b681c5bc04d81e3678a9fd28daa3c67ba7ea86a7fc4be"
Feb 13 15:04:44.598495 containerd[1487]: time="2025-02-13T15:04:44.598402409Z" level=error msg="ContainerStatus for \"384301efc634d536da8b681c5bc04d81e3678a9fd28daa3c67ba7ea86a7fc4be\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"384301efc634d536da8b681c5bc04d81e3678a9fd28daa3c67ba7ea86a7fc4be\": not found"
Feb 13 15:04:44.598663 kubelet[2671]: E0213 15:04:44.598626 2671 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"384301efc634d536da8b681c5bc04d81e3678a9fd28daa3c67ba7ea86a7fc4be\": not found" containerID="384301efc634d536da8b681c5bc04d81e3678a9fd28daa3c67ba7ea86a7fc4be"
Feb 13 15:04:44.598698 kubelet[2671]: I0213 15:04:44.598668 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"384301efc634d536da8b681c5bc04d81e3678a9fd28daa3c67ba7ea86a7fc4be"} err="failed to get container status \"384301efc634d536da8b681c5bc04d81e3678a9fd28daa3c67ba7ea86a7fc4be\": rpc error: code = NotFound desc = an error occurred when try to find container \"384301efc634d536da8b681c5bc04d81e3678a9fd28daa3c67ba7ea86a7fc4be\": not found"
Feb 13 15:04:44.598698 kubelet[2671]: I0213 15:04:44.598685 2671 scope.go:117] "RemoveContainer" containerID="fd9f7ac94a354cc3ea09b50046785a93982958cefcbde7711081274c7239870b"
Feb 13 15:04:44.598895 containerd[1487]: time="2025-02-13T15:04:44.598854211Z" level=error msg="ContainerStatus for \"fd9f7ac94a354cc3ea09b50046785a93982958cefcbde7711081274c7239870b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fd9f7ac94a354cc3ea09b50046785a93982958cefcbde7711081274c7239870b\": not found"
Feb 13 15:04:44.599029 kubelet[2671]: E0213 15:04:44.599009 2671 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fd9f7ac94a354cc3ea09b50046785a93982958cefcbde7711081274c7239870b\": not found" containerID="fd9f7ac94a354cc3ea09b50046785a93982958cefcbde7711081274c7239870b"
Feb 13 15:04:44.599075 kubelet[2671]: I0213 15:04:44.599030 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fd9f7ac94a354cc3ea09b50046785a93982958cefcbde7711081274c7239870b"} err="failed to get container status \"fd9f7ac94a354cc3ea09b50046785a93982958cefcbde7711081274c7239870b\": rpc error: code = NotFound desc = an error occurred when try to find container \"fd9f7ac94a354cc3ea09b50046785a93982958cefcbde7711081274c7239870b\": not found"
Feb 13 15:04:45.372824 sshd[4323]: Connection closed by 10.0.0.1 port 58010
Feb 13 15:04:45.373444 sshd-session[4320]: pam_unix(sshd:session): session closed for user core
Feb 13 15:04:45.381519 systemd[1]: sshd@22-10.0.0.8:22-10.0.0.1:58010.service: Deactivated successfully.
Feb 13 15:04:45.383161 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 15:04:45.383509 systemd[1]: session-23.scope: Consumed 1.458s CPU time, 26.7M memory peak.
Feb 13 15:04:45.384568 systemd-logind[1472]: Session 23 logged out. Waiting for processes to exit.
Feb 13 15:04:45.385833 systemd[1]: Started sshd@23-10.0.0.8:22-10.0.0.1:45058.service - OpenSSH per-connection server daemon (10.0.0.1:45058).
Feb 13 15:04:45.390347 systemd-logind[1472]: Removed session 23.
Feb 13 15:04:45.418933 kubelet[2671]: E0213 15:04:45.418901 2671 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:04:45.428903 sshd[4484]: Accepted publickey for core from 10.0.0.1 port 45058 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:04:45.430186 sshd-session[4484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:04:45.433774 systemd-logind[1472]: New session 24 of user core.
Feb 13 15:04:45.448455 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 15:04:46.363393 kubelet[2671]: I0213 15:04:46.360888 2671 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="175ad923-a5cd-4d71-830c-9cb0bb41983b" path="/var/lib/kubelet/pods/175ad923-a5cd-4d71-830c-9cb0bb41983b/volumes"
Feb 13 15:04:46.363393 kubelet[2671]: I0213 15:04:46.361639 2671 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e56a6ab-64bc-4095-af2a-0373950228a4" path="/var/lib/kubelet/pods/7e56a6ab-64bc-4095-af2a-0373950228a4/volumes"
Feb 13 15:04:46.673053 sshd[4487]: Connection closed by 10.0.0.1 port 45058
Feb 13 15:04:46.673639 sshd-session[4484]: pam_unix(sshd:session): session closed for user core
Feb 13 15:04:46.683807 systemd[1]: sshd@23-10.0.0.8:22-10.0.0.1:45058.service: Deactivated successfully.
Feb 13 15:04:46.686791 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 15:04:46.687078 systemd[1]: session-24.scope: Consumed 1.139s CPU time, 24.2M memory peak.
Feb 13 15:04:46.688309 systemd-logind[1472]: Session 24 logged out. Waiting for processes to exit.
Feb 13 15:04:46.697498 kubelet[2671]: I0213 15:04:46.697161 2671 topology_manager.go:215] "Topology Admit Handler" podUID="7883813e-ed66-4781-9f47-c560a1c58b81" podNamespace="kube-system" podName="cilium-mv8mj"
Feb 13 15:04:46.697498 kubelet[2671]: E0213 15:04:46.697215 2671 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7e56a6ab-64bc-4095-af2a-0373950228a4" containerName="apply-sysctl-overwrites"
Feb 13 15:04:46.697498 kubelet[2671]: E0213 15:04:46.697223 2671 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7e56a6ab-64bc-4095-af2a-0373950228a4" containerName="clean-cilium-state"
Feb 13 15:04:46.699241 kubelet[2671]: E0213 15:04:46.698677 2671 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7e56a6ab-64bc-4095-af2a-0373950228a4" containerName="mount-cgroup"
Feb 13 15:04:46.699241 kubelet[2671]: E0213 15:04:46.698702 2671 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7e56a6ab-64bc-4095-af2a-0373950228a4" containerName="mount-bpf-fs"
Feb 13 15:04:46.699241 kubelet[2671]: E0213 15:04:46.698710 2671 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="175ad923-a5cd-4d71-830c-9cb0bb41983b" containerName="cilium-operator"
Feb 13 15:04:46.699241 kubelet[2671]: E0213 15:04:46.698716 2671 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7e56a6ab-64bc-4095-af2a-0373950228a4" containerName="cilium-agent"
Feb 13 15:04:46.698038 systemd[1]: Started sshd@24-10.0.0.8:22-10.0.0.1:45060.service - OpenSSH per-connection server daemon (10.0.0.1:45060).
Feb 13 15:04:46.701212 kubelet[2671]: I0213 15:04:46.699903 2671 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e56a6ab-64bc-4095-af2a-0373950228a4" containerName="cilium-agent"
Feb 13 15:04:46.701212 kubelet[2671]: I0213 15:04:46.699928 2671 memory_manager.go:354] "RemoveStaleState removing state" podUID="175ad923-a5cd-4d71-830c-9cb0bb41983b" containerName="cilium-operator"
Feb 13 15:04:46.705284 systemd-logind[1472]: Removed session 24.
Feb 13 15:04:46.712667 systemd[1]: Created slice kubepods-burstable-pod7883813e_ed66_4781_9f47_c560a1c58b81.slice - libcontainer container kubepods-burstable-pod7883813e_ed66_4781_9f47_c560a1c58b81.slice.
Feb 13 15:04:46.749832 sshd[4500]: Accepted publickey for core from 10.0.0.1 port 45060 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:04:46.750982 sshd-session[4500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:04:46.755102 systemd-logind[1472]: New session 25 of user core.
Feb 13 15:04:46.762466 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 15:04:46.813050 sshd[4503]: Connection closed by 10.0.0.1 port 45060
Feb 13 15:04:46.813491 sshd-session[4500]: pam_unix(sshd:session): session closed for user core
Feb 13 15:04:46.824280 systemd[1]: sshd@24-10.0.0.8:22-10.0.0.1:45060.service: Deactivated successfully.
Feb 13 15:04:46.825917 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 15:04:46.827463 systemd-logind[1472]: Session 25 logged out. Waiting for processes to exit.
Feb 13 15:04:46.832744 kubelet[2671]: I0213 15:04:46.832607 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7883813e-ed66-4781-9f47-c560a1c58b81-cni-path\") pod \"cilium-mv8mj\" (UID: \"7883813e-ed66-4781-9f47-c560a1c58b81\") " pod="kube-system/cilium-mv8mj"
Feb 13 15:04:46.832744 kubelet[2671]: I0213 15:04:46.832650 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhk8q\" (UniqueName: \"kubernetes.io/projected/7883813e-ed66-4781-9f47-c560a1c58b81-kube-api-access-nhk8q\") pod \"cilium-mv8mj\" (UID: \"7883813e-ed66-4781-9f47-c560a1c58b81\") " pod="kube-system/cilium-mv8mj"
Feb 13 15:04:46.832744 kubelet[2671]: I0213 15:04:46.832670 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7883813e-ed66-4781-9f47-c560a1c58b81-cilium-run\") pod \"cilium-mv8mj\" (UID: \"7883813e-ed66-4781-9f47-c560a1c58b81\") " pod="kube-system/cilium-mv8mj"
Feb 13 15:04:46.832744 kubelet[2671]: I0213 15:04:46.832685 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7883813e-ed66-4781-9f47-c560a1c58b81-bpf-maps\") pod \"cilium-mv8mj\" (UID: \"7883813e-ed66-4781-9f47-c560a1c58b81\") " pod="kube-system/cilium-mv8mj"
Feb 13 15:04:46.832744 kubelet[2671]: I0213 15:04:46.832701 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7883813e-ed66-4781-9f47-c560a1c58b81-cilium-config-path\") pod \"cilium-mv8mj\" (UID: \"7883813e-ed66-4781-9f47-c560a1c58b81\") " pod="kube-system/cilium-mv8mj"
Feb 13 15:04:46.832744 kubelet[2671]: I0213 15:04:46.832716 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7883813e-ed66-4781-9f47-c560a1c58b81-hubble-tls\") pod \"cilium-mv8mj\" (UID: \"7883813e-ed66-4781-9f47-c560a1c58b81\") " pod="kube-system/cilium-mv8mj"
Feb 13 15:04:46.832943 kubelet[2671]: I0213 15:04:46.832762 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7883813e-ed66-4781-9f47-c560a1c58b81-etc-cni-netd\") pod \"cilium-mv8mj\" (UID: \"7883813e-ed66-4781-9f47-c560a1c58b81\") " pod="kube-system/cilium-mv8mj"
Feb 13 15:04:46.832943 kubelet[2671]: I0213 15:04:46.832804 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7883813e-ed66-4781-9f47-c560a1c58b81-xtables-lock\") pod \"cilium-mv8mj\" (UID: \"7883813e-ed66-4781-9f47-c560a1c58b81\") " pod="kube-system/cilium-mv8mj"
Feb 13 15:04:46.832834 systemd[1]: Started sshd@25-10.0.0.8:22-10.0.0.1:45062.service - OpenSSH per-connection server daemon (10.0.0.1:45062).
Feb 13 15:04:46.833676 kubelet[2671]: I0213 15:04:46.833164 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7883813e-ed66-4781-9f47-c560a1c58b81-clustermesh-secrets\") pod \"cilium-mv8mj\" (UID: \"7883813e-ed66-4781-9f47-c560a1c58b81\") " pod="kube-system/cilium-mv8mj"
Feb 13 15:04:46.833676 kubelet[2671]: I0213 15:04:46.833212 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7883813e-ed66-4781-9f47-c560a1c58b81-cilium-cgroup\") pod \"cilium-mv8mj\" (UID: \"7883813e-ed66-4781-9f47-c560a1c58b81\") " pod="kube-system/cilium-mv8mj"
Feb 13 15:04:46.833676 kubelet[2671]: I0213 15:04:46.833237 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7883813e-ed66-4781-9f47-c560a1c58b81-host-proc-sys-kernel\") pod \"cilium-mv8mj\" (UID: \"7883813e-ed66-4781-9f47-c560a1c58b81\") " pod="kube-system/cilium-mv8mj"
Feb 13 15:04:46.833676 kubelet[2671]: I0213 15:04:46.833260 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7883813e-ed66-4781-9f47-c560a1c58b81-hostproc\") pod \"cilium-mv8mj\" (UID: \"7883813e-ed66-4781-9f47-c560a1c58b81\") " pod="kube-system/cilium-mv8mj"
Feb 13 15:04:46.833676 kubelet[2671]: I0213 15:04:46.833275 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7883813e-ed66-4781-9f47-c560a1c58b81-lib-modules\") pod \"cilium-mv8mj\" (UID: \"7883813e-ed66-4781-9f47-c560a1c58b81\") " pod="kube-system/cilium-mv8mj"
Feb 13 15:04:46.833676 kubelet[2671]: I0213 15:04:46.833291 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7883813e-ed66-4781-9f47-c560a1c58b81-cilium-ipsec-secrets\") pod \"cilium-mv8mj\" (UID: \"7883813e-ed66-4781-9f47-c560a1c58b81\") " pod="kube-system/cilium-mv8mj"
Feb 13 15:04:46.833882 kubelet[2671]: I0213 15:04:46.833304 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7883813e-ed66-4781-9f47-c560a1c58b81-host-proc-sys-net\") pod \"cilium-mv8mj\" (UID: \"7883813e-ed66-4781-9f47-c560a1c58b81\") " pod="kube-system/cilium-mv8mj"
Feb 13 15:04:46.834157 systemd-logind[1472]: Removed session 25.
Feb 13 15:04:46.869477 sshd[4509]: Accepted publickey for core from 10.0.0.1 port 45062 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:04:46.870508 sshd-session[4509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:04:46.874385 systemd-logind[1472]: New session 26 of user core.
Feb 13 15:04:46.881480 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 15:04:47.015765 containerd[1487]: time="2025-02-13T15:04:47.015642573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mv8mj,Uid:7883813e-ed66-4781-9f47-c560a1c58b81,Namespace:kube-system,Attempt:0,}"
Feb 13 15:04:47.033130 containerd[1487]: time="2025-02-13T15:04:47.033026295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:04:47.033237 containerd[1487]: time="2025-02-13T15:04:47.033083336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:04:47.033237 containerd[1487]: time="2025-02-13T15:04:47.033168536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:04:47.033381 containerd[1487]: time="2025-02-13T15:04:47.033280257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:04:47.051510 systemd[1]: Started cri-containerd-b2ae440c0a389afdf7e5cdb1b4cba9d4d648fcd5c7f25b130e0f03113432f287.scope - libcontainer container b2ae440c0a389afdf7e5cdb1b4cba9d4d648fcd5c7f25b130e0f03113432f287.
Feb 13 15:04:47.071847 containerd[1487]: time="2025-02-13T15:04:47.071808799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mv8mj,Uid:7883813e-ed66-4781-9f47-c560a1c58b81,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2ae440c0a389afdf7e5cdb1b4cba9d4d648fcd5c7f25b130e0f03113432f287\""
Feb 13 15:04:47.075404 containerd[1487]: time="2025-02-13T15:04:47.075370296Z" level=info msg="CreateContainer within sandbox \"b2ae440c0a389afdf7e5cdb1b4cba9d4d648fcd5c7f25b130e0f03113432f287\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:04:47.086373 containerd[1487]: time="2025-02-13T15:04:47.085902106Z" level=info msg="CreateContainer within sandbox \"b2ae440c0a389afdf7e5cdb1b4cba9d4d648fcd5c7f25b130e0f03113432f287\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"65b2f4de9bb2a1ef59142c5d0539c804a21c12ed3eb38938390c0357ffba97f4\""
Feb 13 15:04:47.087271 containerd[1487]: time="2025-02-13T15:04:47.086584189Z" level=info msg="StartContainer for \"65b2f4de9bb2a1ef59142c5d0539c804a21c12ed3eb38938390c0357ffba97f4\""
Feb 13 15:04:47.110539 systemd[1]: Started cri-containerd-65b2f4de9bb2a1ef59142c5d0539c804a21c12ed3eb38938390c0357ffba97f4.scope - libcontainer container 65b2f4de9bb2a1ef59142c5d0539c804a21c12ed3eb38938390c0357ffba97f4.
Feb 13 15:04:47.130658 containerd[1487]: time="2025-02-13T15:04:47.130538197Z" level=info msg="StartContainer for \"65b2f4de9bb2a1ef59142c5d0539c804a21c12ed3eb38938390c0357ffba97f4\" returns successfully"
Feb 13 15:04:47.156267 systemd[1]: cri-containerd-65b2f4de9bb2a1ef59142c5d0539c804a21c12ed3eb38938390c0357ffba97f4.scope: Deactivated successfully.
Feb 13 15:04:47.183818 containerd[1487]: time="2025-02-13T15:04:47.183737609Z" level=info msg="shim disconnected" id=65b2f4de9bb2a1ef59142c5d0539c804a21c12ed3eb38938390c0357ffba97f4 namespace=k8s.io
Feb 13 15:04:47.183818 containerd[1487]: time="2025-02-13T15:04:47.183805809Z" level=warning msg="cleaning up after shim disconnected" id=65b2f4de9bb2a1ef59142c5d0539c804a21c12ed3eb38938390c0357ffba97f4 namespace=k8s.io
Feb 13 15:04:47.183818 containerd[1487]: time="2025-02-13T15:04:47.183814409Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:04:47.574153 containerd[1487]: time="2025-02-13T15:04:47.573620495Z" level=info msg="CreateContainer within sandbox \"b2ae440c0a389afdf7e5cdb1b4cba9d4d648fcd5c7f25b130e0f03113432f287\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:04:47.585033 containerd[1487]: time="2025-02-13T15:04:47.584896268Z" level=info msg="CreateContainer within sandbox \"b2ae440c0a389afdf7e5cdb1b4cba9d4d648fcd5c7f25b130e0f03113432f287\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"aef241aa04b21f973a9f297315df04662249b1ea83cf43d966594c17aff48cd1\""
Feb 13 15:04:47.585985 containerd[1487]: time="2025-02-13T15:04:47.585802712Z" level=info msg="StartContainer for \"aef241aa04b21f973a9f297315df04662249b1ea83cf43d966594c17aff48cd1\""
Feb 13 15:04:47.614485 systemd[1]: Started cri-containerd-aef241aa04b21f973a9f297315df04662249b1ea83cf43d966594c17aff48cd1.scope - libcontainer container aef241aa04b21f973a9f297315df04662249b1ea83cf43d966594c17aff48cd1.
Feb 13 15:04:47.635029 containerd[1487]: time="2025-02-13T15:04:47.634926865Z" level=info msg="StartContainer for \"aef241aa04b21f973a9f297315df04662249b1ea83cf43d966594c17aff48cd1\" returns successfully"
Feb 13 15:04:47.643081 systemd[1]: cri-containerd-aef241aa04b21f973a9f297315df04662249b1ea83cf43d966594c17aff48cd1.scope: Deactivated successfully.
Feb 13 15:04:47.661446 containerd[1487]: time="2025-02-13T15:04:47.661387310Z" level=info msg="shim disconnected" id=aef241aa04b21f973a9f297315df04662249b1ea83cf43d966594c17aff48cd1 namespace=k8s.io
Feb 13 15:04:47.661574 containerd[1487]: time="2025-02-13T15:04:47.661501031Z" level=warning msg="cleaning up after shim disconnected" id=aef241aa04b21f973a9f297315df04662249b1ea83cf43d966594c17aff48cd1 namespace=k8s.io
Feb 13 15:04:47.661574 containerd[1487]: time="2025-02-13T15:04:47.661527671Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:04:47.671340 containerd[1487]: time="2025-02-13T15:04:47.670308832Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:04:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 15:04:48.577289 containerd[1487]: time="2025-02-13T15:04:48.577246613Z" level=info msg="CreateContainer within sandbox \"b2ae440c0a389afdf7e5cdb1b4cba9d4d648fcd5c7f25b130e0f03113432f287\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:04:48.589885 containerd[1487]: time="2025-02-13T15:04:48.589823311Z" level=info msg="CreateContainer within sandbox \"b2ae440c0a389afdf7e5cdb1b4cba9d4d648fcd5c7f25b130e0f03113432f287\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ff38be23c1e8ae9d6901c6ce24e4c45f61e9224c95f764cef45a27bf74a6e683\""
Feb 13 15:04:48.590739 containerd[1487]: time="2025-02-13T15:04:48.590555834Z" level=info msg="StartContainer for \"ff38be23c1e8ae9d6901c6ce24e4c45f61e9224c95f764cef45a27bf74a6e683\""
Feb 13 15:04:48.617474 systemd[1]: Started cri-containerd-ff38be23c1e8ae9d6901c6ce24e4c45f61e9224c95f764cef45a27bf74a6e683.scope - libcontainer container ff38be23c1e8ae9d6901c6ce24e4c45f61e9224c95f764cef45a27bf74a6e683.
Feb 13 15:04:48.639808 systemd[1]: cri-containerd-ff38be23c1e8ae9d6901c6ce24e4c45f61e9224c95f764cef45a27bf74a6e683.scope: Deactivated successfully.
Feb 13 15:04:48.640645 containerd[1487]: time="2025-02-13T15:04:48.640593305Z" level=info msg="StartContainer for \"ff38be23c1e8ae9d6901c6ce24e4c45f61e9224c95f764cef45a27bf74a6e683\" returns successfully"
Feb 13 15:04:48.659920 containerd[1487]: time="2025-02-13T15:04:48.659872754Z" level=info msg="shim disconnected" id=ff38be23c1e8ae9d6901c6ce24e4c45f61e9224c95f764cef45a27bf74a6e683 namespace=k8s.io
Feb 13 15:04:48.659920 containerd[1487]: time="2025-02-13T15:04:48.659917514Z" level=warning msg="cleaning up after shim disconnected" id=ff38be23c1e8ae9d6901c6ce24e4c45f61e9224c95f764cef45a27bf74a6e683 namespace=k8s.io
Feb 13 15:04:48.660073 containerd[1487]: time="2025-02-13T15:04:48.659925034Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:04:48.938069 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff38be23c1e8ae9d6901c6ce24e4c45f61e9224c95f764cef45a27bf74a6e683-rootfs.mount: Deactivated successfully.
Feb 13 15:04:49.580946 containerd[1487]: time="2025-02-13T15:04:49.580890206Z" level=info msg="CreateContainer within sandbox \"b2ae440c0a389afdf7e5cdb1b4cba9d4d648fcd5c7f25b130e0f03113432f287\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:04:49.596123 containerd[1487]: time="2025-02-13T15:04:49.596071714Z" level=info msg="CreateContainer within sandbox \"b2ae440c0a389afdf7e5cdb1b4cba9d4d648fcd5c7f25b130e0f03113432f287\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"82f262002939c0916d27608ffb3d4f5be3fe19538707cfa2d16c8ddb0abd6e8d\""
Feb 13 15:04:49.597124 containerd[1487]: time="2025-02-13T15:04:49.597063439Z" level=info msg="StartContainer for \"82f262002939c0916d27608ffb3d4f5be3fe19538707cfa2d16c8ddb0abd6e8d\""
Feb 13 15:04:49.622500 systemd[1]: Started cri-containerd-82f262002939c0916d27608ffb3d4f5be3fe19538707cfa2d16c8ddb0abd6e8d.scope - libcontainer container 82f262002939c0916d27608ffb3d4f5be3fe19538707cfa2d16c8ddb0abd6e8d.
Feb 13 15:04:49.642031 systemd[1]: cri-containerd-82f262002939c0916d27608ffb3d4f5be3fe19538707cfa2d16c8ddb0abd6e8d.scope: Deactivated successfully.
Feb 13 15:04:49.644696 containerd[1487]: time="2025-02-13T15:04:49.644595212Z" level=info msg="StartContainer for \"82f262002939c0916d27608ffb3d4f5be3fe19538707cfa2d16c8ddb0abd6e8d\" returns successfully"
Feb 13 15:04:49.665407 containerd[1487]: time="2025-02-13T15:04:49.665347825Z" level=info msg="shim disconnected" id=82f262002939c0916d27608ffb3d4f5be3fe19538707cfa2d16c8ddb0abd6e8d namespace=k8s.io
Feb 13 15:04:49.665407 containerd[1487]: time="2025-02-13T15:04:49.665401465Z" level=warning msg="cleaning up after shim disconnected" id=82f262002939c0916d27608ffb3d4f5be3fe19538707cfa2d16c8ddb0abd6e8d namespace=k8s.io
Feb 13 15:04:49.665407 containerd[1487]: time="2025-02-13T15:04:49.665410425Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:04:49.938193 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82f262002939c0916d27608ffb3d4f5be3fe19538707cfa2d16c8ddb0abd6e8d-rootfs.mount: Deactivated successfully.
Feb 13 15:04:50.420086 kubelet[2671]: E0213 15:04:50.420039 2671 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:04:50.584509 containerd[1487]: time="2025-02-13T15:04:50.584464277Z" level=info msg="CreateContainer within sandbox \"b2ae440c0a389afdf7e5cdb1b4cba9d4d648fcd5c7f25b130e0f03113432f287\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:04:50.608182 containerd[1487]: time="2025-02-13T15:04:50.608104140Z" level=info msg="CreateContainer within sandbox \"b2ae440c0a389afdf7e5cdb1b4cba9d4d648fcd5c7f25b130e0f03113432f287\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f36864c01d76c5987fc316dca77b4dd703774508b299d75170a274ce77482f14\""
Feb 13 15:04:50.608799 containerd[1487]: time="2025-02-13T15:04:50.608758423Z" level=info msg="StartContainer for \"f36864c01d76c5987fc316dca77b4dd703774508b299d75170a274ce77482f14\""
Feb 13 15:04:50.634506 systemd[1]: Started cri-containerd-f36864c01d76c5987fc316dca77b4dd703774508b299d75170a274ce77482f14.scope - libcontainer container f36864c01d76c5987fc316dca77b4dd703774508b299d75170a274ce77482f14.
Feb 13 15:04:50.657860 containerd[1487]: time="2025-02-13T15:04:50.657795477Z" level=info msg="StartContainer for \"f36864c01d76c5987fc316dca77b4dd703774508b299d75170a274ce77482f14\" returns successfully"
Feb 13 15:04:50.927356 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 15:04:50.938188 systemd[1]: run-containerd-runc-k8s.io-f36864c01d76c5987fc316dca77b4dd703774508b299d75170a274ce77482f14-runc.eL7JSB.mount: Deactivated successfully.
Feb 13 15:04:51.597808 kubelet[2671]: I0213 15:04:51.597372 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mv8mj" podStartSLOduration=5.5973548300000004 podStartE2EDuration="5.59735483s" podCreationTimestamp="2025-02-13 15:04:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:04:51.596806068 +0000 UTC m=+81.319495484" watchObservedRunningTime="2025-02-13 15:04:51.59735483 +0000 UTC m=+81.320044246"
Feb 13 15:04:51.784916 kubelet[2671]: I0213 15:04:51.784870 2671 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:04:51Z","lastTransitionTime":"2025-02-13T15:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 15:04:53.659886 systemd-networkd[1411]: lxc_health: Link UP
Feb 13 15:04:53.664497 systemd-networkd[1411]: lxc_health: Gained carrier
Feb 13 15:04:55.298442 systemd[1]: run-containerd-runc-k8s.io-f36864c01d76c5987fc316dca77b4dd703774508b299d75170a274ce77482f14-runc.kgxWsH.mount: Deactivated successfully.
Feb 13 15:04:55.625468 systemd-networkd[1411]: lxc_health: Gained IPv6LL
Feb 13 15:04:59.569012 sshd[4512]: Connection closed by 10.0.0.1 port 45062
Feb 13 15:04:59.569503 sshd-session[4509]: pam_unix(sshd:session): session closed for user core
Feb 13 15:04:59.573190 systemd[1]: sshd@25-10.0.0.8:22-10.0.0.1:45062.service: Deactivated successfully.
Feb 13 15:04:59.574875 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 15:04:59.575914 systemd-logind[1472]: Session 26 logged out. Waiting for processes to exit.
Feb 13 15:04:59.576909 systemd-logind[1472]: Removed session 26.