Jan 29 11:47:26.905142 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 11:47:26.905163 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025
Jan 29 11:47:26.905173 kernel: KASLR enabled
Jan 29 11:47:26.905179 kernel: efi: EFI v2.7 by EDK II
Jan 29 11:47:26.905185 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jan 29 11:47:26.905190 kernel: random: crng init done
Jan 29 11:47:26.905197 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:47:26.905203 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jan 29 11:47:26.905210 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 29 11:47:26.905217 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:47:26.905223 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:47:26.905229 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:47:26.905235 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:47:26.905242 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:47:26.905249 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:47:26.905257 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:47:26.905263 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:47:26.905269 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:47:26.905276 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 29 11:47:26.905282 kernel: NUMA: Failed to initialise from firmware
Jan 29 11:47:26.905288 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:47:26.905295 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 29 11:47:26.905301 kernel: Zone ranges:
Jan 29 11:47:26.905308 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:47:26.905314 kernel: DMA32 empty
Jan 29 11:47:26.905321 kernel: Normal empty
Jan 29 11:47:26.905327 kernel: Movable zone start for each node
Jan 29 11:47:26.905334 kernel: Early memory node ranges
Jan 29 11:47:26.905340 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 29 11:47:26.905347 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 29 11:47:26.905353 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 29 11:47:26.905359 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 29 11:47:26.905365 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 29 11:47:26.905372 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 29 11:47:26.905378 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 29 11:47:26.905384 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:47:26.905391 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 29 11:47:26.905398 kernel: psci: probing for conduit method from ACPI.
Jan 29 11:47:26.905434 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 11:47:26.905442 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 11:47:26.905452 kernel: psci: Trusted OS migration not required
Jan 29 11:47:26.905459 kernel: psci: SMC Calling Convention v1.1
Jan 29 11:47:26.905466 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 29 11:47:26.905474 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 11:47:26.905481 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 11:47:26.905488 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 29 11:47:26.905494 kernel: Detected PIPT I-cache on CPU0
Jan 29 11:47:26.905501 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 11:47:26.905508 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 11:47:26.905515 kernel: CPU features: detected: Spectre-v4
Jan 29 11:47:26.905522 kernel: CPU features: detected: Spectre-BHB
Jan 29 11:47:26.905528 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 11:47:26.905535 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 11:47:26.905543 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 11:47:26.905550 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 11:47:26.905557 kernel: alternatives: applying boot alternatives
Jan 29 11:47:26.905564 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 29 11:47:26.905572 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:47:26.905579 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:47:26.905585 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:47:26.905592 kernel: Fallback order for Node 0: 0
Jan 29 11:47:26.905599 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 29 11:47:26.905606 kernel: Policy zone: DMA
Jan 29 11:47:26.905612 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:47:26.905620 kernel: software IO TLB: area num 4.
Jan 29 11:47:26.905627 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 29 11:47:26.905634 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved)
Jan 29 11:47:26.905641 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 11:47:26.905648 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:47:26.905655 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:47:26.905662 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 11:47:26.905669 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:47:26.905676 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:47:26.905683 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:47:26.905690 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 11:47:26.905696 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 11:47:26.905704 kernel: GICv3: 256 SPIs implemented
Jan 29 11:47:26.905711 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 11:47:26.905718 kernel: Root IRQ handler: gic_handle_irq
Jan 29 11:47:26.905725 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 11:47:26.905731 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 29 11:47:26.905738 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 29 11:47:26.905745 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 29 11:47:26.905752 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 29 11:47:26.905759 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 29 11:47:26.905765 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 29 11:47:26.905772 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:47:26.905780 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:47:26.905787 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 11:47:26.905794 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 11:47:26.905801 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 11:47:26.905807 kernel: arm-pv: using stolen time PV
Jan 29 11:47:26.905814 kernel: Console: colour dummy device 80x25
Jan 29 11:47:26.905821 kernel: ACPI: Core revision 20230628
Jan 29 11:47:26.905828 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 11:47:26.905835 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:47:26.905842 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:47:26.905850 kernel: landlock: Up and running.
Jan 29 11:47:26.905857 kernel: SELinux: Initializing.
Jan 29 11:47:26.905864 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:47:26.905872 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:47:26.905879 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:47:26.905886 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:47:26.905892 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:47:26.905899 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:47:26.905906 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 29 11:47:26.905914 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 29 11:47:26.905921 kernel: Remapping and enabling EFI services.
Jan 29 11:47:26.905928 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:47:26.905935 kernel: Detected PIPT I-cache on CPU1
Jan 29 11:47:26.905942 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 29 11:47:26.905949 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 29 11:47:26.905956 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:47:26.905963 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 11:47:26.905970 kernel: Detected PIPT I-cache on CPU2
Jan 29 11:47:26.905977 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 29 11:47:26.905985 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 29 11:47:26.905992 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:47:26.906003 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 29 11:47:26.906011 kernel: Detected PIPT I-cache on CPU3
Jan 29 11:47:26.906019 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 29 11:47:26.906026 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 29 11:47:26.906033 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:47:26.906040 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 29 11:47:26.906048 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 11:47:26.906056 kernel: SMP: Total of 4 processors activated.
Jan 29 11:47:26.906064 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 11:47:26.906071 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 11:47:26.906078 kernel: CPU features: detected: Common not Private translations
Jan 29 11:47:26.906086 kernel: CPU features: detected: CRC32 instructions
Jan 29 11:47:26.906093 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 29 11:47:26.906100 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 11:47:26.906108 kernel: CPU features: detected: LSE atomic instructions
Jan 29 11:47:26.906116 kernel: CPU features: detected: Privileged Access Never
Jan 29 11:47:26.906123 kernel: CPU features: detected: RAS Extension Support
Jan 29 11:47:26.906131 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 29 11:47:26.906138 kernel: CPU: All CPU(s) started at EL1
Jan 29 11:47:26.906145 kernel: alternatives: applying system-wide alternatives
Jan 29 11:47:26.906153 kernel: devtmpfs: initialized
Jan 29 11:47:26.906160 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:47:26.906167 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 11:47:26.906175 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:47:26.906183 kernel: SMBIOS 3.0.0 present.
Jan 29 11:47:26.906191 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jan 29 11:47:26.906198 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:47:26.906205 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 11:47:26.906213 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 11:47:26.906220 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 11:47:26.906227 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:47:26.906234 kernel: audit: type=2000 audit(0.022:1): state=initialized audit_enabled=0 res=1
Jan 29 11:47:26.906242 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:47:26.906250 kernel: cpuidle: using governor menu
Jan 29 11:47:26.906257 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 11:47:26.906264 kernel: ASID allocator initialised with 32768 entries
Jan 29 11:47:26.906272 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:47:26.906279 kernel: Serial: AMBA PL011 UART driver
Jan 29 11:47:26.906286 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 11:47:26.906294 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 11:47:26.906301 kernel: Modules: 509040 pages in range for PLT usage
Jan 29 11:47:26.906308 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:47:26.906317 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:47:26.906324 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 11:47:26.906331 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 11:47:26.906339 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:47:26.906346 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:47:26.906353 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 11:47:26.906360 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 11:47:26.906367 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:47:26.906375 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:47:26.906383 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:47:26.906390 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:47:26.906397 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:47:26.906416 kernel: ACPI: Interpreter enabled
Jan 29 11:47:26.906424 kernel: ACPI: Using GIC for interrupt routing
Jan 29 11:47:26.906431 kernel: ACPI: MCFG table detected, 1 entries
Jan 29 11:47:26.906448 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 11:47:26.906456 kernel: printk: console [ttyAMA0] enabled
Jan 29 11:47:26.906464 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:47:26.906615 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:47:26.906697 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 11:47:26.906764 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 11:47:26.906827 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 29 11:47:26.906893 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 29 11:47:26.906902 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 29 11:47:26.906910 kernel: PCI host bridge to bus 0000:00
Jan 29 11:47:26.906981 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 29 11:47:26.907041 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 29 11:47:26.907100 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 29 11:47:26.907158 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:47:26.907236 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 29 11:47:26.907310 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 11:47:26.907379 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 29 11:47:26.907529 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 29 11:47:26.907616 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:47:26.907718 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:47:26.907786 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 29 11:47:26.907873 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 29 11:47:26.907937 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 29 11:47:26.907997 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 29 11:47:26.908062 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 29 11:47:26.908072 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 29 11:47:26.908080 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 29 11:47:26.908087 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 29 11:47:26.908095 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 29 11:47:26.908102 kernel: iommu: Default domain type: Translated
Jan 29 11:47:26.908109 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 11:47:26.908117 kernel: efivars: Registered efivars operations
Jan 29 11:47:26.908126 kernel: vgaarb: loaded
Jan 29 11:47:26.908134 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 11:47:26.908141 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:47:26.908149 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:47:26.908156 kernel: pnp: PnP ACPI init
Jan 29 11:47:26.908228 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 29 11:47:26.908239 kernel: pnp: PnP ACPI: found 1 devices
Jan 29 11:47:26.908247 kernel: NET: Registered PF_INET protocol family
Jan 29 11:47:26.908257 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:47:26.908265 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:47:26.908272 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:47:26.908280 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:47:26.908287 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:47:26.908295 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:47:26.908302 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:47:26.908310 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:47:26.908317 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:47:26.908326 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:47:26.908334 kernel: kvm [1]: HYP mode not available
Jan 29 11:47:26.908341 kernel: Initialise system trusted keyrings
Jan 29 11:47:26.908348 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:47:26.908356 kernel: Key type asymmetric registered
Jan 29 11:47:26.908363 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:47:26.908370 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 11:47:26.908378 kernel: io scheduler mq-deadline registered
Jan 29 11:47:26.908385 kernel: io scheduler kyber registered
Jan 29 11:47:26.908394 kernel: io scheduler bfq registered
Jan 29 11:47:26.908419 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 29 11:47:26.908427 kernel: ACPI: button: Power Button [PWRB]
Jan 29 11:47:26.908435 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 29 11:47:26.908513 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 29 11:47:26.908524 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:47:26.908531 kernel: thunder_xcv, ver 1.0
Jan 29 11:47:26.908539 kernel: thunder_bgx, ver 1.0
Jan 29 11:47:26.908547 kernel: nicpf, ver 1.0
Jan 29 11:47:26.908557 kernel: nicvf, ver 1.0
Jan 29 11:47:26.908640 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 29 11:47:26.908709 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T11:47:26 UTC (1738151246)
Jan 29 11:47:26.908720 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 11:47:26.908727 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 29 11:47:26.908735 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 29 11:47:26.908743 kernel: watchdog: Hard watchdog permanently disabled
Jan 29 11:47:26.908750 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:47:26.908759 kernel: Segment Routing with IPv6
Jan 29 11:47:26.908767 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:47:26.908774 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:47:26.908782 kernel: Key type dns_resolver registered
Jan 29 11:47:26.908789 kernel: registered taskstats version 1
Jan 29 11:47:26.908797 kernel: Loading compiled-in X.509 certificates
Jan 29 11:47:26.908804 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415'
Jan 29 11:47:26.908812 kernel: Key type .fscrypt registered
Jan 29 11:47:26.908819 kernel: Key type fscrypt-provisioning registered
Jan 29 11:47:26.908828 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:47:26.908835 kernel: ima: Allocated hash algorithm: sha1
Jan 29 11:47:26.908843 kernel: ima: No architecture policies found
Jan 29 11:47:26.908850 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 29 11:47:26.908858 kernel: clk: Disabling unused clocks
Jan 29 11:47:26.908865 kernel: Freeing unused kernel memory: 39360K
Jan 29 11:47:26.908873 kernel: Run /init as init process
Jan 29 11:47:26.908880 kernel: with arguments:
Jan 29 11:47:26.908887 kernel: /init
Jan 29 11:47:26.908896 kernel: with environment:
Jan 29 11:47:26.908903 kernel: HOME=/
Jan 29 11:47:26.908911 kernel: TERM=linux
Jan 29 11:47:26.908918 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 11:47:26.908927 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:47:26.908937 systemd[1]: Detected virtualization kvm.
Jan 29 11:47:26.908945 systemd[1]: Detected architecture arm64.
Jan 29 11:47:26.908953 systemd[1]: Running in initrd.
Jan 29 11:47:26.908962 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:47:26.908970 systemd[1]: Hostname set to .
Jan 29 11:47:26.908978 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:47:26.908986 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:47:26.908994 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:47:26.909003 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:47:26.909014 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:47:26.909026 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:47:26.909036 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:47:26.909044 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:47:26.909054 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:47:26.909062 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:47:26.909070 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:47:26.909078 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:47:26.909088 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:47:26.909096 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:47:26.909104 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:47:26.909112 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:47:26.909120 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:47:26.909128 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:47:26.909137 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:47:26.909145 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:47:26.909153 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:47:26.909163 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:47:26.909171 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:47:26.909179 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:47:26.909187 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 11:47:26.909195 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:47:26.909203 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 11:47:26.909212 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 11:47:26.909219 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:47:26.909227 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:47:26.909237 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:47:26.909245 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 11:47:26.909253 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:47:26.909261 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 11:47:26.909270 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:47:26.909279 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:47:26.909303 systemd-journald[236]: Collecting audit messages is disabled.
Jan 29 11:47:26.909322 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:47:26.909332 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:47:26.909341 systemd-journald[236]: Journal started
Jan 29 11:47:26.909360 systemd-journald[236]: Runtime Journal (/run/log/journal/093078b7ed64415db7b3968c129ba067) is 5.9M, max 47.3M, 41.4M free.
Jan 29 11:47:26.892277 systemd-modules-load[238]: Inserted module 'overlay'
Jan 29 11:47:26.913449 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:47:26.913482 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:47:26.914470 kernel: Bridge firewalling registered
Jan 29 11:47:26.914499 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:47:26.914645 systemd-modules-load[238]: Inserted module 'br_netfilter'
Jan 29 11:47:26.916967 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:47:26.920161 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:47:26.922030 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:47:26.923055 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:47:26.930616 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:47:26.937584 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 11:47:26.938501 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:47:26.939839 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:47:26.942636 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:47:26.946952 dracut-cmdline[273]: dracut-dracut-053
Jan 29 11:47:26.949351 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 29 11:47:26.970335 systemd-resolved[280]: Positive Trust Anchors:
Jan 29 11:47:26.970359 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:47:26.970390 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:47:26.975073 systemd-resolved[280]: Defaulting to hostname 'linux'.
Jan 29 11:47:26.976550 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:47:26.977377 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:47:27.014438 kernel: SCSI subsystem initialized
Jan 29 11:47:27.018433 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 11:47:27.025432 kernel: iscsi: registered transport (tcp)
Jan 29 11:47:27.041431 kernel: iscsi: registered transport (qla4xxx)
Jan 29 11:47:27.041450 kernel: QLogic iSCSI HBA Driver
Jan 29 11:47:27.093264 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:47:27.101584 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 11:47:27.119001 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:47:27.119053 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:47:27.119821 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 11:47:27.165440 kernel: raid6: neonx8 gen() 15788 MB/s
Jan 29 11:47:27.182423 kernel: raid6: neonx4 gen() 15641 MB/s
Jan 29 11:47:27.199430 kernel: raid6: neonx2 gen() 13250 MB/s
Jan 29 11:47:27.216432 kernel: raid6: neonx1 gen() 10475 MB/s
Jan 29 11:47:27.233430 kernel: raid6: int64x8 gen() 6955 MB/s
Jan 29 11:47:27.250428 kernel: raid6: int64x4 gen() 7334 MB/s
Jan 29 11:47:27.267440 kernel: raid6: int64x2 gen() 6115 MB/s
Jan 29 11:47:27.284430 kernel: raid6: int64x1 gen() 5037 MB/s
Jan 29 11:47:27.284455 kernel: raid6: using algorithm neonx8 gen() 15788 MB/s
Jan 29 11:47:27.301433 kernel: raid6: .... xor() 11893 MB/s, rmw enabled
Jan 29 11:47:27.301453 kernel: raid6: using neon recovery algorithm
Jan 29 11:47:27.306654 kernel: xor: measuring software checksum speed
Jan 29 11:47:27.306673 kernel: 8regs : 19778 MB/sec
Jan 29 11:47:27.307704 kernel: 32regs : 19641 MB/sec
Jan 29 11:47:27.307730 kernel: arm64_neon : 27087 MB/sec
Jan 29 11:47:27.307748 kernel: xor: using function: arm64_neon (27087 MB/sec)
Jan 29 11:47:27.359441 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 11:47:27.370349 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:47:27.381596 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:47:27.393642 systemd-udevd[458]: Using default interface naming scheme 'v255'.
Jan 29 11:47:27.397033 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:47:27.402929 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 11:47:27.417155 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
Jan 29 11:47:27.443003 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:47:27.449559 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:47:27.489041 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:47:27.498551 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 11:47:27.511465 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:47:27.512633 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:47:27.514144 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:47:27.515704 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:47:27.523762 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 11:47:27.533961 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:47:27.538231 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 29 11:47:27.544988 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 11:47:27.545089 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 11:47:27.545100 kernel: GPT:9289727 != 19775487
Jan 29 11:47:27.545109 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 11:47:27.545118 kernel: GPT:9289727 != 19775487
Jan 29 11:47:27.545130 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 11:47:27.545139 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:47:27.544018 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:47:27.544127 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:47:27.545926 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:47:27.547675 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:47:27.547824 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:47:27.549433 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:47:27.557851 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:47:27.569437 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (504)
Jan 29 11:47:27.572675 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (505)
Jan 29 11:47:27.571901 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 11:47:27.573080 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:47:27.583576 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 11:47:27.587824 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:47:27.591319 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 11:47:27.592238 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 11:47:27.600640 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 11:47:27.602576 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:47:27.606220 disk-uuid[549]: Primary Header is updated.
Jan 29 11:47:27.606220 disk-uuid[549]: Secondary Entries is updated.
Jan 29 11:47:27.606220 disk-uuid[549]: Secondary Header is updated.
Jan 29 11:47:27.609430 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:47:27.628060 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:47:28.649441 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:47:28.650487 disk-uuid[550]: The operation has completed successfully.
Jan 29 11:47:28.673055 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 11:47:28.673154 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 11:47:28.689663 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 11:47:28.692533 sh[574]: Success
Jan 29 11:47:28.707430 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 29 11:47:28.745943 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 11:47:28.747522 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 11:47:28.748258 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 11:47:28.758853 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08
Jan 29 11:47:28.758904 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:47:28.758925 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 11:47:28.759630 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 11:47:28.760655 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 11:47:28.763768 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 11:47:28.764862 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 11:47:28.765610 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 11:47:28.768144 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 11:47:28.778069 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 11:47:28.778107 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:47:28.778123 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:47:28.779925 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:47:28.786840 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 11:47:28.788238 kernel: BTRFS info (device vda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 11:47:28.852498 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 11:47:28.853439 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:47:28.865665 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 11:47:28.867801 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:47:28.902694 systemd-networkd[756]: lo: Link UP
Jan 29 11:47:28.902705 systemd-networkd[756]: lo: Gained carrier
Jan 29 11:47:28.903427 systemd-networkd[756]: Enumeration completed
Jan 29 11:47:28.903745 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:47:28.903959 systemd-networkd[756]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:47:28.903962 systemd-networkd[756]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:47:28.904750 systemd-networkd[756]: eth0: Link UP
Jan 29 11:47:28.904753 systemd-networkd[756]: eth0: Gained carrier
Jan 29 11:47:28.904760 systemd-networkd[756]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:47:28.906885 systemd[1]: Reached target network.target - Network.
Jan 29 11:47:28.927473 systemd-networkd[756]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:47:28.969882 ignition[755]: Ignition 2.19.0
Jan 29 11:47:28.969891 ignition[755]: Stage: fetch-offline
Jan 29 11:47:28.969927 ignition[755]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:47:28.969936 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:47:28.970087 ignition[755]: parsed url from cmdline: ""
Jan 29 11:47:28.970090 ignition[755]: no config URL provided
Jan 29 11:47:28.970094 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:47:28.970101 ignition[755]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:47:28.970124 ignition[755]: op(1): [started] loading QEMU firmware config module
Jan 29 11:47:28.970128 ignition[755]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 11:47:28.975482 ignition[755]: op(1): [finished] loading QEMU firmware config module
Jan 29 11:47:29.011602 ignition[755]: parsing config with SHA512: ecc57c512f4528188f3f40dfee85c5d618d8cfd4fdcf25a4abaa294d70763e961e9bcdac2cdaa74a41ce2f122ce27911c40a0a1e371e6547a30cfa58cd8b75eb
Jan 29 11:47:29.017218 unknown[755]: fetched base config from "system"
Jan 29 11:47:29.017229 unknown[755]: fetched user config from "qemu"
Jan 29 11:47:29.017683 ignition[755]: fetch-offline: fetch-offline passed
Jan 29 11:47:29.017751 ignition[755]: Ignition finished successfully
Jan 29 11:47:29.020295 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:47:29.021606 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 11:47:29.027542 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 11:47:29.037597 ignition[771]: Ignition 2.19.0
Jan 29 11:47:29.037606 ignition[771]: Stage: kargs
Jan 29 11:47:29.037755 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:47:29.037765 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:47:29.038629 ignition[771]: kargs: kargs passed
Jan 29 11:47:29.038673 ignition[771]: Ignition finished successfully
Jan 29 11:47:29.040512 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 11:47:29.051567 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 11:47:29.061158 ignition[779]: Ignition 2.19.0
Jan 29 11:47:29.061170 ignition[779]: Stage: disks
Jan 29 11:47:29.061347 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:47:29.061357 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:47:29.062262 ignition[779]: disks: disks passed
Jan 29 11:47:29.064351 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 11:47:29.062306 ignition[779]: Ignition finished successfully
Jan 29 11:47:29.066273 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 11:47:29.068423 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:47:29.069240 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:47:29.070738 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:47:29.072023 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:47:29.083534 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 11:47:29.093479 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 11:47:29.116955 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 11:47:29.131541 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 11:47:29.170441 kernel: EXT4-fs (vda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none.
Jan 29 11:47:29.170519 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 11:47:29.171756 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:47:29.185493 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:47:29.187116 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 11:47:29.188279 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 11:47:29.188389 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 11:47:29.188460 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:47:29.196833 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (797)
Jan 29 11:47:29.196853 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 11:47:29.196863 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:47:29.196873 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:47:29.192438 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 11:47:29.195298 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 11:47:29.201440 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:47:29.202236 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:47:29.241334 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 11:47:29.245360 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
Jan 29 11:47:29.249050 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 11:47:29.251880 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 11:47:29.322204 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 11:47:29.335493 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 11:47:29.336848 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 11:47:29.341439 kernel: BTRFS info (device vda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 11:47:29.358322 ignition[911]: INFO : Ignition 2.19.0
Jan 29 11:47:29.358322 ignition[911]: INFO : Stage: mount
Jan 29 11:47:29.359557 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:47:29.359557 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:47:29.359557 ignition[911]: INFO : mount: mount passed
Jan 29 11:47:29.359557 ignition[911]: INFO : Ignition finished successfully
Jan 29 11:47:29.360763 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 11:47:29.363901 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 11:47:29.375506 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 11:47:29.757875 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 11:47:29.772590 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:47:29.777436 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (924)
Jan 29 11:47:29.779616 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 11:47:29.779646 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:47:29.779665 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:47:29.781428 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:47:29.782572 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:47:29.797869 ignition[941]: INFO : Ignition 2.19.0
Jan 29 11:47:29.797869 ignition[941]: INFO : Stage: files
Jan 29 11:47:29.799167 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:47:29.799167 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:47:29.799167 ignition[941]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 11:47:29.801778 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 11:47:29.801778 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 11:47:29.801778 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 11:47:29.801778 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 11:47:29.805648 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 11:47:29.805648 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 11:47:29.805648 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 29 11:47:29.802006 unknown[941]: wrote ssh authorized keys file for user: core
Jan 29 11:47:29.850797 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 11:47:29.969440 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 11:47:29.970844 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 11:47:29.970844 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 29 11:47:30.291071 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 29 11:47:30.342162 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 11:47:30.342162 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 11:47:30.344775 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 11:47:30.344775 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:47:30.344775 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:47:30.344775 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:47:30.344775 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:47:30.344775 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:47:30.344775 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:47:30.344775 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:47:30.344775 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:47:30.344775 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 29 11:47:30.344775 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 29 11:47:30.344775 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 29 11:47:30.344775 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Jan 29 11:47:30.611076 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 29 11:47:30.764505 systemd-networkd[756]: eth0: Gained IPv6LL
Jan 29 11:47:30.835118 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 29 11:47:30.835118 ignition[941]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 29 11:47:30.838813 ignition[941]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:47:30.838813 ignition[941]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:47:30.838813 ignition[941]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 29 11:47:30.838813 ignition[941]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 29 11:47:30.838813 ignition[941]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:47:30.838813 ignition[941]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:47:30.838813 ignition[941]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 29 11:47:30.838813 ignition[941]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:47:30.858424 ignition[941]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:47:30.861811 ignition[941]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:47:30.862887 ignition[941]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:47:30.862887 ignition[941]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 11:47:30.862887 ignition[941]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 11:47:30.862887 ignition[941]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:47:30.862887 ignition[941]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:47:30.862887 ignition[941]: INFO : files: files passed
Jan 29 11:47:30.862887 ignition[941]: INFO : Ignition finished successfully
Jan 29 11:47:30.863644 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 11:47:30.877926 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 11:47:30.879891 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 11:47:30.882515 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 11:47:30.882605 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 11:47:30.886733 initrd-setup-root-after-ignition[969]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 29 11:47:30.888649 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:47:30.888649 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:47:30.890884 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:47:30.894034 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:47:30.895165 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 11:47:30.901518 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 11:47:30.918703 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 11:47:30.918811 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:47:30.920263 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:47:30.922012 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:47:30.924069 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:47:30.924937 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:47:30.941241 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:47:30.949646 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:47:30.956951 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:47:30.957919 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:47:30.959643 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:47:30.961167 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:47:30.961291 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:47:30.963568 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:47:30.965313 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:47:30.966784 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:47:30.968275 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:47:30.970000 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:47:30.971722 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 11:47:30.973314 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:47:30.975038 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:47:30.976786 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:47:30.978282 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:47:30.979663 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 11:47:30.979773 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:47:30.981912 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:47:30.983607 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:47:30.985282 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:47:30.988464 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:47:30.989396 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:47:30.989557 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:47:30.992120 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:47:30.992225 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:47:30.994072 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:47:30.995535 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:47:30.995627 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:47:30.997390 systemd[1]: Stopped target slices.target - Slice Units. 
Jan 29 11:47:30.998750 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:47:31.000255 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:47:31.000343 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:47:31.002240 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:47:31.002322 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:47:31.003693 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:47:31.003798 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:47:31.005311 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:47:31.005431 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:47:31.020577 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:47:31.021900 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:47:31.022731 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:47:31.022839 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:47:31.024378 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:47:31.024592 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:47:31.029833 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:47:31.030634 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:47:31.034318 ignition[996]: INFO : Ignition 2.19.0 Jan 29 11:47:31.034318 ignition[996]: INFO : Stage: umount Jan 29 11:47:31.035884 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:47:31.035884 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:47:31.035884 ignition[996]: INFO : umount: umount passed Jan 29 11:47:31.035884 ignition[996]: INFO : Ignition finished successfully Jan 29 11:47:31.036181 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:47:31.037332 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:47:31.037441 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:47:31.039599 systemd[1]: Stopped target network.target - Network. Jan 29 11:47:31.040344 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:47:31.040474 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:47:31.042538 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:47:31.042580 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:47:31.044023 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:47:31.044060 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:47:31.045625 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:47:31.045665 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:47:31.047304 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:47:31.048646 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:47:31.055848 systemd-networkd[756]: eth0: DHCPv6 lease lost Jan 29 11:47:31.057407 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:47:31.057539 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:47:31.058786 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Jan 29 11:47:31.058815 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:47:31.070536 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:47:31.071223 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:47:31.071272 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:47:31.073043 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:47:31.074803 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:47:31.074899 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:47:31.083792 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:47:31.083863 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:47:31.085481 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:47:31.085522 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:47:31.087247 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:47:31.087288 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:47:31.089250 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:47:31.089371 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:47:31.091155 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:47:31.091243 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:47:31.092842 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:47:31.092890 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:47:31.094530 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:47:31.094565 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:47:31.096610 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:47:31.096656 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:47:31.099098 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:47:31.099144 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:47:31.101087 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:47:31.101125 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:47:31.113640 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:47:31.114422 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:47:31.114477 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:47:31.116173 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 11:47:31.116208 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:47:31.117697 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:47:31.117733 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:47:31.119302 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:47:31.119341 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 29 11:47:31.121247 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:47:31.122525 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:47:31.123392 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:47:31.123494 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:47:31.126117 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:47:31.126949 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:47:31.127011 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:47:31.128537 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:47:31.137952 systemd[1]: Switching root. Jan 29 11:47:31.165132 systemd-journald[236]: Journal stopped Jan 29 11:47:31.858302 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). Jan 29 11:47:31.858353 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:47:31.858366 kernel: SELinux: policy capability open_perms=1 Jan 29 11:47:31.858376 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:47:31.858400 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:47:31.858425 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:47:31.858437 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:47:31.858446 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:47:31.858456 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:47:31.858466 kernel: audit: type=1403 audit(1738151251.320:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:47:31.858479 systemd[1]: Successfully loaded SELinux policy in 33.624ms. Jan 29 11:47:31.858502 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.148ms. Jan 29 11:47:31.858515 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:47:31.858530 systemd[1]: Detected virtualization kvm. Jan 29 11:47:31.858540 systemd[1]: Detected architecture arm64. Jan 29 11:47:31.858551 systemd[1]: Detected first boot. Jan 29 11:47:31.858571 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:47:31.858582 zram_generator::config[1043]: No configuration found. Jan 29 11:47:31.858594 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:47:31.858605 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 11:47:31.858618 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 11:47:31.858631 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 11:47:31.858642 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:47:31.858653 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:47:31.858665 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:47:31.858675 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:47:31.858687 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:47:31.858702 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Jan 29 11:47:31.858713 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:47:31.858723 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:47:31.858736 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:47:31.858748 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:47:31.858759 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:47:31.858770 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:47:31.858781 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:47:31.858801 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:47:31.858812 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 29 11:47:31.858823 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:47:31.858834 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 11:47:31.858847 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 11:47:31.858858 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 11:47:31.858869 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:47:31.858880 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:47:31.858891 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:47:31.858902 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:47:31.858913 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:47:31.858925 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:47:31.858936 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:47:31.858947 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:47:31.858958 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:47:31.858969 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:47:31.858980 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:47:31.858991 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:47:31.859003 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:47:31.859014 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:47:31.859026 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:47:31.859037 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:47:31.859048 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:47:31.859059 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:47:31.859070 systemd[1]: Reached target machines.target - Containers. Jan 29 11:47:31.859081 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jan 29 11:47:31.859092 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:47:31.859102 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:47:31.859113 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:47:31.859125 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:47:31.859255 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:47:31.859269 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:47:31.859281 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:47:31.859291 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:47:31.859302 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:47:31.859320 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 11:47:31.859331 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 11:47:31.859344 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 11:47:31.859358 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 11:47:31.859370 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:47:31.859380 kernel: fuse: init (API version 7.39) Jan 29 11:47:31.859399 kernel: loop: module loaded Jan 29 11:47:31.859425 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:47:31.859437 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:47:31.859447 kernel: ACPI: bus type drm_connector registered Jan 29 11:47:31.859457 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 11:47:31.859471 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:47:31.859482 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 11:47:31.859492 systemd[1]: Stopped verity-setup.service. Jan 29 11:47:31.859503 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 11:47:31.859514 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:47:31.859550 systemd-journald[1103]: Collecting audit messages is disabled. Jan 29 11:47:31.859574 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:47:31.859585 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:47:31.859597 systemd-journald[1103]: Journal started Jan 29 11:47:31.859617 systemd-journald[1103]: Runtime Journal (/run/log/journal/093078b7ed64415db7b3968c129ba067) is 5.9M, max 47.3M, 41.4M free. Jan 29 11:47:31.674211 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:47:31.696067 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 11:47:31.696436 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 11:47:31.862126 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:47:31.862554 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:47:31.863698 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:47:31.864875 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jan 29 11:47:31.866010 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:47:31.866157 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:47:31.867310 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:47:31.867479 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:47:31.868692 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:47:31.868821 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:47:31.869970 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:47:31.870097 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:47:31.871427 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:47:31.871573 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:47:31.872704 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:47:31.873505 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:47:31.875021 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:47:31.876621 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:47:31.878236 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:47:31.880014 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:47:31.892205 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:47:31.900527 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:47:31.902396 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 11:47:31.903265 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:47:31.903299 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:47:31.905131 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:47:31.907119 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:47:31.908990 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:47:31.909939 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:47:31.911295 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:47:31.913047 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:47:31.914029 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:47:31.917585 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:47:31.918496 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:47:31.922623 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:47:31.922889 systemd-journald[1103]: Time spent on flushing to /var/log/journal/093078b7ed64415db7b3968c129ba067 is 17.501ms for 858 entries. 
Jan 29 11:47:31.922889 systemd-journald[1103]: System Journal (/var/log/journal/093078b7ed64415db7b3968c129ba067) is 8.0M, max 195.6M, 187.6M free. Jan 29 11:47:31.958974 systemd-journald[1103]: Received client request to flush runtime journal. Jan 29 11:47:31.959017 kernel: loop0: detected capacity change from 0 to 114328 Jan 29 11:47:31.959030 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:47:31.926758 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:47:31.928700 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:47:31.931573 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:47:31.933805 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:47:31.937603 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:47:31.938711 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:47:31.939982 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:47:31.942889 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:47:31.951607 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:47:31.963982 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:47:31.966169 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:47:31.969513 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:47:31.974085 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 11:47:31.979137 systemd-tmpfiles[1156]: ACLs are not supported, ignoring. Jan 29 11:47:31.979155 systemd-tmpfiles[1156]: ACLs are not supported, ignoring. Jan 29 11:47:31.983230 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:47:31.989438 kernel: loop1: detected capacity change from 0 to 189592 Jan 29 11:47:31.990724 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:47:31.993985 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 11:47:31.995696 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 11:47:32.019320 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:47:32.025433 kernel: loop2: detected capacity change from 0 to 114432 Jan 29 11:47:32.030783 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:47:32.042921 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jan 29 11:47:32.042941 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jan 29 11:47:32.046737 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:47:32.056487 kernel: loop3: detected capacity change from 0 to 114328 Jan 29 11:47:32.062456 kernel: loop4: detected capacity change from 0 to 189592 Jan 29 11:47:32.070481 kernel: loop5: detected capacity change from 0 to 114432 Jan 29 11:47:32.069714 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. 
Jan 29 11:47:32.070071 (sd-merge)[1181]: Merged extensions into '/usr'. Jan 29 11:47:32.073086 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:47:32.073101 systemd[1]: Reloading... Jan 29 11:47:32.119448 zram_generator::config[1204]: No configuration found. Jan 29 11:47:32.212170 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:47:32.212909 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:47:32.248516 systemd[1]: Reloading finished in 175 ms. Jan 29 11:47:32.273927 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 11:47:32.275156 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:47:32.287557 systemd[1]: Starting ensure-sysext.service... Jan 29 11:47:32.289180 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:47:32.300857 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:47:32.300875 systemd[1]: Reloading... Jan 29 11:47:32.309046 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:47:32.309676 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:47:32.310439 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:47:32.310751 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Jan 29 11:47:32.310880 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Jan 29 11:47:32.314838 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:47:32.314939 systemd-tmpfiles[1242]: Skipping /boot Jan 29 11:47:32.321861 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:47:32.321970 systemd-tmpfiles[1242]: Skipping /boot Jan 29 11:47:32.350437 zram_generator::config[1269]: No configuration found. Jan 29 11:47:32.430891 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:47:32.466289 systemd[1]: Reloading finished in 165 ms. Jan 29 11:47:32.485086 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:47:32.495792 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:47:32.507630 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 11:47:32.511489 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:47:32.519775 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:47:32.527734 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:47:32.546820 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:47:32.549693 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
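The sd-merge pass above found three extension images — 'containerd-flatcar', 'docker-flatcar', and 'kubernetes', the last being the kubernetes.raw link Ignition created — and overlaid them onto /usr, after which systemd reloaded. A minimal sketch, assuming only the standard sysext search directories, of how one could enumerate what sd-merge saw on this host and query the resulting merge state:

    # Sketch: list extension images in the sysext search paths, then ask
    # systemd-sysext for the current merge status.
    import os
    import subprocess

    for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        if os.path.isdir(d):
            for name in sorted(os.listdir(d)):
                # e.g. kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw
                print(d, name, "->", os.path.realpath(os.path.join(d, name)))

    subprocess.run(["systemd-sysext", "status"], check=False)

On this boot it is this merge that makes the containerd and docker payloads (and the kubernetes v1.31.0 sysext) visible under /usr before containerd.service starts further down.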
Jan 29 11:47:32.551206 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:47:32.554953 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:47:32.557358 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:47:32.560288 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:47:32.564887 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:47:32.566452 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:47:32.568521 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:47:32.571616 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:47:32.575309 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:47:32.575492 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:47:32.575688 systemd-udevd[1311]: Using default interface naming scheme 'v255'. Jan 29 11:47:32.577249 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:47:32.577376 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:47:32.579212 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:47:32.580030 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:47:32.587576 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:47:32.587771 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:47:32.592237 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:47:32.595243 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:47:32.607713 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:47:32.609598 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:47:32.613511 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:47:32.614354 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:47:32.614948 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:47:32.616327 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:47:32.619526 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:47:32.620963 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:47:32.626858 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:47:32.627013 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:47:32.628255 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:47:32.628391 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:47:32.631931 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 29 11:47:32.632057 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:47:32.644386 augenrules[1355]: No rules Jan 29 11:47:32.645462 systemd[1]: Finished ensure-sysext.service. Jan 29 11:47:32.647446 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 11:47:32.657333 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:47:32.668510 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:47:32.672452 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:47:32.675753 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:47:32.680218 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:47:32.680590 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1349) Jan 29 11:47:32.681616 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:47:32.684920 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:47:32.688659 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 11:47:32.689894 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:47:32.690318 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:47:32.691747 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:47:32.693113 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:47:32.694562 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:47:32.695581 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:47:32.695703 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:47:32.697781 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 29 11:47:32.708309 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:47:32.714129 systemd-resolved[1310]: Positive Trust Anchors: Jan 29 11:47:32.714148 systemd-resolved[1310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:47:32.714180 systemd-resolved[1310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:47:32.721740 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:47:32.721915 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:47:32.723653 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 29 11:47:32.728745 systemd-resolved[1310]: Defaulting to hostname 'linux'. Jan 29 11:47:32.741403 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:47:32.742621 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:47:32.753394 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:47:32.760712 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:47:32.769812 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:47:32.770798 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 11:47:32.772169 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:47:32.776976 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:47:32.782627 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:47:32.785428 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:47:32.790634 systemd-networkd[1383]: lo: Link UP Jan 29 11:47:32.790641 systemd-networkd[1383]: lo: Gained carrier Jan 29 11:47:32.798791 systemd-networkd[1383]: Enumeration completed Jan 29 11:47:32.799072 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:47:32.800373 systemd[1]: Reached target network.target - Network. Jan 29 11:47:32.806506 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:47:32.806515 systemd-networkd[1383]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:47:32.812596 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:47:32.819821 systemd-networkd[1383]: eth0: Link UP Jan 29 11:47:32.819830 systemd-networkd[1383]: eth0: Gained carrier Jan 29 11:47:32.819843 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:47:32.820422 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:47:32.839462 systemd-networkd[1383]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:47:32.842468 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection. Jan 29 11:47:32.843022 systemd-timesyncd[1384]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 29 11:47:32.843073 systemd-timesyncd[1384]: Initial clock synchronization to Wed 2025-01-29 11:47:33.002067 UTC. Jan 29 11:47:32.850314 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:47:32.853249 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:47:32.854820 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:47:32.855854 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:47:32.856888 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:47:32.857912 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
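The networkd/timesyncd entries above also record the first clock step: eth0 obtained 10.0.0.26/16 via DHCP, timesyncd contacted the gateway's NTP service at 10.0.0.1:123 and, in an entry stamped 11:47:32.843073 local time, synchronized the clock to 11:47:33.002067 UTC — a forward step of roughly 159 ms:

    # Arithmetic on the timesyncd entry above: entry timestamp vs. the time it set.
    from datetime import datetime

    logged_at = datetime.fromisoformat("2025-01-29 11:47:32.843073")
    set_to = datetime.fromisoformat("2025-01-29 11:47:33.002067")
    print(set_to - logged_at)  # 0:00:00.158994 -> ~159 ms forward

This is also why timestamps logged shortly after this point can appear to jump slightly relative to the entries before it.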
Jan 29 11:47:32.859054 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:47:32.859968 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:47:32.861192 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:47:32.862311 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:47:32.862347 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:47:32.863030 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:47:32.864544 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:47:32.867646 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:47:32.879271 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:47:32.881176 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:47:32.882541 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:47:32.883395 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:47:32.884112 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:47:32.884819 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:47:32.884849 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:47:32.885747 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:47:32.887497 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:47:32.888738 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:47:32.890645 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:47:32.898696 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:47:32.899543 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:47:32.904092 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:47:32.907880 jq[1413]: false Jan 29 11:47:32.907907 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 11:47:32.912605 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:47:32.917674 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:47:32.921694 dbus-daemon[1412]: [system] SELinux support is enabled Jan 29 11:47:32.922634 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:47:32.925283 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:47:32.925694 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jan 29 11:47:32.925877 extend-filesystems[1414]: Found loop3 Jan 29 11:47:32.927279 extend-filesystems[1414]: Found loop4 Jan 29 11:47:32.927279 extend-filesystems[1414]: Found loop5 Jan 29 11:47:32.927279 extend-filesystems[1414]: Found vda Jan 29 11:47:32.927279 extend-filesystems[1414]: Found vda1 Jan 29 11:47:32.927279 extend-filesystems[1414]: Found vda2 Jan 29 11:47:32.927279 extend-filesystems[1414]: Found vda3 Jan 29 11:47:32.927279 extend-filesystems[1414]: Found usr Jan 29 11:47:32.927279 extend-filesystems[1414]: Found vda4 Jan 29 11:47:32.927279 extend-filesystems[1414]: Found vda6 Jan 29 11:47:32.927279 extend-filesystems[1414]: Found vda7 Jan 29 11:47:32.927279 extend-filesystems[1414]: Found vda9 Jan 29 11:47:32.927279 extend-filesystems[1414]: Checking size of /dev/vda9 Jan 29 11:47:32.927088 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:47:32.930240 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:47:32.935966 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:47:32.939522 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:47:32.946868 jq[1429]: true Jan 29 11:47:32.949128 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:47:32.951450 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:47:32.951723 extend-filesystems[1414]: Resized partition /dev/vda9 Jan 29 11:47:32.956983 extend-filesystems[1436]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:47:32.951740 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:47:32.961588 update_engine[1427]: I20250129 11:47:32.959264 1427 main.cc:92] Flatcar Update Engine starting Jan 29 11:47:32.951885 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:47:32.959757 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:47:32.959904 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:47:32.970436 update_engine[1427]: I20250129 11:47:32.967813 1427 update_check_scheduler.cc:74] Next update check in 9m16s Jan 29 11:47:32.973828 (ntainerd)[1439]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:47:32.974352 jq[1438]: true Jan 29 11:47:32.976055 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1361) Jan 29 11:47:32.976093 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 11:47:32.995095 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:47:33.010127 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 11:47:32.996178 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:47:32.996204 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:47:32.998213 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:47:32.998229 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
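The extend-filesystems run above walks the block devices, finds /dev/vda9 smaller than its partition, and kicks off an online ext4 resize from 553472 to 1864699 4 KiB blocks (the completion shows up just below). In byte terms that is roughly 2.1 GiB growing to about 7.1 GiB:

    # Arithmetic for the ext4 resize of /dev/vda9 logged above (4 KiB blocks).
    BLOCK = 4096
    old_blocks, new_blocks = 553_472, 1_864_699
    gib = 1024 ** 3
    print(f"{old_blocks * BLOCK / gib:.2f} GiB -> {new_blocks * BLOCK / gib:.2f} GiB")
    # 2.11 GiB -> 7.11 GiB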
Jan 29 11:47:33.006846 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:47:33.011370 extend-filesystems[1436]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 11:47:33.011370 extend-filesystems[1436]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 11:47:33.011370 extend-filesystems[1436]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 11:47:33.014023 extend-filesystems[1414]: Resized filesystem in /dev/vda9 Jan 29 11:47:33.012082 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:47:33.012248 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:47:33.016984 tar[1437]: linux-arm64/helm Jan 29 11:47:33.036625 systemd-logind[1425]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 11:47:33.039701 systemd-logind[1425]: New seat seat0. Jan 29 11:47:33.040928 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:47:33.076288 bash[1467]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:47:33.077791 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:47:33.078402 locksmithd[1452]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:47:33.079366 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 11:47:33.193749 containerd[1439]: time="2025-01-29T11:47:33.193666545Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 11:47:33.222566 containerd[1439]: time="2025-01-29T11:47:33.222528687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:47:33.224058 containerd[1439]: time="2025-01-29T11:47:33.223986723Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:47:33.224058 containerd[1439]: time="2025-01-29T11:47:33.224024989Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:47:33.224058 containerd[1439]: time="2025-01-29T11:47:33.224040900Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:47:33.224204 containerd[1439]: time="2025-01-29T11:47:33.224181114Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:47:33.224231 containerd[1439]: time="2025-01-29T11:47:33.224205306Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:47:33.224277 containerd[1439]: time="2025-01-29T11:47:33.224259483Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:47:33.224277 containerd[1439]: time="2025-01-29T11:47:33.224275189Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:47:33.224462 containerd[1439]: time="2025-01-29T11:47:33.224419157Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:47:33.224495 containerd[1439]: time="2025-01-29T11:47:33.224460932Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:47:33.224495 containerd[1439]: time="2025-01-29T11:47:33.224476475Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:47:33.224495 containerd[1439]: time="2025-01-29T11:47:33.224492222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:47:33.224585 containerd[1439]: time="2025-01-29T11:47:33.224566674Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:47:33.224816 containerd[1439]: time="2025-01-29T11:47:33.224793253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:47:33.224929 containerd[1439]: time="2025-01-29T11:47:33.224908705Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:47:33.224929 containerd[1439]: time="2025-01-29T11:47:33.224926491Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:47:33.225034 containerd[1439]: time="2025-01-29T11:47:33.225016119Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:47:33.225076 containerd[1439]: time="2025-01-29T11:47:33.225061199Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:47:33.228332 containerd[1439]: time="2025-01-29T11:47:33.228305350Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:47:33.228362 containerd[1439]: time="2025-01-29T11:47:33.228351245Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:47:33.228381 containerd[1439]: time="2025-01-29T11:47:33.228368012Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:47:33.228400 containerd[1439]: time="2025-01-29T11:47:33.228383555Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:47:33.228418 containerd[1439]: time="2025-01-29T11:47:33.228401383Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:47:33.228588 containerd[1439]: time="2025-01-29T11:47:33.228564565Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:47:33.229792 containerd[1439]: time="2025-01-29T11:47:33.229759714Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:47:33.229940 containerd[1439]: time="2025-01-29T11:47:33.229917920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 29 11:47:33.229966 containerd[1439]: time="2025-01-29T11:47:33.229942071Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:47:33.229966 containerd[1439]: time="2025-01-29T11:47:33.229959858Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:47:33.230001 containerd[1439]: time="2025-01-29T11:47:33.229974422Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:47:33.230001 containerd[1439]: time="2025-01-29T11:47:33.229987150Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:47:33.230001 containerd[1439]: time="2025-01-29T11:47:33.229998899Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:47:33.230056 containerd[1439]: time="2025-01-29T11:47:33.230011872Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:47:33.230056 containerd[1439]: time="2025-01-29T11:47:33.230025457Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:47:33.230056 containerd[1439]: time="2025-01-29T11:47:33.230039083Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:47:33.230056 containerd[1439]: time="2025-01-29T11:47:33.230051811Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:47:33.230121 containerd[1439]: time="2025-01-29T11:47:33.230064866Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:47:33.230121 containerd[1439]: time="2025-01-29T11:47:33.230084448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:47:33.230121 containerd[1439]: time="2025-01-29T11:47:33.230098033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:47:33.230121 containerd[1439]: time="2025-01-29T11:47:33.230109374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:47:33.230194 containerd[1439]: time="2025-01-29T11:47:33.230121327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:47:33.230194 containerd[1439]: time="2025-01-29T11:47:33.230133158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:47:33.230194 containerd[1439]: time="2025-01-29T11:47:33.230146498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:47:33.230194 containerd[1439]: time="2025-01-29T11:47:33.230158981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:47:33.230194 containerd[1439]: time="2025-01-29T11:47:33.230171546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:47:33.230194 containerd[1439]: time="2025-01-29T11:47:33.230183663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 29 11:47:33.230295 containerd[1439]: time="2025-01-29T11:47:33.230198186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:47:33.230295 containerd[1439]: time="2025-01-29T11:47:33.230210710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:47:33.230295 containerd[1439]: time="2025-01-29T11:47:33.230222826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:47:33.230295 containerd[1439]: time="2025-01-29T11:47:33.230234412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:47:33.230295 containerd[1439]: time="2025-01-29T11:47:33.230253464Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:47:33.230295 containerd[1439]: time="2025-01-29T11:47:33.230272638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:47:33.230295 containerd[1439]: time="2025-01-29T11:47:33.230284469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:47:33.230295 containerd[1439]: time="2025-01-29T11:47:33.230296299Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:47:33.230444 containerd[1439]: time="2025-01-29T11:47:33.230408487Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:47:33.230474 containerd[1439]: time="2025-01-29T11:47:33.230424887Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:47:33.230496 containerd[1439]: time="2025-01-29T11:47:33.230476453Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:47:33.230496 containerd[1439]: time="2025-01-29T11:47:33.230490487Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:47:33.230532 containerd[1439]: time="2025-01-29T11:47:33.230500563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:47:33.230532 containerd[1439]: time="2025-01-29T11:47:33.230512802Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:47:33.230532 containerd[1439]: time="2025-01-29T11:47:33.230522470Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:47:33.230582 containerd[1439]: time="2025-01-29T11:47:33.230532221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 11:47:33.230992 containerd[1439]: time="2025-01-29T11:47:33.230900646Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:47:33.230992 containerd[1439]: time="2025-01-29T11:47:33.230993579Z" level=info msg="Connect containerd service" Jan 29 11:47:33.231118 containerd[1439]: time="2025-01-29T11:47:33.231020708Z" level=info msg="using legacy CRI server" Jan 29 11:47:33.231118 containerd[1439]: time="2025-01-29T11:47:33.231027643Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:47:33.231118 containerd[1439]: time="2025-01-29T11:47:33.231109316Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:47:33.233434 containerd[1439]: time="2025-01-29T11:47:33.231766126Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:47:33.233434 
containerd[1439]: time="2025-01-29T11:47:33.232023628Z" level=info msg="Start subscribing containerd event" Jan 29 11:47:33.233434 containerd[1439]: time="2025-01-29T11:47:33.232078661Z" level=info msg="Start recovering state" Jan 29 11:47:33.233434 containerd[1439]: time="2025-01-29T11:47:33.232139814Z" level=info msg="Start event monitor" Jan 29 11:47:33.233434 containerd[1439]: time="2025-01-29T11:47:33.232151563Z" level=info msg="Start snapshots syncer" Jan 29 11:47:33.233434 containerd[1439]: time="2025-01-29T11:47:33.232160334Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:47:33.233434 containerd[1439]: time="2025-01-29T11:47:33.232167963Z" level=info msg="Start streaming server" Jan 29 11:47:33.233434 containerd[1439]: time="2025-01-29T11:47:33.232250411Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:47:33.233434 containerd[1439]: time="2025-01-29T11:47:33.232289615Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:47:33.232415 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:47:33.233649 containerd[1439]: time="2025-01-29T11:47:33.233557014Z" level=info msg="containerd successfully booted in 0.042340s" Jan 29 11:47:33.356646 tar[1437]: linux-arm64/LICENSE Jan 29 11:47:33.356720 tar[1437]: linux-arm64/README.md Jan 29 11:47:33.369664 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:47:33.482868 sshd_keygen[1432]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:47:33.501448 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:47:33.514755 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:47:33.521054 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:47:33.521234 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:47:33.525628 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:47:33.535521 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:47:33.538049 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:47:33.539934 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 29 11:47:33.540992 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:47:34.156114 systemd-networkd[1383]: eth0: Gained IPv6LL Jan 29 11:47:34.159276 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:47:34.163043 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:47:34.173689 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 11:47:34.175868 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:47:34.177690 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:47:34.191591 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 11:47:34.191788 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 11:47:34.193753 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:47:34.202058 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:47:34.666117 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:47:34.667506 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jan 29 11:47:34.669740 systemd[1]: Startup finished in 543ms (kernel) + 4.614s (initrd) + 3.386s (userspace) = 8.545s. Jan 29 11:47:34.670254 (kubelet)[1524]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:47:35.089912 kubelet[1524]: E0129 11:47:35.089803 1524 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:47:35.092050 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:47:35.092177 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:47:39.802113 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:47:39.803270 systemd[1]: Started sshd@0-10.0.0.26:22-10.0.0.1:58382.service - OpenSSH per-connection server daemon (10.0.0.1:58382). Jan 29 11:47:39.864459 sshd[1537]: Accepted publickey for core from 10.0.0.1 port 58382 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:47:39.865449 sshd[1537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:47:39.877138 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:47:39.890783 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:47:39.892495 systemd-logind[1425]: New session 1 of user core. Jan 29 11:47:39.909543 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:47:39.913755 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:47:39.919790 (systemd)[1541]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:47:39.992821 systemd[1541]: Queued start job for default target default.target. Jan 29 11:47:40.003350 systemd[1541]: Created slice app.slice - User Application Slice. Jan 29 11:47:40.003392 systemd[1541]: Reached target paths.target - Paths. Jan 29 11:47:40.003404 systemd[1541]: Reached target timers.target - Timers. Jan 29 11:47:40.004666 systemd[1541]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:47:40.014393 systemd[1541]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:47:40.014486 systemd[1541]: Reached target sockets.target - Sockets. Jan 29 11:47:40.014503 systemd[1541]: Reached target basic.target - Basic System. Jan 29 11:47:40.014539 systemd[1541]: Reached target default.target - Main User Target. Jan 29 11:47:40.014566 systemd[1541]: Startup finished in 89ms. Jan 29 11:47:40.014788 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:47:40.016171 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:47:40.076843 systemd[1]: Started sshd@1-10.0.0.26:22-10.0.0.1:58398.service - OpenSSH per-connection server daemon (10.0.0.1:58398). Jan 29 11:47:40.109601 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 58398 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:47:40.110833 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:47:40.114854 systemd-logind[1425]: New session 2 of user core. Jan 29 11:47:40.122579 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 29 11:47:40.175600 sshd[1552]: pam_unix(sshd:session): session closed for user core Jan 29 11:47:40.186035 systemd[1]: sshd@1-10.0.0.26:22-10.0.0.1:58398.service: Deactivated successfully. Jan 29 11:47:40.187623 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:47:40.190623 systemd-logind[1425]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:47:40.197741 systemd[1]: Started sshd@2-10.0.0.26:22-10.0.0.1:58412.service - OpenSSH per-connection server daemon (10.0.0.1:58412). Jan 29 11:47:40.198771 systemd-logind[1425]: Removed session 2. Jan 29 11:47:40.226094 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 58412 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:47:40.227411 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:47:40.231374 systemd-logind[1425]: New session 3 of user core. Jan 29 11:47:40.237574 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:47:40.289171 sshd[1559]: pam_unix(sshd:session): session closed for user core Jan 29 11:47:40.302995 systemd[1]: sshd@2-10.0.0.26:22-10.0.0.1:58412.service: Deactivated successfully. Jan 29 11:47:40.304406 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:47:40.307554 systemd-logind[1425]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:47:40.308715 systemd[1]: Started sshd@3-10.0.0.26:22-10.0.0.1:58428.service - OpenSSH per-connection server daemon (10.0.0.1:58428). Jan 29 11:47:40.311488 systemd-logind[1425]: Removed session 3. Jan 29 11:47:40.347289 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 58428 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:47:40.347781 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:47:40.352329 systemd-logind[1425]: New session 4 of user core. Jan 29 11:47:40.362591 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:47:40.415208 sshd[1566]: pam_unix(sshd:session): session closed for user core Jan 29 11:47:40.428737 systemd[1]: sshd@3-10.0.0.26:22-10.0.0.1:58428.service: Deactivated successfully. Jan 29 11:47:40.430092 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:47:40.432568 systemd-logind[1425]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:47:40.433718 systemd[1]: Started sshd@4-10.0.0.26:22-10.0.0.1:58430.service - OpenSSH per-connection server daemon (10.0.0.1:58430). Jan 29 11:47:40.436715 systemd-logind[1425]: Removed session 4. Jan 29 11:47:40.467740 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 58430 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:47:40.469040 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:47:40.473097 systemd-logind[1425]: New session 5 of user core. Jan 29 11:47:40.489627 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:47:40.559469 sudo[1576]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:47:40.560056 sudo[1576]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:47:40.573390 sudo[1576]: pam_unix(sudo:session): session closed for user root Jan 29 11:47:40.575904 sshd[1573]: pam_unix(sshd:session): session closed for user core Jan 29 11:47:40.592147 systemd[1]: sshd@4-10.0.0.26:22-10.0.0.1:58430.service: Deactivated successfully. 
Jan 29 11:47:40.597476 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:47:40.599891 systemd-logind[1425]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:47:40.603692 systemd[1]: Started sshd@5-10.0.0.26:22-10.0.0.1:58434.service - OpenSSH per-connection server daemon (10.0.0.1:58434). Jan 29 11:47:40.604849 systemd-logind[1425]: Removed session 5. Jan 29 11:47:40.611629 kernel: hrtimer: interrupt took 8561154 ns Jan 29 11:47:40.636863 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 58434 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:47:40.637443 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:47:40.641434 systemd-logind[1425]: New session 6 of user core. Jan 29 11:47:40.651572 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:47:40.704359 sudo[1585]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:47:40.704665 sudo[1585]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:47:40.707696 sudo[1585]: pam_unix(sudo:session): session closed for user root Jan 29 11:47:40.712246 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 11:47:40.712665 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:47:40.728724 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 11:47:40.729895 auditctl[1588]: No rules Jan 29 11:47:40.730753 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:47:40.730975 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 29 11:47:40.736759 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 11:47:40.762095 augenrules[1606]: No rules Jan 29 11:47:40.763216 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 11:47:40.765909 sudo[1584]: pam_unix(sudo:session): session closed for user root Jan 29 11:47:40.768278 sshd[1581]: pam_unix(sshd:session): session closed for user core Jan 29 11:47:40.774630 systemd[1]: sshd@5-10.0.0.26:22-10.0.0.1:58434.service: Deactivated successfully. Jan 29 11:47:40.775960 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:47:40.777488 systemd-logind[1425]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:47:40.784738 systemd[1]: Started sshd@6-10.0.0.26:22-10.0.0.1:58440.service - OpenSSH per-connection server daemon (10.0.0.1:58440). Jan 29 11:47:40.786520 systemd-logind[1425]: Removed session 6. Jan 29 11:47:40.815230 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 58440 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:47:40.816779 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:47:40.820214 systemd-logind[1425]: New session 7 of user core. Jan 29 11:47:40.828567 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:47:40.879917 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:47:40.880951 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:47:41.190662 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 29 11:47:41.190775 (dockerd)[1635]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:47:41.441261 dockerd[1635]: time="2025-01-29T11:47:41.440855813Z" level=info msg="Starting up" Jan 29 11:47:41.586578 dockerd[1635]: time="2025-01-29T11:47:41.586531510Z" level=info msg="Loading containers: start." Jan 29 11:47:41.672199 kernel: Initializing XFRM netlink socket Jan 29 11:47:41.739522 systemd-networkd[1383]: docker0: Link UP Jan 29 11:47:41.765743 dockerd[1635]: time="2025-01-29T11:47:41.765692276Z" level=info msg="Loading containers: done." Jan 29 11:47:41.778907 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3859713270-merged.mount: Deactivated successfully. Jan 29 11:47:41.781125 dockerd[1635]: time="2025-01-29T11:47:41.780483967Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:47:41.781125 dockerd[1635]: time="2025-01-29T11:47:41.780716264Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 11:47:41.781125 dockerd[1635]: time="2025-01-29T11:47:41.780846870Z" level=info msg="Daemon has completed initialization" Jan 29 11:47:41.812331 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 11:47:41.812641 dockerd[1635]: time="2025-01-29T11:47:41.812058788Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:47:42.343816 containerd[1439]: time="2025-01-29T11:47:42.343768618Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 29 11:47:43.148987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1579933714.mount: Deactivated successfully. 
Jan 29 11:47:44.632567 containerd[1439]: time="2025-01-29T11:47:44.632519035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:47:44.634646 containerd[1439]: time="2025-01-29T11:47:44.634609563Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=25618072" Jan 29 11:47:44.636492 containerd[1439]: time="2025-01-29T11:47:44.635660473Z" level=info msg="ImageCreate event name:\"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:47:44.638909 containerd[1439]: time="2025-01-29T11:47:44.638878541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:47:44.640321 containerd[1439]: time="2025-01-29T11:47:44.640183529Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"25614870\" in 2.296371222s" Jan 29 11:47:44.640409 containerd[1439]: time="2025-01-29T11:47:44.640323728Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\"" Jan 29 11:47:44.641012 containerd[1439]: time="2025-01-29T11:47:44.640984138Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 29 11:47:45.319076 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 11:47:45.328584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:47:45.421406 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:47:45.425934 (kubelet)[1845]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:47:45.467114 kubelet[1845]: E0129 11:47:45.467045 1845 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:47:45.469283 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:47:45.469437 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
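This kubelet exit is the same failure as the first start: /var/lib/kubelet/config.yaml does not exist yet, so the unit dies and systemd keeps scheduling restarts until something creates it. On a kubeadm-provisioned node that file is written by kubeadm init/join, not by hand; the sketch below only illustrates the shape of the file with placeholder values, though cgroupDriver: systemd and staticPodPath: /etc/kubernetes/manifests do match what the kubelet reports once it finally starts, further down in this log.

    // A minimal sketch of what unblocks the restart loop above. Normally
    // kubeadm writes this file; the field values are illustrative.
    package main

    import (
        "log"
        "os"
    )

    const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    `

    func main() {
        if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
            log.Fatal(err)
        }
        log.Println("kubelet config in place; the next scheduled restart can succeed")
    }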
Jan 29 11:47:46.064394 containerd[1439]: time="2025-01-29T11:47:46.064342597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:47:46.064881 containerd[1439]: time="2025-01-29T11:47:46.064830502Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=22469469" Jan 29 11:47:46.065536 containerd[1439]: time="2025-01-29T11:47:46.065494180Z" level=info msg="ImageCreate event name:\"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:47:46.068761 containerd[1439]: time="2025-01-29T11:47:46.068722016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:47:46.069998 containerd[1439]: time="2025-01-29T11:47:46.069958977Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"23873257\" in 1.428942582s" Jan 29 11:47:46.070042 containerd[1439]: time="2025-01-29T11:47:46.069996830Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\"" Jan 29 11:47:46.070665 containerd[1439]: time="2025-01-29T11:47:46.070442547Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 29 11:47:47.392949 containerd[1439]: time="2025-01-29T11:47:47.392901585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:47:47.394237 containerd[1439]: time="2025-01-29T11:47:47.394205091Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=17024219" Jan 29 11:47:47.395180 containerd[1439]: time="2025-01-29T11:47:47.395135457Z" level=info msg="ImageCreate event name:\"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:47:47.398445 containerd[1439]: time="2025-01-29T11:47:47.398392617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:47:47.399141 containerd[1439]: time="2025-01-29T11:47:47.399097894Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"18428025\" in 1.328622436s" Jan 29 11:47:47.399177 containerd[1439]: time="2025-01-29T11:47:47.399140063Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\"" Jan 29 11:47:47.399752 
containerd[1439]: time="2025-01-29T11:47:47.399562073Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 11:47:48.670549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount222755245.mount: Deactivated successfully. Jan 29 11:47:48.876321 containerd[1439]: time="2025-01-29T11:47:48.876254493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:47:48.877050 containerd[1439]: time="2025-01-29T11:47:48.877002535Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=26772119" Jan 29 11:47:48.877476 containerd[1439]: time="2025-01-29T11:47:48.877437941Z" level=info msg="ImageCreate event name:\"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:47:48.879347 containerd[1439]: time="2025-01-29T11:47:48.879307906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:47:48.879997 containerd[1439]: time="2025-01-29T11:47:48.879956723Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"26771136\" in 1.480364401s" Jan 29 11:47:48.880028 containerd[1439]: time="2025-01-29T11:47:48.879994865Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\"" Jan 29 11:47:48.880454 containerd[1439]: time="2025-01-29T11:47:48.880428105Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 11:47:49.720827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4191359821.mount: Deactivated successfully. 
Jan 29 11:47:50.661023 containerd[1439]: time="2025-01-29T11:47:50.660957813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:47:50.662294 containerd[1439]: time="2025-01-29T11:47:50.662260605Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 29 11:47:50.663558 containerd[1439]: time="2025-01-29T11:47:50.663533495Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:47:50.668457 containerd[1439]: time="2025-01-29T11:47:50.668381517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:47:50.669520 containerd[1439]: time="2025-01-29T11:47:50.669473116Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.789012811s" Jan 29 11:47:50.669520 containerd[1439]: time="2025-01-29T11:47:50.669517086Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 29 11:47:50.670485 containerd[1439]: time="2025-01-29T11:47:50.670454849Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 11:47:51.283107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2445708558.mount: Deactivated successfully. 
Jan 29 11:47:51.287115 containerd[1439]: time="2025-01-29T11:47:51.287072955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:47:51.287900 containerd[1439]: time="2025-01-29T11:47:51.287715709Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jan 29 11:47:51.288577 containerd[1439]: time="2025-01-29T11:47:51.288541511Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:47:51.290923 containerd[1439]: time="2025-01-29T11:47:51.290883515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:47:51.291904 containerd[1439]: time="2025-01-29T11:47:51.291871888Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 621.383452ms" Jan 29 11:47:51.291974 containerd[1439]: time="2025-01-29T11:47:51.291906230Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 29 11:47:51.292340 containerd[1439]: time="2025-01-29T11:47:51.292312279Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 29 11:47:52.008953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount110074531.mount: Deactivated successfully. Jan 29 11:47:54.408696 containerd[1439]: time="2025-01-29T11:47:54.408632376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:47:54.409257 containerd[1439]: time="2025-01-29T11:47:54.409225450Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Jan 29 11:47:54.410256 containerd[1439]: time="2025-01-29T11:47:54.410218485Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:47:54.413477 containerd[1439]: time="2025-01-29T11:47:54.413400593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:47:54.414896 containerd[1439]: time="2025-01-29T11:47:54.414864955Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.122520584s" Jan 29 11:47:54.414953 containerd[1439]: time="2025-01-29T11:47:54.414902200Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jan 29 11:47:55.569008 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
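At this point all seven images the control plane needs have been pulled through containerd's CRI plugin: kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause:3.10, and etcd, each as a PullImage/"Pulled image ... in Ns" pair above. A rough equivalent through containerd's Go client, reusing the socket path this log shows containerd serving on (/run/containerd/containerd.sock) and the "k8s.io" namespace the CRI plugin keeps Kubernetes images under; this is a sketch of the same operation, not the CRI plugin's actual code path.

    package main

    import (
        "context"
        "log"
        "time"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // The CRI plugin stores Kubernetes images in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        start := time.Now()
        img, err := client.Pull(ctx, "registry.k8s.io/etcd:3.5.15-0", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("pulled %s in %s", img.Name(), time.Since(start))
    }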
Jan 29 11:47:55.578623 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:47:55.664742 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:47:55.667515 (kubelet)[2001]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:47:55.699130 kubelet[2001]: E0129 11:47:55.699078 2001 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:47:55.701656 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:47:55.701799 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:47:59.957627 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:47:59.969656 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:47:59.988799 systemd[1]: Reloading requested from client PID 2016 ('systemctl') (unit session-7.scope)... Jan 29 11:47:59.988819 systemd[1]: Reloading... Jan 29 11:48:00.061460 zram_generator::config[2058]: No configuration found. Jan 29 11:48:00.169134 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:48:00.221394 systemd[1]: Reloading finished in 232 ms. Jan 29 11:48:00.269129 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:48:00.271458 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:48:00.271640 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:48:00.273053 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:48:00.361608 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:48:00.364942 (kubelet)[2102]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:48:00.399249 kubelet[2102]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:48:00.399249 kubelet[2102]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:48:00.399249 kubelet[2102]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:48:00.399589 kubelet[2102]: I0129 11:48:00.399554 2102 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:48:01.242041 kubelet[2102]: I0129 11:48:01.241986 2102 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:48:01.242041 kubelet[2102]: I0129 11:48:01.242022 2102 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:48:01.242272 kubelet[2102]: I0129 11:48:01.242247 2102 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:48:01.272235 kubelet[2102]: E0129 11:48:01.272192 2102 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:48:01.273037 kubelet[2102]: I0129 11:48:01.273011 2102 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:48:01.281975 kubelet[2102]: E0129 11:48:01.281926 2102 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:48:01.281975 kubelet[2102]: I0129 11:48:01.281958 2102 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:48:01.285269 kubelet[2102]: I0129 11:48:01.285236 2102 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:48:01.285699 kubelet[2102]: I0129 11:48:01.285675 2102 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:48:01.285826 kubelet[2102]: I0129 11:48:01.285794 2102 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:48:01.285992 kubelet[2102]: I0129 11:48:01.285822 2102 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:48:01.286125 kubelet[2102]: I0129 11:48:01.286114 2102 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:48:01.286125 kubelet[2102]: I0129 11:48:01.286125 2102 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:48:01.286333 kubelet[2102]: I0129 11:48:01.286301 2102 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:48:01.288049 kubelet[2102]: I0129 11:48:01.288018 2102 kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:48:01.288083 kubelet[2102]: I0129 11:48:01.288049 2102 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:48:01.288167 kubelet[2102]: I0129 11:48:01.288148 2102 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:48:01.288167 kubelet[2102]: I0129 11:48:01.288163 2102 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:48:01.292165 kubelet[2102]: W0129 11:48:01.292119 2102 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Jan 29 11:48:01.292358 kubelet[2102]: I0129 11:48:01.292248 2102 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 11:48:01.292358 kubelet[2102]: E0129 11:48:01.292285 2102 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:48:01.292358 kubelet[2102]: W0129 11:48:01.292121 2102 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Jan 29 11:48:01.292358 kubelet[2102]: E0129 11:48:01.292339 2102 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:48:01.294018 kubelet[2102]: I0129 11:48:01.294002 2102 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:48:01.295032 kubelet[2102]: W0129 11:48:01.295005 2102 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 11:48:01.295785 kubelet[2102]: I0129 11:48:01.295735 2102 server.go:1269] "Started kubelet" Jan 29 11:48:01.297250 kubelet[2102]: I0129 11:48:01.296693 2102 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:48:01.297250 kubelet[2102]: I0129 11:48:01.296985 2102 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:48:01.297250 kubelet[2102]: I0129 11:48:01.297106 2102 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:48:01.297250 kubelet[2102]: I0129 11:48:01.297149 2102 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:48:01.298657 kubelet[2102]: I0129 11:48:01.298350 2102 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:48:01.299440 kubelet[2102]: I0129 11:48:01.299223 2102 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:48:01.301649 kubelet[2102]: I0129 11:48:01.300778 2102 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:48:01.301649 kubelet[2102]: I0129 11:48:01.300882 2102 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:48:01.301649 kubelet[2102]: I0129 11:48:01.300927 2102 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:48:01.301649 kubelet[2102]: I0129 11:48:01.301097 2102 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:48:01.301649 kubelet[2102]: I0129 11:48:01.301212 2102 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:48:01.301649 kubelet[2102]: W0129 11:48:01.301221 2102 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Jan 29 11:48:01.301649 kubelet[2102]: E0129 11:48:01.301260 2102 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:48:01.301893 kubelet[2102]: E0129 11:48:01.301691 2102 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:48:01.301893 kubelet[2102]: E0129 11:48:01.301765 2102 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="200ms" Jan 29 11:48:01.301893 kubelet[2102]: E0129 11:48:01.301842 2102 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:48:01.302882 kubelet[2102]: I0129 11:48:01.302860 2102 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:48:01.303553 kubelet[2102]: E0129 11:48:01.301285 2102 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.26:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.26:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f2761978555ed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:48:01.295709677 +0000 UTC m=+0.927801031,LastTimestamp:2025-01-29 11:48:01.295709677 +0000 UTC m=+0.927801031,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:48:01.315837 kubelet[2102]: I0129 11:48:01.315814 2102 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:48:01.316046 kubelet[2102]: I0129 11:48:01.316033 2102 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:48:01.316118 kubelet[2102]: I0129 11:48:01.316108 2102 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:48:01.319003 kubelet[2102]: I0129 11:48:01.318885 2102 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:48:01.319454 kubelet[2102]: I0129 11:48:01.319144 2102 policy_none.go:49] "None policy: Start" Jan 29 11:48:01.320146 kubelet[2102]: I0129 11:48:01.320067 2102 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:48:01.320146 kubelet[2102]: I0129 11:48:01.320091 2102 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:48:01.320146 kubelet[2102]: I0129 11:48:01.320109 2102 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 11:48:01.320262 kubelet[2102]: E0129 11:48:01.320150 2102 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:48:01.320460 kubelet[2102]: I0129 11:48:01.320405 2102 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:48:01.320460 kubelet[2102]: I0129 11:48:01.320442 2102 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:48:01.321831 kubelet[2102]: W0129 11:48:01.321751 2102 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Jan 29 11:48:01.321831 kubelet[2102]: E0129 11:48:01.321798 2102 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:48:01.325583 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:48:01.340014 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 11:48:01.353801 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:48:01.355009 kubelet[2102]: I0129 11:48:01.354984 2102 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:48:01.355280 kubelet[2102]: I0129 11:48:01.355176 2102 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:48:01.355280 kubelet[2102]: I0129 11:48:01.355194 2102 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:48:01.355949 kubelet[2102]: I0129 11:48:01.355575 2102 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:48:01.358192 kubelet[2102]: E0129 11:48:01.358171 2102 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 11:48:01.428746 systemd[1]: Created slice kubepods-burstable-pod2438be0710a2eff3633ee4d0161bc172.slice - libcontainer container kubepods-burstable-pod2438be0710a2eff3633ee4d0161bc172.slice. Jan 29 11:48:01.449338 systemd[1]: Created slice kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice - libcontainer container kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice. Jan 29 11:48:01.453349 systemd[1]: Created slice kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice - libcontainer container kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice. 
Jan 29 11:48:01.456816 kubelet[2102]: I0129 11:48:01.456783 2102 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:48:01.457349 kubelet[2102]: E0129 11:48:01.457308 2102 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Jan 29 11:48:01.502688 kubelet[2102]: E0129 11:48:01.502576 2102 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="400ms" Jan 29 11:48:01.601812 kubelet[2102]: I0129 11:48:01.601762 2102 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2438be0710a2eff3633ee4d0161bc172-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2438be0710a2eff3633ee4d0161bc172\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:48:01.601812 kubelet[2102]: I0129 11:48:01.601805 2102 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:48:01.601812 kubelet[2102]: I0129 11:48:01.601826 2102 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:48:01.602044 kubelet[2102]: I0129 11:48:01.601842 2102 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:48:01.602044 kubelet[2102]: I0129 11:48:01.601863 2102 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:48:01.602044 kubelet[2102]: I0129 11:48:01.601877 2102 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2438be0710a2eff3633ee4d0161bc172-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2438be0710a2eff3633ee4d0161bc172\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:48:01.602044 kubelet[2102]: I0129 11:48:01.601895 2102 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:48:01.602044 kubelet[2102]: I0129 
11:48:01.601937 2102 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:48:01.602147 kubelet[2102]: I0129 11:48:01.601968 2102 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2438be0710a2eff3633ee4d0161bc172-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2438be0710a2eff3633ee4d0161bc172\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:48:01.659191 kubelet[2102]: I0129 11:48:01.659151 2102 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:48:01.659509 kubelet[2102]: E0129 11:48:01.659462 2102 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Jan 29 11:48:01.747086 kubelet[2102]: E0129 11:48:01.747005 2102 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:01.747695 containerd[1439]: time="2025-01-29T11:48:01.747648474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2438be0710a2eff3633ee4d0161bc172,Namespace:kube-system,Attempt:0,}" Jan 29 11:48:01.752159 kubelet[2102]: E0129 11:48:01.752139 2102 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:01.752498 containerd[1439]: time="2025-01-29T11:48:01.752469914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,}" Jan 29 11:48:01.755963 kubelet[2102]: E0129 11:48:01.755809 2102 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:01.761457 containerd[1439]: time="2025-01-29T11:48:01.760157188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,}" Jan 29 11:48:01.903673 kubelet[2102]: E0129 11:48:01.903630 2102 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="800ms" Jan 29 11:48:02.061494 kubelet[2102]: I0129 11:48:02.061366 2102 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:48:02.061726 kubelet[2102]: E0129 11:48:02.061700 2102 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Jan 29 11:48:02.323269 kubelet[2102]: W0129 11:48:02.323179 2102 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": 
dial tcp 10.0.0.26:6443: connect: connection refused Jan 29 11:48:02.323269 kubelet[2102]: E0129 11:48:02.323228 2102 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:48:02.323810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3403116257.mount: Deactivated successfully. Jan 29 11:48:02.328982 containerd[1439]: time="2025-01-29T11:48:02.328945600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:48:02.332466 containerd[1439]: time="2025-01-29T11:48:02.332405671Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:48:02.333076 containerd[1439]: time="2025-01-29T11:48:02.333035932Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:48:02.333903 containerd[1439]: time="2025-01-29T11:48:02.333874279Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 29 11:48:02.334529 containerd[1439]: time="2025-01-29T11:48:02.334491774Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:48:02.336443 containerd[1439]: time="2025-01-29T11:48:02.335900917Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:48:02.338077 containerd[1439]: time="2025-01-29T11:48:02.338043444Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:48:02.340563 containerd[1439]: time="2025-01-29T11:48:02.340532954Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 580.316298ms" Jan 29 11:48:02.341305 containerd[1439]: time="2025-01-29T11:48:02.341276781Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 593.543867ms" Jan 29 11:48:02.342473 containerd[1439]: time="2025-01-29T11:48:02.342394764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:48:02.344011 containerd[1439]: time="2025-01-29T11:48:02.343982181Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 591.441194ms" Jan 29 11:48:02.424022 kubelet[2102]: W0129 11:48:02.423955 2102 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Jan 29 11:48:02.424174 kubelet[2102]: E0129 11:48:02.424157 2102 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:48:02.488390 containerd[1439]: time="2025-01-29T11:48:02.487696719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:48:02.488390 containerd[1439]: time="2025-01-29T11:48:02.487750781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:48:02.488390 containerd[1439]: time="2025-01-29T11:48:02.487765267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:48:02.488390 containerd[1439]: time="2025-01-29T11:48:02.487879235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:48:02.489396 containerd[1439]: time="2025-01-29T11:48:02.489084173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:48:02.489396 containerd[1439]: time="2025-01-29T11:48:02.489129992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:48:02.489396 containerd[1439]: time="2025-01-29T11:48:02.489144958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:48:02.489640 containerd[1439]: time="2025-01-29T11:48:02.489254203Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:48:02.489640 containerd[1439]: time="2025-01-29T11:48:02.489290018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:48:02.489640 containerd[1439]: time="2025-01-29T11:48:02.489300383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:48:02.489640 containerd[1439]: time="2025-01-29T11:48:02.489363889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:48:02.489896 containerd[1439]: time="2025-01-29T11:48:02.489836244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:48:02.510576 systemd[1]: Started cri-containerd-16c8dcee8eb5e137eaefb6c7a992f735d2eb0fc65afb8dc74d1feed95cd4ab2d.scope - libcontainer container 16c8dcee8eb5e137eaefb6c7a992f735d2eb0fc65afb8dc74d1feed95cd4ab2d. Jan 29 11:48:02.514726 systemd[1]: Started cri-containerd-525beeaf37fa8d96d3e5d37b75712b7213ae4513ea2ab209d14032fd6c228b4c.scope - libcontainer container 525beeaf37fa8d96d3e5d37b75712b7213ae4513ea2ab209d14032fd6c228b4c. Jan 29 11:48:02.516077 systemd[1]: Started cri-containerd-bdc578d49cf2f889d9da6b5e59a636a62f19357b994301882ee015d54df7abb7.scope - libcontainer container bdc578d49cf2f889d9da6b5e59a636a62f19357b994301882ee015d54df7abb7. Jan 29 11:48:02.546613 containerd[1439]: time="2025-01-29T11:48:02.546569876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"525beeaf37fa8d96d3e5d37b75712b7213ae4513ea2ab209d14032fd6c228b4c\"" Jan 29 11:48:02.548345 kubelet[2102]: E0129 11:48:02.548154 2102 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:02.551731 containerd[1439]: time="2025-01-29T11:48:02.551695197Z" level=info msg="CreateContainer within sandbox \"525beeaf37fa8d96d3e5d37b75712b7213ae4513ea2ab209d14032fd6c228b4c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:48:02.554122 containerd[1439]: time="2025-01-29T11:48:02.554092829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2438be0710a2eff3633ee4d0161bc172,Namespace:kube-system,Attempt:0,} returns sandbox id \"16c8dcee8eb5e137eaefb6c7a992f735d2eb0fc65afb8dc74d1feed95cd4ab2d\"" Jan 29 11:48:02.554770 kubelet[2102]: E0129 11:48:02.554752 2102 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:02.556747 containerd[1439]: time="2025-01-29T11:48:02.556719716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdc578d49cf2f889d9da6b5e59a636a62f19357b994301882ee015d54df7abb7\"" Jan 29 11:48:02.557228 kubelet[2102]: E0129 11:48:02.557197 2102 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:02.557621 containerd[1439]: time="2025-01-29T11:48:02.557590956Z" level=info msg="CreateContainer within sandbox \"16c8dcee8eb5e137eaefb6c7a992f735d2eb0fc65afb8dc74d1feed95cd4ab2d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:48:02.558864 containerd[1439]: time="2025-01-29T11:48:02.558834270Z" level=info msg="CreateContainer within sandbox \"bdc578d49cf2f889d9da6b5e59a636a62f19357b994301882ee015d54df7abb7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:48:02.571944 containerd[1439]: time="2025-01-29T11:48:02.571793152Z" level=info msg="CreateContainer within sandbox \"525beeaf37fa8d96d3e5d37b75712b7213ae4513ea2ab209d14032fd6c228b4c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"72d07ec7eb5c0660e89593abfed7cf9e19e13bcc3ec22753671f401a204bb55b\"" Jan 
29 11:48:02.572633 containerd[1439]: time="2025-01-29T11:48:02.572607529Z" level=info msg="StartContainer for \"72d07ec7eb5c0660e89593abfed7cf9e19e13bcc3ec22753671f401a204bb55b\"" Jan 29 11:48:02.577473 containerd[1439]: time="2025-01-29T11:48:02.577355653Z" level=info msg="CreateContainer within sandbox \"16c8dcee8eb5e137eaefb6c7a992f735d2eb0fc65afb8dc74d1feed95cd4ab2d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f3fafa9cd471ce4a498c5ecd9639aa0e50dbaef73fdbb129b42a6a203b7a1144\"" Jan 29 11:48:02.580317 containerd[1439]: time="2025-01-29T11:48:02.578736304Z" level=info msg="StartContainer for \"f3fafa9cd471ce4a498c5ecd9639aa0e50dbaef73fdbb129b42a6a203b7a1144\"" Jan 29 11:48:02.584064 containerd[1439]: time="2025-01-29T11:48:02.584029654Z" level=info msg="CreateContainer within sandbox \"bdc578d49cf2f889d9da6b5e59a636a62f19357b994301882ee015d54df7abb7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"605ec04ffe87c8e3b06d630bcb0fecc0bdbb44c8960986ffed2eb4e6914f18c4\"" Jan 29 11:48:02.584654 containerd[1439]: time="2025-01-29T11:48:02.584620699Z" level=info msg="StartContainer for \"605ec04ffe87c8e3b06d630bcb0fecc0bdbb44c8960986ffed2eb4e6914f18c4\"" Jan 29 11:48:02.598568 systemd[1]: Started cri-containerd-72d07ec7eb5c0660e89593abfed7cf9e19e13bcc3ec22753671f401a204bb55b.scope - libcontainer container 72d07ec7eb5c0660e89593abfed7cf9e19e13bcc3ec22753671f401a204bb55b. Jan 29 11:48:02.601865 systemd[1]: Started cri-containerd-f3fafa9cd471ce4a498c5ecd9639aa0e50dbaef73fdbb129b42a6a203b7a1144.scope - libcontainer container f3fafa9cd471ce4a498c5ecd9639aa0e50dbaef73fdbb129b42a6a203b7a1144. Jan 29 11:48:02.609057 systemd[1]: Started cri-containerd-605ec04ffe87c8e3b06d630bcb0fecc0bdbb44c8960986ffed2eb4e6914f18c4.scope - libcontainer container 605ec04ffe87c8e3b06d630bcb0fecc0bdbb44c8960986ffed2eb4e6914f18c4. 
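These entries, completed by the StartContainer returns just below, trace the same CRI sequence for each of the three control-plane static pods: RunPodSandbox returns a sandbox id, CreateContainer runs within that sandbox and returns a container id, then StartContainer launches it. A toy model of that ordering, with illustrative names rather than the real CRI client:

sandboxes, containers = {}, {}

def run_pod_sandbox(pod_uid):
    sid = f"sandbox-for-{pod_uid[:8]}"
    sandboxes[sid] = pod_uid          # "RunPodSandbox ... returns sandbox id"
    return sid

def create_container(sid, name):
    assert sid in sandboxes           # a container always lives in a sandbox
    containers[name] = "created"      # "CreateContainer within sandbox ..."
    return name

def start_container(cid):
    containers[cid] = "running"       # "StartContainer ... returns successfully"

sid = run_pod_sandbox("fa5289f3c0ba7f1736282e713231ffc5")
start_container(create_container(sid, "kube-controller-manager"))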
Jan 29 11:48:02.652173 containerd[1439]: time="2025-01-29T11:48:02.652001216Z" level=info msg="StartContainer for \"72d07ec7eb5c0660e89593abfed7cf9e19e13bcc3ec22753671f401a204bb55b\" returns successfully" Jan 29 11:48:02.652173 containerd[1439]: time="2025-01-29T11:48:02.652108620Z" level=info msg="StartContainer for \"f3fafa9cd471ce4a498c5ecd9639aa0e50dbaef73fdbb129b42a6a203b7a1144\" returns successfully" Jan 29 11:48:02.672672 containerd[1439]: time="2025-01-29T11:48:02.672622788Z" level=info msg="StartContainer for \"605ec04ffe87c8e3b06d630bcb0fecc0bdbb44c8960986ffed2eb4e6914f18c4\" returns successfully" Jan 29 11:48:02.704921 kubelet[2102]: E0129 11:48:02.704874 2102 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="1.6s" Jan 29 11:48:02.864394 kubelet[2102]: I0129 11:48:02.863428 2102 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:48:03.335090 kubelet[2102]: E0129 11:48:03.334911 2102 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:03.336552 kubelet[2102]: E0129 11:48:03.336472 2102 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:03.340942 kubelet[2102]: E0129 11:48:03.340925 2102 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:04.342571 kubelet[2102]: E0129 11:48:04.342541 2102 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:04.789346 kubelet[2102]: E0129 11:48:04.789237 2102 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 29 11:48:04.875805 kubelet[2102]: I0129 11:48:04.875755 2102 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 29 11:48:04.875805 kubelet[2102]: E0129 11:48:04.875817 2102 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 29 11:48:05.290014 kubelet[2102]: I0129 11:48:05.289760 2102 apiserver.go:52] "Watching apiserver" Jan 29 11:48:05.301230 kubelet[2102]: I0129 11:48:05.300994 2102 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:48:05.667637 kubelet[2102]: E0129 11:48:05.667601 2102 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 29 11:48:05.667985 kubelet[2102]: E0129 11:48:05.667773 2102 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:06.836140 systemd[1]: Reloading requested from client PID 2380 ('systemctl') (unit session-7.scope)... Jan 29 11:48:06.836464 systemd[1]: Reloading... 
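The lease controller's retry interval doubles across these entries: 400ms, then 800ms, then 1.6s, since the apiserver it needs is itself one of the static pods still starting. A sketch of that doubling schedule (the 7s cap is an assumption, not visible in this log):

def backoff(base_ms: int = 400, factor: int = 2, cap_ms: int = 7000):
    d = base_ms
    while True:
        yield d
        d = min(d * factor, cap_ms)

g = backoff()
print(next(g), next(g), next(g))  # 400 800 1600 -- the intervals logged above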
Jan 29 11:48:06.898469 zram_generator::config[2422]: No configuration found. Jan 29 11:48:06.987895 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:48:07.052599 systemd[1]: Reloading finished in 215 ms. Jan 29 11:48:07.083150 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:48:07.092914 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:48:07.093090 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:48:07.093131 systemd[1]: kubelet.service: Consumed 1.299s CPU time, 119.1M memory peak, 0B memory swap peak. Jan 29 11:48:07.105179 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:48:07.205267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:48:07.210066 (kubelet)[2461]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:48:07.249718 kubelet[2461]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:48:07.249718 kubelet[2461]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:48:07.249718 kubelet[2461]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:48:07.250062 kubelet[2461]: I0129 11:48:07.249767 2461 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:48:07.257757 kubelet[2461]: I0129 11:48:07.257715 2461 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:48:07.257757 kubelet[2461]: I0129 11:48:07.257756 2461 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:48:07.258052 kubelet[2461]: I0129 11:48:07.258032 2461 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:48:07.260002 kubelet[2461]: I0129 11:48:07.259977 2461 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 11:48:07.262037 kubelet[2461]: I0129 11:48:07.261940 2461 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:48:07.265290 kubelet[2461]: E0129 11:48:07.265244 2461 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:48:07.265350 kubelet[2461]: I0129 11:48:07.265323 2461 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:48:07.267717 kubelet[2461]: I0129 11:48:07.267687 2461 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:48:07.267842 kubelet[2461]: I0129 11:48:07.267815 2461 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:48:07.267971 kubelet[2461]: I0129 11:48:07.267935 2461 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:48:07.268150 kubelet[2461]: I0129 11:48:07.267963 2461 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:48:07.268221 kubelet[2461]: I0129 11:48:07.268150 2461 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:48:07.268221 kubelet[2461]: I0129 11:48:07.268160 2461 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:48:07.268221 kubelet[2461]: I0129 11:48:07.268187 2461 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:48:07.268316 kubelet[2461]: I0129 11:48:07.268301 2461 kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:48:07.268342 kubelet[2461]: I0129 11:48:07.268321 2461 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:48:07.268342 kubelet[2461]: I0129 11:48:07.268341 2461 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:48:07.268400 kubelet[2461]: I0129 11:48:07.268362 2461 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:48:07.268886 kubelet[2461]: I0129 11:48:07.268864 2461 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 11:48:07.276440 kubelet[2461]: I0129 11:48:07.275584 2461 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:48:07.276440 kubelet[2461]: I0129 11:48:07.276073 2461 server.go:1269] "Started kubelet" Jan 29 11:48:07.279691 kubelet[2461]: I0129 11:48:07.279627 2461 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:48:07.280366 kubelet[2461]: I0129 
11:48:07.280334 2461 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:48:07.280866 kubelet[2461]: I0129 11:48:07.280810 2461 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:48:07.281132 kubelet[2461]: I0129 11:48:07.281114 2461 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:48:07.281235 kubelet[2461]: I0129 11:48:07.281116 2461 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:48:07.282394 kubelet[2461]: I0129 11:48:07.281160 2461 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:48:07.282560 kubelet[2461]: I0129 11:48:07.282389 2461 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:48:07.283524 kubelet[2461]: I0129 11:48:07.282398 2461 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:48:07.283819 kubelet[2461]: E0129 11:48:07.282435 2461 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:48:07.283981 kubelet[2461]: I0129 11:48:07.283953 2461 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:48:07.284970 kubelet[2461]: I0129 11:48:07.284941 2461 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:48:07.303961 kubelet[2461]: I0129 11:48:07.303571 2461 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:48:07.303961 kubelet[2461]: I0129 11:48:07.303722 2461 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:48:07.304273 kubelet[2461]: E0129 11:48:07.304250 2461 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:48:07.309693 kubelet[2461]: I0129 11:48:07.309657 2461 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:48:07.311894 kubelet[2461]: I0129 11:48:07.311510 2461 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:48:07.311894 kubelet[2461]: I0129 11:48:07.311539 2461 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:48:07.311894 kubelet[2461]: I0129 11:48:07.311556 2461 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 11:48:07.311894 kubelet[2461]: E0129 11:48:07.311600 2461 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:48:07.337152 kubelet[2461]: I0129 11:48:07.337106 2461 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:48:07.337152 kubelet[2461]: I0129 11:48:07.337130 2461 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:48:07.337152 kubelet[2461]: I0129 11:48:07.337151 2461 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:48:07.337368 kubelet[2461]: I0129 11:48:07.337332 2461 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:48:07.337405 kubelet[2461]: I0129 11:48:07.337350 2461 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:48:07.337405 kubelet[2461]: I0129 11:48:07.337394 2461 policy_none.go:49] "None policy: Start" Jan 29 11:48:07.340499 kubelet[2461]: I0129 11:48:07.340467 2461 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:48:07.340499 kubelet[2461]: I0129 11:48:07.340499 2461 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:48:07.340755 kubelet[2461]: I0129 11:48:07.340726 2461 state_mem.go:75] "Updated machine memory state" Jan 29 11:48:07.345157 kubelet[2461]: I0129 11:48:07.345084 2461 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:48:07.345244 kubelet[2461]: I0129 11:48:07.345228 2461 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:48:07.345281 kubelet[2461]: I0129 11:48:07.345243 2461 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:48:07.345663 kubelet[2461]: I0129 11:48:07.345646 2461 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:48:07.450278 kubelet[2461]: I0129 11:48:07.450149 2461 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:48:07.457240 kubelet[2461]: I0129 11:48:07.457209 2461 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jan 29 11:48:07.457355 kubelet[2461]: I0129 11:48:07.457293 2461 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 29 11:48:07.485089 kubelet[2461]: I0129 11:48:07.485046 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:48:07.485089 kubelet[2461]: I0129 11:48:07.485090 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:48:07.485224 kubelet[2461]: I0129 11:48:07.485129 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:48:07.485224 kubelet[2461]: I0129 11:48:07.485149 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:48:07.485224 kubelet[2461]: I0129 11:48:07.485169 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2438be0710a2eff3633ee4d0161bc172-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2438be0710a2eff3633ee4d0161bc172\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:48:07.485224 kubelet[2461]: I0129 11:48:07.485184 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2438be0710a2eff3633ee4d0161bc172-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2438be0710a2eff3633ee4d0161bc172\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:48:07.485322 kubelet[2461]: I0129 11:48:07.485237 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2438be0710a2eff3633ee4d0161bc172-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2438be0710a2eff3633ee4d0161bc172\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:48:07.485322 kubelet[2461]: I0129 11:48:07.485291 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:48:07.485322 kubelet[2461]: I0129 11:48:07.485308 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:48:07.718472 kubelet[2461]: E0129 11:48:07.718040 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:07.718472 kubelet[2461]: E0129 11:48:07.718100 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:07.718472 kubelet[2461]: E0129 11:48:07.718182 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:07.842113 sudo[2495]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 11:48:07.842396 sudo[2495]: pam_unix(sudo:session): 
session opened for user root(uid=0) by core(uid=0) Jan 29 11:48:08.270160 kubelet[2461]: I0129 11:48:08.268968 2461 apiserver.go:52] "Watching apiserver" Jan 29 11:48:08.281972 sudo[2495]: pam_unix(sudo:session): session closed for user root Jan 29 11:48:08.284666 kubelet[2461]: I0129 11:48:08.284631 2461 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:48:08.324455 kubelet[2461]: E0129 11:48:08.324211 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:08.324455 kubelet[2461]: E0129 11:48:08.324277 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:08.331439 kubelet[2461]: E0129 11:48:08.331075 2461 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 11:48:08.331439 kubelet[2461]: E0129 11:48:08.331234 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:08.354912 kubelet[2461]: I0129 11:48:08.354653 2461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.354636377 podStartE2EDuration="1.354636377s" podCreationTimestamp="2025-01-29 11:48:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:48:08.346853005 +0000 UTC m=+1.133813493" watchObservedRunningTime="2025-01-29 11:48:08.354636377 +0000 UTC m=+1.141596865" Jan 29 11:48:08.354912 kubelet[2461]: I0129 11:48:08.354793 2461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.354787332 podStartE2EDuration="1.354787332s" podCreationTimestamp="2025-01-29 11:48:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:48:08.354773969 +0000 UTC m=+1.141734457" watchObservedRunningTime="2025-01-29 11:48:08.354787332 +0000 UTC m=+1.141747780" Jan 29 11:48:08.361580 kubelet[2461]: I0129 11:48:08.361523 2461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.361511017 podStartE2EDuration="1.361511017s" podCreationTimestamp="2025-01-29 11:48:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:48:08.361033986 +0000 UTC m=+1.147994474" watchObservedRunningTime="2025-01-29 11:48:08.361511017 +0000 UTC m=+1.148471465" Jan 29 11:48:09.326043 kubelet[2461]: E0129 11:48:09.326003 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:10.327236 kubelet[2461]: E0129 11:48:10.327197 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:10.331209 sudo[1617]: pam_unix(sudo:session): session 
closed for user root Jan 29 11:48:10.332773 sshd[1614]: pam_unix(sshd:session): session closed for user core Jan 29 11:48:10.336508 systemd[1]: sshd@6-10.0.0.26:22-10.0.0.1:58440.service: Deactivated successfully. Jan 29 11:48:10.339251 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:48:10.339445 systemd[1]: session-7.scope: Consumed 8.134s CPU time, 151.0M memory peak, 0B memory swap peak. Jan 29 11:48:10.339977 systemd-logind[1425]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:48:10.341007 systemd-logind[1425]: Removed session 7. Jan 29 11:48:11.328142 kubelet[2461]: E0129 11:48:11.328085 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:12.289341 kubelet[2461]: I0129 11:48:12.289082 2461 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:48:12.289899 containerd[1439]: time="2025-01-29T11:48:12.289784751Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 11:48:12.290261 kubelet[2461]: I0129 11:48:12.290061 2461 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:48:12.483344 kubelet[2461]: E0129 11:48:12.483310 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:13.025576 kubelet[2461]: I0129 11:48:13.023843 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-xtables-lock\") pod \"cilium-8hwgp\" (UID: \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\") " pod="kube-system/cilium-8hwgp" Jan 29 11:48:13.025576 kubelet[2461]: I0129 11:48:13.023876 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eda44065-72f3-4f6d-89e1-a5ac5b93b54f-xtables-lock\") pod \"kube-proxy-t7s5d\" (UID: \"eda44065-72f3-4f6d-89e1-a5ac5b93b54f\") " pod="kube-system/kube-proxy-t7s5d" Jan 29 11:48:13.025576 kubelet[2461]: I0129 11:48:13.023895 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-hostproc\") pod \"cilium-8hwgp\" (UID: \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\") " pod="kube-system/cilium-8hwgp" Jan 29 11:48:13.025576 kubelet[2461]: I0129 11:48:13.023911 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-cni-path\") pod \"cilium-8hwgp\" (UID: \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\") " pod="kube-system/cilium-8hwgp" Jan 29 11:48:13.025576 kubelet[2461]: I0129 11:48:13.023925 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-etc-cni-netd\") pod \"cilium-8hwgp\" (UID: \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\") " pod="kube-system/cilium-8hwgp" Jan 29 11:48:13.025576 kubelet[2461]: I0129 11:48:13.023939 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eda44065-72f3-4f6d-89e1-a5ac5b93b54f-lib-modules\") pod \"kube-proxy-t7s5d\" (UID: \"eda44065-72f3-4f6d-89e1-a5ac5b93b54f\") " pod="kube-system/kube-proxy-t7s5d" Jan 29 11:48:13.027487 kubelet[2461]: I0129 11:48:13.023952 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9f2e5199-d72b-4dfe-a23f-f7425f64524d-hubble-tls\") pod \"cilium-8hwgp\" (UID: \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\") " pod="kube-system/cilium-8hwgp" Jan 29 11:48:13.027487 kubelet[2461]: I0129 11:48:13.023968 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-host-proc-sys-net\") pod \"cilium-8hwgp\" (UID: \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\") " pod="kube-system/cilium-8hwgp" Jan 29 11:48:13.027487 kubelet[2461]: I0129 11:48:13.023982 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eda44065-72f3-4f6d-89e1-a5ac5b93b54f-kube-proxy\") pod \"kube-proxy-t7s5d\" (UID: \"eda44065-72f3-4f6d-89e1-a5ac5b93b54f\") " pod="kube-system/kube-proxy-t7s5d" Jan 29 11:48:13.027487 kubelet[2461]: I0129 11:48:13.024256 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-lib-modules\") pod \"cilium-8hwgp\" (UID: \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\") " pod="kube-system/cilium-8hwgp" Jan 29 11:48:13.027487 kubelet[2461]: I0129 11:48:13.024286 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9f2e5199-d72b-4dfe-a23f-f7425f64524d-clustermesh-secrets\") pod \"cilium-8hwgp\" (UID: \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\") " pod="kube-system/cilium-8hwgp" Jan 29 11:48:13.027487 kubelet[2461]: I0129 11:48:13.024305 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-host-proc-sys-kernel\") pod \"cilium-8hwgp\" (UID: \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\") " pod="kube-system/cilium-8hwgp" Jan 29 11:48:13.027686 kubelet[2461]: I0129 11:48:13.024339 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4l7g\" (UniqueName: \"kubernetes.io/projected/eda44065-72f3-4f6d-89e1-a5ac5b93b54f-kube-api-access-n4l7g\") pod \"kube-proxy-t7s5d\" (UID: \"eda44065-72f3-4f6d-89e1-a5ac5b93b54f\") " pod="kube-system/kube-proxy-t7s5d" Jan 29 11:48:13.027686 kubelet[2461]: I0129 11:48:13.024359 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-cilium-cgroup\") pod \"cilium-8hwgp\" (UID: \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\") " pod="kube-system/cilium-8hwgp" Jan 29 11:48:13.027686 kubelet[2461]: I0129 11:48:13.024374 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bjgk\" (UniqueName: \"kubernetes.io/projected/9f2e5199-d72b-4dfe-a23f-f7425f64524d-kube-api-access-9bjgk\") pod 
\"cilium-8hwgp\" (UID: \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\") " pod="kube-system/cilium-8hwgp" Jan 29 11:48:13.027686 kubelet[2461]: I0129 11:48:13.024391 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-cilium-run\") pod \"cilium-8hwgp\" (UID: \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\") " pod="kube-system/cilium-8hwgp" Jan 29 11:48:13.027686 kubelet[2461]: I0129 11:48:13.024423 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-bpf-maps\") pod \"cilium-8hwgp\" (UID: \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\") " pod="kube-system/cilium-8hwgp" Jan 29 11:48:13.027807 kubelet[2461]: I0129 11:48:13.024442 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f2e5199-d72b-4dfe-a23f-f7425f64524d-cilium-config-path\") pod \"cilium-8hwgp\" (UID: \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\") " pod="kube-system/cilium-8hwgp" Jan 29 11:48:13.030799 systemd[1]: Created slice kubepods-besteffort-podeda44065_72f3_4f6d_89e1_a5ac5b93b54f.slice - libcontainer container kubepods-besteffort-podeda44065_72f3_4f6d_89e1_a5ac5b93b54f.slice. Jan 29 11:48:13.043609 systemd[1]: Created slice kubepods-burstable-pod9f2e5199_d72b_4dfe_a23f_f7425f64524d.slice - libcontainer container kubepods-burstable-pod9f2e5199_d72b_4dfe_a23f_f7425f64524d.slice. Jan 29 11:48:13.326655 kubelet[2461]: I0129 11:48:13.326617 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp45s\" (UniqueName: \"kubernetes.io/projected/daa9d09d-3698-4121-906a-62ee5d21bf1c-kube-api-access-hp45s\") pod \"cilium-operator-5d85765b45-j748w\" (UID: \"daa9d09d-3698-4121-906a-62ee5d21bf1c\") " pod="kube-system/cilium-operator-5d85765b45-j748w" Jan 29 11:48:13.326746 kubelet[2461]: I0129 11:48:13.326660 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/daa9d09d-3698-4121-906a-62ee5d21bf1c-cilium-config-path\") pod \"cilium-operator-5d85765b45-j748w\" (UID: \"daa9d09d-3698-4121-906a-62ee5d21bf1c\") " pod="kube-system/cilium-operator-5d85765b45-j748w" Jan 29 11:48:13.327926 systemd[1]: Created slice kubepods-besteffort-poddaa9d09d_3698_4121_906a_62ee5d21bf1c.slice - libcontainer container kubepods-besteffort-poddaa9d09d_3698_4121_906a_62ee5d21bf1c.slice. 
Jan 29 11:48:13.331005 kubelet[2461]: E0129 11:48:13.330966 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:13.341211 kubelet[2461]: E0129 11:48:13.340798 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:13.341642 containerd[1439]: time="2025-01-29T11:48:13.341598093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t7s5d,Uid:eda44065-72f3-4f6d-89e1-a5ac5b93b54f,Namespace:kube-system,Attempt:0,}" Jan 29 11:48:13.348244 kubelet[2461]: E0129 11:48:13.347622 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:13.348356 containerd[1439]: time="2025-01-29T11:48:13.347978100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8hwgp,Uid:9f2e5199-d72b-4dfe-a23f-f7425f64524d,Namespace:kube-system,Attempt:0,}" Jan 29 11:48:13.362191 containerd[1439]: time="2025-01-29T11:48:13.362096874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:48:13.362191 containerd[1439]: time="2025-01-29T11:48:13.362164966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:48:13.362191 containerd[1439]: time="2025-01-29T11:48:13.362176368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:48:13.362428 containerd[1439]: time="2025-01-29T11:48:13.362244780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:48:13.371126 containerd[1439]: time="2025-01-29T11:48:13.370979922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:48:13.371126 containerd[1439]: time="2025-01-29T11:48:13.371062537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:48:13.371126 containerd[1439]: time="2025-01-29T11:48:13.371074259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:48:13.371356 containerd[1439]: time="2025-01-29T11:48:13.371231487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:48:13.380584 systemd[1]: Started cri-containerd-9f6c81e6b9263b2e41e9ffbdf813c0441ec57dfd16e7ff933c916d27cec74c59.scope - libcontainer container 9f6c81e6b9263b2e41e9ffbdf813c0441ec57dfd16e7ff933c916d27cec74c59. Jan 29 11:48:13.383147 systemd[1]: Started cri-containerd-19d76a57bfc76bf6d0c281479d0fc73b04bb22b26624496011ae500446e8ca84.scope - libcontainer container 19d76a57bfc76bf6d0c281479d0fc73b04bb22b26624496011ae500446e8ca84. 
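The recurring dns.go:153 "Nameserver limits exceeded" entries come from the classic resolver's three-nameserver ceiling: the node's resolv.conf lists more servers than the limit, so the kubelet applies only the first three (1.1.1.1 1.0.0.1 8.8.8.8) and warns about the rest. A sketch of that truncation; the fourth nameserver below is hypothetical, since the log only shows the applied line:

MAX_NS = 3  # the resolver honours at most three nameservers

def applied_nameservers(resolv_conf: str):
    ns = [line.split()[1] for line in resolv_conf.splitlines()
          if line.startswith("nameserver")]
    return ns[:MAX_NS], ns[MAX_NS:]  # (applied, omitted)

conf = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
kept, dropped = applied_nameservers(conf)
print(" ".join(kept))  # 1.1.1.1 1.0.0.1 8.8.8.8 -- the line the kubelet applies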
Jan 29 11:48:13.403288 containerd[1439]: time="2025-01-29T11:48:13.403247061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t7s5d,Uid:eda44065-72f3-4f6d-89e1-a5ac5b93b54f,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f6c81e6b9263b2e41e9ffbdf813c0441ec57dfd16e7ff933c916d27cec74c59\"" Jan 29 11:48:13.404226 kubelet[2461]: E0129 11:48:13.404203 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:13.410230 containerd[1439]: time="2025-01-29T11:48:13.410181725Z" level=info msg="CreateContainer within sandbox \"9f6c81e6b9263b2e41e9ffbdf813c0441ec57dfd16e7ff933c916d27cec74c59\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:48:13.411182 containerd[1439]: time="2025-01-29T11:48:13.411157098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8hwgp,Uid:9f2e5199-d72b-4dfe-a23f-f7425f64524d,Namespace:kube-system,Attempt:0,} returns sandbox id \"19d76a57bfc76bf6d0c281479d0fc73b04bb22b26624496011ae500446e8ca84\"" Jan 29 11:48:13.412215 kubelet[2461]: E0129 11:48:13.412187 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:13.414145 containerd[1439]: time="2025-01-29T11:48:13.414111819Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 11:48:13.433923 containerd[1439]: time="2025-01-29T11:48:13.433864748Z" level=info msg="CreateContainer within sandbox \"9f6c81e6b9263b2e41e9ffbdf813c0441ec57dfd16e7ff933c916d27cec74c59\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b66cad619bca57720f985fc4d899ad08deb20e78c0ea7f959089468229113787\"" Jan 29 11:48:13.435565 containerd[1439]: time="2025-01-29T11:48:13.435514199Z" level=info msg="StartContainer for \"b66cad619bca57720f985fc4d899ad08deb20e78c0ea7f959089468229113787\"" Jan 29 11:48:13.462591 systemd[1]: Started cri-containerd-b66cad619bca57720f985fc4d899ad08deb20e78c0ea7f959089468229113787.scope - libcontainer container b66cad619bca57720f985fc4d899ad08deb20e78c0ea7f959089468229113787. Jan 29 11:48:13.488102 containerd[1439]: time="2025-01-29T11:48:13.488001028Z" level=info msg="StartContainer for \"b66cad619bca57720f985fc4d899ad08deb20e78c0ea7f959089468229113787\" returns successfully" Jan 29 11:48:13.625271 kubelet[2461]: E0129 11:48:13.624182 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:13.631137 kubelet[2461]: E0129 11:48:13.631112 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:13.631882 containerd[1439]: time="2025-01-29T11:48:13.631847792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-j748w,Uid:daa9d09d-3698-4121-906a-62ee5d21bf1c,Namespace:kube-system,Attempt:0,}" Jan 29 11:48:13.657119 containerd[1439]: time="2025-01-29T11:48:13.656790436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:48:13.657119 containerd[1439]: time="2025-01-29T11:48:13.656843526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:48:13.657119 containerd[1439]: time="2025-01-29T11:48:13.656854528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:48:13.657119 containerd[1439]: time="2025-01-29T11:48:13.657069806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:48:13.677560 systemd[1]: Started cri-containerd-f6e26935c2d4b5fe456c186e7df4499d5b6a9a5e2da23d25cb3a0d3fcd5d729b.scope - libcontainer container f6e26935c2d4b5fe456c186e7df4499d5b6a9a5e2da23d25cb3a0d3fcd5d729b. Jan 29 11:48:13.722629 containerd[1439]: time="2025-01-29T11:48:13.722561131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-j748w,Uid:daa9d09d-3698-4121-906a-62ee5d21bf1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6e26935c2d4b5fe456c186e7df4499d5b6a9a5e2da23d25cb3a0d3fcd5d729b\"" Jan 29 11:48:13.723423 kubelet[2461]: E0129 11:48:13.723390 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:14.338431 kubelet[2461]: E0129 11:48:14.336697 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:14.338431 kubelet[2461]: E0129 11:48:14.337664 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:14.352023 kubelet[2461]: I0129 11:48:14.351955 2461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t7s5d" podStartSLOduration=1.351928102 podStartE2EDuration="1.351928102s" podCreationTimestamp="2025-01-29 11:48:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:48:14.349717892 +0000 UTC m=+7.136678380" watchObservedRunningTime="2025-01-29 11:48:14.351928102 +0000 UTC m=+7.138888590" Jan 29 11:48:18.129500 update_engine[1427]: I20250129 11:48:18.129433 1427 update_attempter.cc:509] Updating boot flags... Jan 29 11:48:18.160449 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2837) Jan 29 11:48:18.209449 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2839) Jan 29 11:48:18.248499 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2839) Jan 29 11:48:19.382255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1145622742.mount: Deactivated successfully. 
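The pod_startup_latency_tracker figures above are straight subtraction: observedRunningTime minus podCreationTimestamp. Re-deriving the kube-proxy-t7s5d value from the two timestamps in the entry (datetime below carries microseconds, so the trailing nanosecond digits of 1.351928102s are truncated):

from datetime import datetime, timezone

created = datetime(2025, 1, 29, 11, 48, 13, tzinfo=timezone.utc)
running = datetime(2025, 1, 29, 11, 48, 14, 351928, tzinfo=timezone.utc)
print((running - created).total_seconds())  # 1.351928, matching podStartSLOduration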
Jan 29 11:48:20.664960 containerd[1439]: time="2025-01-29T11:48:20.664908242Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 29 11:48:20.667354 containerd[1439]: time="2025-01-29T11:48:20.667304377Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.253154752s" Jan 29 11:48:20.667354 containerd[1439]: time="2025-01-29T11:48:20.667342941Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 29 11:48:20.671511 containerd[1439]: time="2025-01-29T11:48:20.671479810Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 11:48:20.680147 containerd[1439]: time="2025-01-29T11:48:20.680107431Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:48:20.680956 containerd[1439]: time="2025-01-29T11:48:20.680931532Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:48:20.683237 containerd[1439]: time="2025-01-29T11:48:20.683185809Z" level=info msg="CreateContainer within sandbox \"19d76a57bfc76bf6d0c281479d0fc73b04bb22b26624496011ae500446e8ca84\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 11:48:20.709764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount521714707.mount: Deactivated successfully. Jan 29 11:48:20.722991 containerd[1439]: time="2025-01-29T11:48:20.722943658Z" level=info msg="CreateContainer within sandbox \"19d76a57bfc76bf6d0c281479d0fc73b04bb22b26624496011ae500446e8ca84\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dedddc61c29e4030925345991f12a2efbe24602aad413a38f84d17341261b30b\"" Jan 29 11:48:20.723447 containerd[1439]: time="2025-01-29T11:48:20.723400074Z" level=info msg="StartContainer for \"dedddc61c29e4030925345991f12a2efbe24602aad413a38f84d17341261b30b\"" Jan 29 11:48:20.754564 systemd[1]: Started cri-containerd-dedddc61c29e4030925345991f12a2efbe24602aad413a38f84d17341261b30b.scope - libcontainer container dedddc61c29e4030925345991f12a2efbe24602aad413a38f84d17341261b30b. Jan 29 11:48:20.822171 systemd[1]: cri-containerd-dedddc61c29e4030925345991f12a2efbe24602aad413a38f84d17341261b30b.scope: Deactivated successfully. 
Jan 29 11:48:20.828918 containerd[1439]: time="2025-01-29T11:48:20.828778390Z" level=info msg="StartContainer for \"dedddc61c29e4030925345991f12a2efbe24602aad413a38f84d17341261b30b\" returns successfully" Jan 29 11:48:20.876798 containerd[1439]: time="2025-01-29T11:48:20.876733166Z" level=info msg="shim disconnected" id=dedddc61c29e4030925345991f12a2efbe24602aad413a38f84d17341261b30b namespace=k8s.io Jan 29 11:48:20.876798 containerd[1439]: time="2025-01-29T11:48:20.876791973Z" level=warning msg="cleaning up after shim disconnected" id=dedddc61c29e4030925345991f12a2efbe24602aad413a38f84d17341261b30b namespace=k8s.io Jan 29 11:48:20.876798 containerd[1439]: time="2025-01-29T11:48:20.876801094Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:48:21.035580 kubelet[2461]: E0129 11:48:21.035037 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:21.353455 kubelet[2461]: E0129 11:48:21.353394 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:21.353604 kubelet[2461]: E0129 11:48:21.353493 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:21.355763 containerd[1439]: time="2025-01-29T11:48:21.355692174Z" level=info msg="CreateContainer within sandbox \"19d76a57bfc76bf6d0c281479d0fc73b04bb22b26624496011ae500446e8ca84\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 11:48:21.376223 containerd[1439]: time="2025-01-29T11:48:21.376177372Z" level=info msg="CreateContainer within sandbox \"19d76a57bfc76bf6d0c281479d0fc73b04bb22b26624496011ae500446e8ca84\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"47cf8a514f62c05c7a9886f0b3198ee4fb32cff190f851db39fb8a63c862f684\"" Jan 29 11:48:21.377366 containerd[1439]: time="2025-01-29T11:48:21.376672350Z" level=info msg="StartContainer for \"47cf8a514f62c05c7a9886f0b3198ee4fb32cff190f851db39fb8a63c862f684\"" Jan 29 11:48:21.402626 systemd[1]: Started cri-containerd-47cf8a514f62c05c7a9886f0b3198ee4fb32cff190f851db39fb8a63c862f684.scope - libcontainer container 47cf8a514f62c05c7a9886f0b3198ee4fb32cff190f851db39fb8a63c862f684. Jan 29 11:48:21.422205 containerd[1439]: time="2025-01-29T11:48:21.422098149Z" level=info msg="StartContainer for \"47cf8a514f62c05c7a9886f0b3198ee4fb32cff190f851db39fb8a63c862f684\" returns successfully" Jan 29 11:48:21.442521 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:48:21.443029 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:48:21.443309 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:48:21.449709 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:48:21.450287 systemd[1]: cri-containerd-47cf8a514f62c05c7a9886f0b3198ee4fb32cff190f851db39fb8a63c862f684.scope: Deactivated successfully. Jan 29 11:48:21.463700 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 29 11:48:21.483201 containerd[1439]: time="2025-01-29T11:48:21.482931231Z" level=info msg="shim disconnected" id=47cf8a514f62c05c7a9886f0b3198ee4fb32cff190f851db39fb8a63c862f684 namespace=k8s.io Jan 29 11:48:21.483201 containerd[1439]: time="2025-01-29T11:48:21.482981517Z" level=warning msg="cleaning up after shim disconnected" id=47cf8a514f62c05c7a9886f0b3198ee4fb32cff190f851db39fb8a63c862f684 namespace=k8s.io Jan 29 11:48:21.483201 containerd[1439]: time="2025-01-29T11:48:21.482990918Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:48:21.707704 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dedddc61c29e4030925345991f12a2efbe24602aad413a38f84d17341261b30b-rootfs.mount: Deactivated successfully. Jan 29 11:48:22.289700 containerd[1439]: time="2025-01-29T11:48:22.288904717Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:48:22.290870 containerd[1439]: time="2025-01-29T11:48:22.290793207Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 29 11:48:22.292272 containerd[1439]: time="2025-01-29T11:48:22.291488205Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:48:22.293459 containerd[1439]: time="2025-01-29T11:48:22.293426661Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.621901806s" Jan 29 11:48:22.293570 containerd[1439]: time="2025-01-29T11:48:22.293552595Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 29 11:48:22.296783 containerd[1439]: time="2025-01-29T11:48:22.296751712Z" level=info msg="CreateContainer within sandbox \"f6e26935c2d4b5fe456c186e7df4499d5b6a9a5e2da23d25cb3a0d3fcd5d729b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 11:48:22.305723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2465787246.mount: Deactivated successfully. Jan 29 11:48:22.308149 containerd[1439]: time="2025-01-29T11:48:22.308025810Z" level=info msg="CreateContainer within sandbox \"f6e26935c2d4b5fe456c186e7df4499d5b6a9a5e2da23d25cb3a0d3fcd5d729b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5b28889da3ada586fcf3c94a0ad80b054d98b67992b83f5ac725551b29dfa08d\"" Jan 29 11:48:22.308541 containerd[1439]: time="2025-01-29T11:48:22.308516745Z" level=info msg="StartContainer for \"5b28889da3ada586fcf3c94a0ad80b054d98b67992b83f5ac725551b29dfa08d\"" Jan 29 11:48:22.332589 systemd[1]: Started cri-containerd-5b28889da3ada586fcf3c94a0ad80b054d98b67992b83f5ac725551b29dfa08d.scope - libcontainer container 5b28889da3ada586fcf3c94a0ad80b054d98b67992b83f5ac725551b29dfa08d. 
Jan 29 11:48:22.354629 containerd[1439]: time="2025-01-29T11:48:22.354405585Z" level=info msg="StartContainer for \"5b28889da3ada586fcf3c94a0ad80b054d98b67992b83f5ac725551b29dfa08d\" returns successfully" Jan 29 11:48:22.360515 kubelet[2461]: E0129 11:48:22.360321 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:22.366965 kubelet[2461]: E0129 11:48:22.366818 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:22.367151 containerd[1439]: time="2025-01-29T11:48:22.367108722Z" level=info msg="CreateContainer within sandbox \"19d76a57bfc76bf6d0c281479d0fc73b04bb22b26624496011ae500446e8ca84\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 11:48:22.390643 containerd[1439]: time="2025-01-29T11:48:22.390257865Z" level=info msg="CreateContainer within sandbox \"19d76a57bfc76bf6d0c281479d0fc73b04bb22b26624496011ae500446e8ca84\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5c511cdf6338a543d1f3c9ed3efebf145405a49347ae940f2c43bbb0275cf080\"" Jan 29 11:48:22.393454 containerd[1439]: time="2025-01-29T11:48:22.393085420Z" level=info msg="StartContainer for \"5c511cdf6338a543d1f3c9ed3efebf145405a49347ae940f2c43bbb0275cf080\"" Jan 29 11:48:22.426562 systemd[1]: Started cri-containerd-5c511cdf6338a543d1f3c9ed3efebf145405a49347ae940f2c43bbb0275cf080.scope - libcontainer container 5c511cdf6338a543d1f3c9ed3efebf145405a49347ae940f2c43bbb0275cf080. Jan 29 11:48:22.457485 containerd[1439]: time="2025-01-29T11:48:22.455696326Z" level=info msg="StartContainer for \"5c511cdf6338a543d1f3c9ed3efebf145405a49347ae940f2c43bbb0275cf080\" returns successfully" Jan 29 11:48:22.463629 systemd[1]: cri-containerd-5c511cdf6338a543d1f3c9ed3efebf145405a49347ae940f2c43bbb0275cf080.scope: Deactivated successfully. Jan 29 11:48:22.559555 containerd[1439]: time="2025-01-29T11:48:22.559388175Z" level=info msg="shim disconnected" id=5c511cdf6338a543d1f3c9ed3efebf145405a49347ae940f2c43bbb0275cf080 namespace=k8s.io Jan 29 11:48:22.559555 containerd[1439]: time="2025-01-29T11:48:22.559472345Z" level=warning msg="cleaning up after shim disconnected" id=5c511cdf6338a543d1f3c9ed3efebf145405a49347ae940f2c43bbb0275cf080 namespace=k8s.io Jan 29 11:48:22.559555 containerd[1439]: time="2025-01-29T11:48:22.559482426Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:48:23.375702 kubelet[2461]: E0129 11:48:23.374894 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:23.375702 kubelet[2461]: E0129 11:48:23.375048 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:23.379168 containerd[1439]: time="2025-01-29T11:48:23.379119086Z" level=info msg="CreateContainer within sandbox \"19d76a57bfc76bf6d0c281479d0fc73b04bb22b26624496011ae500446e8ca84\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 11:48:23.394836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1677820667.mount: Deactivated successfully. 
Jan 29 11:48:23.395317 containerd[1439]: time="2025-01-29T11:48:23.394932209Z" level=info msg="CreateContainer within sandbox \"19d76a57bfc76bf6d0c281479d0fc73b04bb22b26624496011ae500446e8ca84\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e3dbee3d15a77f2b345e9dad7613bb85f7bb7b4c17449af8252bba756fc23f1d\"" Jan 29 11:48:23.398574 containerd[1439]: time="2025-01-29T11:48:23.396760403Z" level=info msg="StartContainer for \"e3dbee3d15a77f2b345e9dad7613bb85f7bb7b4c17449af8252bba756fc23f1d\"" Jan 29 11:48:23.398719 kubelet[2461]: I0129 11:48:23.397045 2461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-j748w" podStartSLOduration=1.826826834 podStartE2EDuration="10.397026392s" podCreationTimestamp="2025-01-29 11:48:13 +0000 UTC" firstStartedPulling="2025-01-29 11:48:13.724053955 +0000 UTC m=+6.511014443" lastFinishedPulling="2025-01-29 11:48:22.294253513 +0000 UTC m=+15.081214001" observedRunningTime="2025-01-29 11:48:22.392254008 +0000 UTC m=+15.179214496" watchObservedRunningTime="2025-01-29 11:48:23.397026392 +0000 UTC m=+16.183986880" Jan 29 11:48:23.425625 systemd[1]: Started cri-containerd-e3dbee3d15a77f2b345e9dad7613bb85f7bb7b4c17449af8252bba756fc23f1d.scope - libcontainer container e3dbee3d15a77f2b345e9dad7613bb85f7bb7b4c17449af8252bba756fc23f1d. Jan 29 11:48:23.444579 systemd[1]: cri-containerd-e3dbee3d15a77f2b345e9dad7613bb85f7bb7b4c17449af8252bba756fc23f1d.scope: Deactivated successfully. Jan 29 11:48:23.449351 containerd[1439]: time="2025-01-29T11:48:23.449292913Z" level=info msg="StartContainer for \"e3dbee3d15a77f2b345e9dad7613bb85f7bb7b4c17449af8252bba756fc23f1d\" returns successfully" Jan 29 11:48:23.466299 containerd[1439]: time="2025-01-29T11:48:23.466241117Z" level=info msg="shim disconnected" id=e3dbee3d15a77f2b345e9dad7613bb85f7bb7b4c17449af8252bba756fc23f1d namespace=k8s.io Jan 29 11:48:23.466299 containerd[1439]: time="2025-01-29T11:48:23.466296843Z" level=warning msg="cleaning up after shim disconnected" id=e3dbee3d15a77f2b345e9dad7613bb85f7bb7b4c17449af8252bba756fc23f1d namespace=k8s.io Jan 29 11:48:23.466299 containerd[1439]: time="2025-01-29T11:48:23.466305804Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:48:23.712693 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3dbee3d15a77f2b345e9dad7613bb85f7bb7b4c17449af8252bba756fc23f1d-rootfs.mount: Deactivated successfully. Jan 29 11:48:24.379005 kubelet[2461]: E0129 11:48:24.378787 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:24.382755 containerd[1439]: time="2025-01-29T11:48:24.382550148Z" level=info msg="CreateContainer within sandbox \"19d76a57bfc76bf6d0c281479d0fc73b04bb22b26624496011ae500446e8ca84\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 11:48:24.406670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4087047364.mount: Deactivated successfully. 
Jan 29 11:48:24.407707 containerd[1439]: time="2025-01-29T11:48:24.407658338Z" level=info msg="CreateContainer within sandbox \"19d76a57bfc76bf6d0c281479d0fc73b04bb22b26624496011ae500446e8ca84\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b1382dc098ca546ff682fc04d2293dcdec131dd3c53e1830822e36efc43c691a\"" Jan 29 11:48:24.408190 containerd[1439]: time="2025-01-29T11:48:24.408166230Z" level=info msg="StartContainer for \"b1382dc098ca546ff682fc04d2293dcdec131dd3c53e1830822e36efc43c691a\"" Jan 29 11:48:24.434620 systemd[1]: Started cri-containerd-b1382dc098ca546ff682fc04d2293dcdec131dd3c53e1830822e36efc43c691a.scope - libcontainer container b1382dc098ca546ff682fc04d2293dcdec131dd3c53e1830822e36efc43c691a. Jan 29 11:48:24.461536 containerd[1439]: time="2025-01-29T11:48:24.460890345Z" level=info msg="StartContainer for \"b1382dc098ca546ff682fc04d2293dcdec131dd3c53e1830822e36efc43c691a\" returns successfully" Jan 29 11:48:24.598100 kubelet[2461]: I0129 11:48:24.598057 2461 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 11:48:24.678603 systemd[1]: Created slice kubepods-burstable-pod629e5c71_32da_4b48_84fc_84b75a533e6f.slice - libcontainer container kubepods-burstable-pod629e5c71_32da_4b48_84fc_84b75a533e6f.slice. Jan 29 11:48:24.687621 systemd[1]: Created slice kubepods-burstable-podfb12d92a_c125_4415_87b8_9e921473eb39.slice - libcontainer container kubepods-burstable-podfb12d92a_c125_4415_87b8_9e921473eb39.slice. Jan 29 11:48:24.704290 kubelet[2461]: I0129 11:48:24.703961 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/629e5c71-32da-4b48-84fc-84b75a533e6f-config-volume\") pod \"coredns-6f6b679f8f-4phld\" (UID: \"629e5c71-32da-4b48-84fc-84b75a533e6f\") " pod="kube-system/coredns-6f6b679f8f-4phld" Jan 29 11:48:24.704290 kubelet[2461]: I0129 11:48:24.704009 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb7qh\" (UniqueName: \"kubernetes.io/projected/629e5c71-32da-4b48-84fc-84b75a533e6f-kube-api-access-fb7qh\") pod \"coredns-6f6b679f8f-4phld\" (UID: \"629e5c71-32da-4b48-84fc-84b75a533e6f\") " pod="kube-system/coredns-6f6b679f8f-4phld" Jan 29 11:48:24.704290 kubelet[2461]: I0129 11:48:24.704033 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khp5l\" (UniqueName: \"kubernetes.io/projected/fb12d92a-c125-4415-87b8-9e921473eb39-kube-api-access-khp5l\") pod \"coredns-6f6b679f8f-6sd9d\" (UID: \"fb12d92a-c125-4415-87b8-9e921473eb39\") " pod="kube-system/coredns-6f6b679f8f-6sd9d" Jan 29 11:48:24.704290 kubelet[2461]: I0129 11:48:24.704049 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb12d92a-c125-4415-87b8-9e921473eb39-config-volume\") pod \"coredns-6f6b679f8f-6sd9d\" (UID: \"fb12d92a-c125-4415-87b8-9e921473eb39\") " pod="kube-system/coredns-6f6b679f8f-6sd9d" Jan 29 11:48:24.981766 kubelet[2461]: E0129 11:48:24.981665 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:48:24.983073 containerd[1439]: time="2025-01-29T11:48:24.983034742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4phld,Uid:629e5c71-32da-4b48-84fc-84b75a533e6f,Namespace:kube-system,Attempt:0,}" Jan 29 11:48:24.996374 kubelet[2461]: E0129 11:48:24.996268 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:24.997057 containerd[1439]: time="2025-01-29T11:48:24.996898470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6sd9d,Uid:fb12d92a-c125-4415-87b8-9e921473eb39,Namespace:kube-system,Attempt:0,}" Jan 29 11:48:25.384536 kubelet[2461]: E0129 11:48:25.384241 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:26.385461 kubelet[2461]: E0129 11:48:26.385357 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:26.718081 systemd-networkd[1383]: cilium_host: Link UP Jan 29 11:48:26.718198 systemd-networkd[1383]: cilium_net: Link UP Jan 29 11:48:26.719887 systemd-networkd[1383]: cilium_net: Gained carrier Jan 29 11:48:26.720130 systemd-networkd[1383]: cilium_host: Gained carrier Jan 29 11:48:26.720372 systemd-networkd[1383]: cilium_net: Gained IPv6LL Jan 29 11:48:26.721024 systemd-networkd[1383]: cilium_host: Gained IPv6LL Jan 29 11:48:26.829631 systemd-networkd[1383]: cilium_vxlan: Link UP Jan 29 11:48:26.830173 systemd-networkd[1383]: cilium_vxlan: Gained carrier Jan 29 11:48:27.144603 kernel: NET: Registered PF_ALG protocol family Jan 29 11:48:27.386868 kubelet[2461]: E0129 11:48:27.386679 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:27.722629 systemd-networkd[1383]: lxc_health: Link UP Jan 29 11:48:27.734965 systemd-networkd[1383]: lxc_health: Gained carrier Jan 29 11:48:28.097887 systemd-networkd[1383]: lxc6031c01e5334: Link UP Jan 29 11:48:28.105593 systemd-networkd[1383]: lxca0a65726d096: Link UP Jan 29 11:48:28.120467 kernel: eth0: renamed from tmpd4799 Jan 29 11:48:28.128461 kernel: eth0: renamed from tmp66143 Jan 29 11:48:28.141457 systemd-networkd[1383]: lxca0a65726d096: Gained carrier Jan 29 11:48:28.143156 systemd-networkd[1383]: lxc6031c01e5334: Gained carrier Jan 29 11:48:28.491711 systemd-networkd[1383]: cilium_vxlan: Gained IPv6LL Jan 29 11:48:29.375104 kubelet[2461]: E0129 11:48:29.374973 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:29.394215 kubelet[2461]: E0129 11:48:29.393998 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:48:29.403115 kubelet[2461]: I0129 11:48:29.403050 2461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8hwgp" podStartSLOduration=9.145376885 podStartE2EDuration="16.403034458s" podCreationTimestamp="2025-01-29 11:48:13 +0000 UTC" firstStartedPulling="2025-01-29 11:48:13.413630894 +0000 UTC m=+6.200591382" lastFinishedPulling="2025-01-29 11:48:20.671288467 +0000 UTC m=+13.458248955" observedRunningTime="2025-01-29 11:48:25.403096187 +0000 UTC m=+18.190056755" watchObservedRunningTime="2025-01-29 11:48:29.403034458 +0000 UTC m=+22.189994946" Jan 29 11:48:29.515670 systemd-networkd[1383]: lxca0a65726d096: Gained IPv6LL Jan 29 11:48:29.579559 systemd-networkd[1383]: lxc_health: Gained IPv6LL Jan 29 11:48:30.156654 systemd-networkd[1383]: lxc6031c01e5334: Gained IPv6LL Jan 29 11:48:30.395871 kubelet[2461]: E0129 11:48:30.395690 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:31.678951 containerd[1439]: time="2025-01-29T11:48:31.678778822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:48:31.678951 containerd[1439]: time="2025-01-29T11:48:31.678849867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:48:31.678951 containerd[1439]: time="2025-01-29T11:48:31.678886150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:48:31.681459 containerd[1439]: time="2025-01-29T11:48:31.679791978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:48:31.684546 containerd[1439]: time="2025-01-29T11:48:31.683975732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:48:31.684546 containerd[1439]: time="2025-01-29T11:48:31.684035057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:48:31.684546 containerd[1439]: time="2025-01-29T11:48:31.684061579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:48:31.684546 containerd[1439]: time="2025-01-29T11:48:31.684138184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:48:31.708578 systemd[1]: Started cri-containerd-66143b18c1e24c6a6aa6ffc5f402a21451c0d400246b48c715878e85f5ce364a.scope - libcontainer container 66143b18c1e24c6a6aa6ffc5f402a21451c0d400246b48c715878e85f5ce364a. Jan 29 11:48:31.709720 systemd[1]: Started cri-containerd-d4799e2ad3431467928d2643f3972ee0265044ec5cd7c3621d15c38af2020b00.scope - libcontainer container d4799e2ad3431467928d2643f3972ee0265044ec5cd7c3621d15c38af2020b00.
Jan 29 11:48:31.721011 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:48:31.726782 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:48:31.743759 containerd[1439]: time="2025-01-29T11:48:31.743642459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4phld,Uid:629e5c71-32da-4b48-84fc-84b75a533e6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"66143b18c1e24c6a6aa6ffc5f402a21451c0d400246b48c715878e85f5ce364a\"" Jan 29 11:48:31.744519 kubelet[2461]: E0129 11:48:31.744488 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:31.747984 containerd[1439]: time="2025-01-29T11:48:31.747952583Z" level=info msg="CreateContainer within sandbox \"66143b18c1e24c6a6aa6ffc5f402a21451c0d400246b48c715878e85f5ce364a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:48:31.748551 containerd[1439]: time="2025-01-29T11:48:31.748527786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6sd9d,Uid:fb12d92a-c125-4415-87b8-9e921473eb39,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4799e2ad3431467928d2643f3972ee0265044ec5cd7c3621d15c38af2020b00\"" Jan 29 11:48:31.749115 kubelet[2461]: E0129 11:48:31.749098 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:31.751270 containerd[1439]: time="2025-01-29T11:48:31.751152223Z" level=info msg="CreateContainer within sandbox \"d4799e2ad3431467928d2643f3972ee0265044ec5cd7c3621d15c38af2020b00\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:48:31.768318 containerd[1439]: time="2025-01-29T11:48:31.768272071Z" level=info msg="CreateContainer within sandbox \"d4799e2ad3431467928d2643f3972ee0265044ec5cd7c3621d15c38af2020b00\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"17d3182a9842a7897b5302cae22fe739af77a862cc35aab06d82e0fa2ec81dcd\"" Jan 29 11:48:31.769172 containerd[1439]: time="2025-01-29T11:48:31.769139376Z" level=info msg="CreateContainer within sandbox \"66143b18c1e24c6a6aa6ffc5f402a21451c0d400246b48c715878e85f5ce364a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9b130390057977860f0098a2cf11a7efb4026785a05fd2eae7daba4ea5b92181\"" Jan 29 11:48:31.769600 containerd[1439]: time="2025-01-29T11:48:31.769535886Z" level=info msg="StartContainer for \"17d3182a9842a7897b5302cae22fe739af77a862cc35aab06d82e0fa2ec81dcd\"" Jan 29 11:48:31.769657 containerd[1439]: time="2025-01-29T11:48:31.769550527Z" level=info msg="StartContainer for \"9b130390057977860f0098a2cf11a7efb4026785a05fd2eae7daba4ea5b92181\"" Jan 29 11:48:31.813564 systemd[1]: Started cri-containerd-17d3182a9842a7897b5302cae22fe739af77a862cc35aab06d82e0fa2ec81dcd.scope - libcontainer container 17d3182a9842a7897b5302cae22fe739af77a862cc35aab06d82e0fa2ec81dcd. Jan 29 11:48:31.815495 systemd[1]: Started cri-containerd-9b130390057977860f0098a2cf11a7efb4026785a05fd2eae7daba4ea5b92181.scope - libcontainer container 9b130390057977860f0098a2cf11a7efb4026785a05fd2eae7daba4ea5b92181. 
Jan 29 11:48:31.840478 containerd[1439]: time="2025-01-29T11:48:31.840437537Z" level=info msg="StartContainer for \"17d3182a9842a7897b5302cae22fe739af77a862cc35aab06d82e0fa2ec81dcd\" returns successfully" Jan 29 11:48:31.856456 containerd[1439]: time="2025-01-29T11:48:31.856166560Z" level=info msg="StartContainer for \"9b130390057977860f0098a2cf11a7efb4026785a05fd2eae7daba4ea5b92181\" returns successfully" Jan 29 11:48:32.402221 kubelet[2461]: E0129 11:48:32.402190 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:32.405013 kubelet[2461]: E0129 11:48:32.404961 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:32.424212 kubelet[2461]: I0129 11:48:32.423842 2461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-4phld" podStartSLOduration=19.423818597 podStartE2EDuration="19.423818597s" podCreationTimestamp="2025-01-29 11:48:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:48:32.423218073 +0000 UTC m=+25.210178561" watchObservedRunningTime="2025-01-29 11:48:32.423818597 +0000 UTC m=+25.210779085" Jan 29 11:48:32.424212 kubelet[2461]: I0129 11:48:32.423959 2461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-6sd9d" podStartSLOduration=19.423952566 podStartE2EDuration="19.423952566s" podCreationTimestamp="2025-01-29 11:48:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:48:32.413163146 +0000 UTC m=+25.200123634" watchObservedRunningTime="2025-01-29 11:48:32.423952566 +0000 UTC m=+25.210913054" Jan 29 11:48:33.406542 kubelet[2461]: E0129 11:48:33.406505 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:33.406892 kubelet[2461]: E0129 11:48:33.406633 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:34.408154 kubelet[2461]: E0129 11:48:34.408105 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:34.408541 kubelet[2461]: E0129 11:48:34.408403 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:48:35.993965 systemd[1]: Started sshd@7-10.0.0.26:22-10.0.0.1:41092.service - OpenSSH per-connection server daemon (10.0.0.1:41092). Jan 29 11:48:36.028606 sshd[3872]: Accepted publickey for core from 10.0.0.1 port 41092 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:48:36.030010 sshd[3872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:48:36.033937 systemd-logind[1425]: New session 8 of user core. Jan 29 11:48:36.041618 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 29 11:48:36.166527 sshd[3872]: pam_unix(sshd:session): session closed for user core Jan 29 11:48:36.169773 systemd[1]: sshd@7-10.0.0.26:22-10.0.0.1:41092.service: Deactivated successfully. Jan 29 11:48:36.171526 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:48:36.172111 systemd-logind[1425]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:48:36.172936 systemd-logind[1425]: Removed session 8. Jan 29 11:48:41.176960 systemd[1]: Started sshd@8-10.0.0.26:22-10.0.0.1:41102.service - OpenSSH per-connection server daemon (10.0.0.1:41102). Jan 29 11:48:41.210461 sshd[3889]: Accepted publickey for core from 10.0.0.1 port 41102 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:48:41.211271 sshd[3889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:48:41.214986 systemd-logind[1425]: New session 9 of user core. Jan 29 11:48:41.225560 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:48:41.333737 sshd[3889]: pam_unix(sshd:session): session closed for user core Jan 29 11:48:41.337029 systemd[1]: sshd@8-10.0.0.26:22-10.0.0.1:41102.service: Deactivated successfully. Jan 29 11:48:41.338749 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:48:41.340573 systemd-logind[1425]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:48:41.341384 systemd-logind[1425]: Removed session 9. Jan 29 11:48:46.344835 systemd[1]: Started sshd@9-10.0.0.26:22-10.0.0.1:35766.service - OpenSSH per-connection server daemon (10.0.0.1:35766). Jan 29 11:48:46.378635 sshd[3906]: Accepted publickey for core from 10.0.0.1 port 35766 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:48:46.379781 sshd[3906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:48:46.383671 systemd-logind[1425]: New session 10 of user core. Jan 29 11:48:46.389565 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 11:48:46.493021 sshd[3906]: pam_unix(sshd:session): session closed for user core Jan 29 11:48:46.496196 systemd[1]: sshd@9-10.0.0.26:22-10.0.0.1:35766.service: Deactivated successfully. Jan 29 11:48:46.497834 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:48:46.499179 systemd-logind[1425]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:48:46.499945 systemd-logind[1425]: Removed session 10. Jan 29 11:48:51.508472 systemd[1]: Started sshd@10-10.0.0.26:22-10.0.0.1:35774.service - OpenSSH per-connection server daemon (10.0.0.1:35774). Jan 29 11:48:51.540805 sshd[3921]: Accepted publickey for core from 10.0.0.1 port 35774 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:48:51.542106 sshd[3921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:48:51.545988 systemd-logind[1425]: New session 11 of user core. Jan 29 11:48:51.552554 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 11:48:51.657823 sshd[3921]: pam_unix(sshd:session): session closed for user core Jan 29 11:48:51.666822 systemd[1]: sshd@10-10.0.0.26:22-10.0.0.1:35774.service: Deactivated successfully. Jan 29 11:48:51.668346 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:48:51.669681 systemd-logind[1425]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:48:51.670845 systemd[1]: Started sshd@11-10.0.0.26:22-10.0.0.1:35784.service - OpenSSH per-connection server daemon (10.0.0.1:35784). 
Jan 29 11:48:51.672109 systemd-logind[1425]: Removed session 11. Jan 29 11:48:51.706320 sshd[3936]: Accepted publickey for core from 10.0.0.1 port 35784 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:48:51.707601 sshd[3936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:48:51.714057 systemd-logind[1425]: New session 12 of user core. Jan 29 11:48:51.719570 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 11:48:51.884470 sshd[3936]: pam_unix(sshd:session): session closed for user core Jan 29 11:48:51.904578 systemd[1]: sshd@11-10.0.0.26:22-10.0.0.1:35784.service: Deactivated successfully. Jan 29 11:48:51.906626 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:48:51.909087 systemd-logind[1425]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:48:51.915720 systemd[1]: Started sshd@12-10.0.0.26:22-10.0.0.1:35792.service - OpenSSH per-connection server daemon (10.0.0.1:35792). Jan 29 11:48:51.916476 systemd-logind[1425]: Removed session 12. Jan 29 11:48:51.947273 sshd[3949]: Accepted publickey for core from 10.0.0.1 port 35792 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:48:51.948733 sshd[3949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:48:51.953080 systemd-logind[1425]: New session 13 of user core. Jan 29 11:48:51.968594 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 11:48:52.077351 sshd[3949]: pam_unix(sshd:session): session closed for user core Jan 29 11:48:52.080264 systemd-logind[1425]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:48:52.080563 systemd[1]: sshd@12-10.0.0.26:22-10.0.0.1:35792.service: Deactivated successfully. Jan 29 11:48:52.082071 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:48:52.084662 systemd-logind[1425]: Removed session 13. Jan 29 11:48:57.088102 systemd[1]: Started sshd@13-10.0.0.26:22-10.0.0.1:47130.service - OpenSSH per-connection server daemon (10.0.0.1:47130). Jan 29 11:48:57.121589 sshd[3964]: Accepted publickey for core from 10.0.0.1 port 47130 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:48:57.122558 sshd[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:48:57.126274 systemd-logind[1425]: New session 14 of user core. Jan 29 11:48:57.148650 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 11:48:57.258105 sshd[3964]: pam_unix(sshd:session): session closed for user core Jan 29 11:48:57.261861 systemd[1]: sshd@13-10.0.0.26:22-10.0.0.1:47130.service: Deactivated successfully. Jan 29 11:48:57.264049 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:48:57.264885 systemd-logind[1425]: Session 14 logged out. Waiting for processes to exit. Jan 29 11:48:57.265643 systemd-logind[1425]: Removed session 14. Jan 29 11:49:02.269085 systemd[1]: Started sshd@14-10.0.0.26:22-10.0.0.1:47146.service - OpenSSH per-connection server daemon (10.0.0.1:47146). Jan 29 11:49:02.302190 sshd[3979]: Accepted publickey for core from 10.0.0.1 port 47146 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:49:02.303513 sshd[3979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:49:02.307281 systemd-logind[1425]: New session 15 of user core. Jan 29 11:49:02.326682 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 29 11:49:02.435820 sshd[3979]: pam_unix(sshd:session): session closed for user core Jan 29 11:49:02.443544 systemd[1]: sshd@14-10.0.0.26:22-10.0.0.1:47146.service: Deactivated successfully. Jan 29 11:49:02.445497 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 11:49:02.447084 systemd-logind[1425]: Session 15 logged out. Waiting for processes to exit. Jan 29 11:49:02.453719 systemd[1]: Started sshd@15-10.0.0.26:22-10.0.0.1:47150.service - OpenSSH per-connection server daemon (10.0.0.1:47150). Jan 29 11:49:02.455297 systemd-logind[1425]: Removed session 15. Jan 29 11:49:02.482792 sshd[3993]: Accepted publickey for core from 10.0.0.1 port 47150 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:49:02.484111 sshd[3993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:49:02.487814 systemd-logind[1425]: New session 16 of user core. Jan 29 11:49:02.498587 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 11:49:02.737838 sshd[3993]: pam_unix(sshd:session): session closed for user core Jan 29 11:49:02.748322 systemd[1]: sshd@15-10.0.0.26:22-10.0.0.1:47150.service: Deactivated successfully. Jan 29 11:49:02.750042 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 11:49:02.751268 systemd-logind[1425]: Session 16 logged out. Waiting for processes to exit. Jan 29 11:49:02.752762 systemd[1]: Started sshd@16-10.0.0.26:22-10.0.0.1:60534.service - OpenSSH per-connection server daemon (10.0.0.1:60534). Jan 29 11:49:02.754330 systemd-logind[1425]: Removed session 16. Jan 29 11:49:02.790260 sshd[4006]: Accepted publickey for core from 10.0.0.1 port 60534 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:49:02.791546 sshd[4006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:49:02.795278 systemd-logind[1425]: New session 17 of user core. Jan 29 11:49:02.804541 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 11:49:04.063642 sshd[4006]: pam_unix(sshd:session): session closed for user core Jan 29 11:49:04.070536 systemd[1]: sshd@16-10.0.0.26:22-10.0.0.1:60534.service: Deactivated successfully. Jan 29 11:49:04.072842 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 11:49:04.074351 systemd-logind[1425]: Session 17 logged out. Waiting for processes to exit. Jan 29 11:49:04.084800 systemd[1]: Started sshd@17-10.0.0.26:22-10.0.0.1:60542.service - OpenSSH per-connection server daemon (10.0.0.1:60542). Jan 29 11:49:04.087307 systemd-logind[1425]: Removed session 17. Jan 29 11:49:04.121322 sshd[4026]: Accepted publickey for core from 10.0.0.1 port 60542 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:49:04.122767 sshd[4026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:49:04.127239 systemd-logind[1425]: New session 18 of user core. Jan 29 11:49:04.136568 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 11:49:04.358705 sshd[4026]: pam_unix(sshd:session): session closed for user core Jan 29 11:49:04.365858 systemd[1]: sshd@17-10.0.0.26:22-10.0.0.1:60542.service: Deactivated successfully. Jan 29 11:49:04.369097 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 11:49:04.372552 systemd-logind[1425]: Session 18 logged out. Waiting for processes to exit. Jan 29 11:49:04.380665 systemd[1]: Started sshd@18-10.0.0.26:22-10.0.0.1:60544.service - OpenSSH per-connection server daemon (10.0.0.1:60544). 
Jan 29 11:49:04.381978 systemd-logind[1425]: Removed session 18. Jan 29 11:49:04.408853 sshd[4039]: Accepted publickey for core from 10.0.0.1 port 60544 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:49:04.410150 sshd[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:49:04.414501 systemd-logind[1425]: New session 19 of user core. Jan 29 11:49:04.423593 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 11:49:04.531342 sshd[4039]: pam_unix(sshd:session): session closed for user core Jan 29 11:49:04.534396 systemd[1]: sshd@18-10.0.0.26:22-10.0.0.1:60544.service: Deactivated successfully. Jan 29 11:49:04.536664 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 11:49:04.538566 systemd-logind[1425]: Session 19 logged out. Waiting for processes to exit. Jan 29 11:49:04.539596 systemd-logind[1425]: Removed session 19. Jan 29 11:49:09.542936 systemd[1]: Started sshd@19-10.0.0.26:22-10.0.0.1:60550.service - OpenSSH per-connection server daemon (10.0.0.1:60550). Jan 29 11:49:09.575168 sshd[4058]: Accepted publickey for core from 10.0.0.1 port 60550 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:49:09.575891 sshd[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:49:09.579722 systemd-logind[1425]: New session 20 of user core. Jan 29 11:49:09.589550 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 11:49:09.692916 sshd[4058]: pam_unix(sshd:session): session closed for user core Jan 29 11:49:09.696014 systemd[1]: sshd@19-10.0.0.26:22-10.0.0.1:60550.service: Deactivated successfully. Jan 29 11:49:09.697604 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 11:49:09.699543 systemd-logind[1425]: Session 20 logged out. Waiting for processes to exit. Jan 29 11:49:09.700328 systemd-logind[1425]: Removed session 20. Jan 29 11:49:14.702843 systemd[1]: Started sshd@20-10.0.0.26:22-10.0.0.1:40664.service - OpenSSH per-connection server daemon (10.0.0.1:40664). Jan 29 11:49:14.735838 sshd[4075]: Accepted publickey for core from 10.0.0.1 port 40664 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:49:14.737069 sshd[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:49:14.740461 systemd-logind[1425]: New session 21 of user core. Jan 29 11:49:14.751557 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 11:49:14.854279 sshd[4075]: pam_unix(sshd:session): session closed for user core Jan 29 11:49:14.857260 systemd[1]: sshd@20-10.0.0.26:22-10.0.0.1:40664.service: Deactivated successfully. Jan 29 11:49:14.859150 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 11:49:14.860985 systemd-logind[1425]: Session 21 logged out. Waiting for processes to exit. Jan 29 11:49:14.862068 systemd-logind[1425]: Removed session 21. Jan 29 11:49:19.863919 systemd[1]: Started sshd@21-10.0.0.26:22-10.0.0.1:40668.service - OpenSSH per-connection server daemon (10.0.0.1:40668). Jan 29 11:49:19.897092 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 40668 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:49:19.898306 sshd[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:49:19.902197 systemd-logind[1425]: New session 22 of user core. Jan 29 11:49:19.916560 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 29 11:49:20.020377 sshd[4089]: pam_unix(sshd:session): session closed for user core Jan 29 11:49:20.028793 systemd[1]: sshd@21-10.0.0.26:22-10.0.0.1:40668.service: Deactivated successfully. Jan 29 11:49:20.031733 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 11:49:20.032909 systemd-logind[1425]: Session 22 logged out. Waiting for processes to exit. Jan 29 11:49:20.034167 systemd[1]: Started sshd@22-10.0.0.26:22-10.0.0.1:40674.service - OpenSSH per-connection server daemon (10.0.0.1:40674). Jan 29 11:49:20.034854 systemd-logind[1425]: Removed session 22. Jan 29 11:49:20.065447 sshd[4103]: Accepted publickey for core from 10.0.0.1 port 40674 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 11:49:20.066598 sshd[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:49:20.070419 systemd-logind[1425]: New session 23 of user core. Jan 29 11:49:20.086619 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 11:49:21.313145 kubelet[2461]: E0129 11:49:21.313083 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:49:22.212219 containerd[1439]: time="2025-01-29T11:49:22.212161232Z" level=info msg="StopContainer for \"5b28889da3ada586fcf3c94a0ad80b054d98b67992b83f5ac725551b29dfa08d\" with timeout 30 (s)" Jan 29 11:49:22.213853 containerd[1439]: time="2025-01-29T11:49:22.213442690Z" level=info msg="Stop container \"5b28889da3ada586fcf3c94a0ad80b054d98b67992b83f5ac725551b29dfa08d\" with signal terminated" Jan 29 11:49:22.221948 systemd[1]: cri-containerd-5b28889da3ada586fcf3c94a0ad80b054d98b67992b83f5ac725551b29dfa08d.scope: Deactivated successfully. Jan 29 11:49:22.242306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b28889da3ada586fcf3c94a0ad80b054d98b67992b83f5ac725551b29dfa08d-rootfs.mount: Deactivated successfully. 
Jan 29 11:49:22.255577 containerd[1439]: time="2025-01-29T11:49:22.254992669Z" level=info msg="shim disconnected" id=5b28889da3ada586fcf3c94a0ad80b054d98b67992b83f5ac725551b29dfa08d namespace=k8s.io Jan 29 11:49:22.255577 containerd[1439]: time="2025-01-29T11:49:22.255578419Z" level=warning msg="cleaning up after shim disconnected" id=5b28889da3ada586fcf3c94a0ad80b054d98b67992b83f5ac725551b29dfa08d namespace=k8s.io Jan 29 11:49:22.255815 containerd[1439]: time="2025-01-29T11:49:22.255591098Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:49:22.257656 containerd[1439]: time="2025-01-29T11:49:22.257622544Z" level=info msg="StopContainer for \"b1382dc098ca546ff682fc04d2293dcdec131dd3c53e1830822e36efc43c691a\" with timeout 2 (s)" Jan 29 11:49:22.258304 containerd[1439]: time="2025-01-29T11:49:22.258272653Z" level=info msg="Stop container \"b1382dc098ca546ff682fc04d2293dcdec131dd3c53e1830822e36efc43c691a\" with signal terminated" Jan 29 11:49:22.264513 containerd[1439]: time="2025-01-29T11:49:22.263839639Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:49:22.264357 systemd-networkd[1383]: lxc_health: Link DOWN Jan 29 11:49:22.264361 systemd-networkd[1383]: lxc_health: Lost carrier Jan 29 11:49:22.292681 systemd[1]: cri-containerd-b1382dc098ca546ff682fc04d2293dcdec131dd3c53e1830822e36efc43c691a.scope: Deactivated successfully. Jan 29 11:49:22.292979 systemd[1]: cri-containerd-b1382dc098ca546ff682fc04d2293dcdec131dd3c53e1830822e36efc43c691a.scope: Consumed 6.477s CPU time. Jan 29 11:49:22.309118 containerd[1439]: time="2025-01-29T11:49:22.309074475Z" level=info msg="StopContainer for \"5b28889da3ada586fcf3c94a0ad80b054d98b67992b83f5ac725551b29dfa08d\" returns successfully" Jan 29 11:49:22.309755 containerd[1439]: time="2025-01-29T11:49:22.309715344Z" level=info msg="StopPodSandbox for \"f6e26935c2d4b5fe456c186e7df4499d5b6a9a5e2da23d25cb3a0d3fcd5d729b\"" Jan 29 11:49:22.309809 containerd[1439]: time="2025-01-29T11:49:22.309767943Z" level=info msg="Container to stop \"5b28889da3ada586fcf3c94a0ad80b054d98b67992b83f5ac725551b29dfa08d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:49:22.313321 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f6e26935c2d4b5fe456c186e7df4499d5b6a9a5e2da23d25cb3a0d3fcd5d729b-shm.mount: Deactivated successfully. Jan 29 11:49:22.315339 systemd[1]: cri-containerd-f6e26935c2d4b5fe456c186e7df4499d5b6a9a5e2da23d25cb3a0d3fcd5d729b.scope: Deactivated successfully. Jan 29 11:49:22.323276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1382dc098ca546ff682fc04d2293dcdec131dd3c53e1830822e36efc43c691a-rootfs.mount: Deactivated successfully. 
Jan 29 11:49:22.328501 containerd[1439]: time="2025-01-29T11:49:22.328188432Z" level=info msg="shim disconnected" id=b1382dc098ca546ff682fc04d2293dcdec131dd3c53e1830822e36efc43c691a namespace=k8s.io Jan 29 11:49:22.328630 containerd[1439]: time="2025-01-29T11:49:22.328502587Z" level=warning msg="cleaning up after shim disconnected" id=b1382dc098ca546ff682fc04d2293dcdec131dd3c53e1830822e36efc43c691a namespace=k8s.io Jan 29 11:49:22.328630 containerd[1439]: time="2025-01-29T11:49:22.328514107Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:49:22.337453 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6e26935c2d4b5fe456c186e7df4499d5b6a9a5e2da23d25cb3a0d3fcd5d729b-rootfs.mount: Deactivated successfully. Jan 29 11:49:22.344599 containerd[1439]: time="2025-01-29T11:49:22.344095924Z" level=info msg="StopContainer for \"b1382dc098ca546ff682fc04d2293dcdec131dd3c53e1830822e36efc43c691a\" returns successfully" Jan 29 11:49:22.344741 containerd[1439]: time="2025-01-29T11:49:22.344629835Z" level=info msg="shim disconnected" id=f6e26935c2d4b5fe456c186e7df4499d5b6a9a5e2da23d25cb3a0d3fcd5d729b namespace=k8s.io Jan 29 11:49:22.344741 containerd[1439]: time="2025-01-29T11:49:22.344668834Z" level=warning msg="cleaning up after shim disconnected" id=f6e26935c2d4b5fe456c186e7df4499d5b6a9a5e2da23d25cb3a0d3fcd5d729b namespace=k8s.io Jan 29 11:49:22.344741 containerd[1439]: time="2025-01-29T11:49:22.344676954Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:49:22.351324 containerd[1439]: time="2025-01-29T11:49:22.349738868Z" level=info msg="StopPodSandbox for \"19d76a57bfc76bf6d0c281479d0fc73b04bb22b26624496011ae500446e8ca84\"" Jan 29 11:49:22.351324 containerd[1439]: time="2025-01-29T11:49:22.349878146Z" level=info msg="Container to stop \"dedddc61c29e4030925345991f12a2efbe24602aad413a38f84d17341261b30b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:49:22.351324 containerd[1439]: time="2025-01-29T11:49:22.349893306Z" level=info msg="Container to stop \"e3dbee3d15a77f2b345e9dad7613bb85f7bb7b4c17449af8252bba756fc23f1d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:49:22.351324 containerd[1439]: time="2025-01-29T11:49:22.349902986Z" level=info msg="Container to stop \"47cf8a514f62c05c7a9886f0b3198ee4fb32cff190f851db39fb8a63c862f684\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:49:22.351324 containerd[1439]: time="2025-01-29T11:49:22.349914345Z" level=info msg="Container to stop \"5c511cdf6338a543d1f3c9ed3efebf145405a49347ae940f2c43bbb0275cf080\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:49:22.351324 containerd[1439]: time="2025-01-29T11:49:22.349923585Z" level=info msg="Container to stop \"b1382dc098ca546ff682fc04d2293dcdec131dd3c53e1830822e36efc43c691a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:49:22.358742 systemd[1]: cri-containerd-19d76a57bfc76bf6d0c281479d0fc73b04bb22b26624496011ae500446e8ca84.scope: Deactivated successfully. 
Jan 29 11:49:22.360155 containerd[1439]: time="2025-01-29T11:49:22.360125173Z" level=info msg="TearDown network for sandbox \"f6e26935c2d4b5fe456c186e7df4499d5b6a9a5e2da23d25cb3a0d3fcd5d729b\" successfully"
Jan 29 11:49:22.360155 containerd[1439]: time="2025-01-29T11:49:22.360152732Z" level=info msg="StopPodSandbox for \"f6e26935c2d4b5fe456c186e7df4499d5b6a9a5e2da23d25cb3a0d3fcd5d729b\" returns successfully"
Jan 29 11:49:22.362913 kubelet[2461]: E0129 11:49:22.362477 2461 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 11:49:22.388339 containerd[1439]: time="2025-01-29T11:49:22.388276577Z" level=info msg="shim disconnected" id=19d76a57bfc76bf6d0c281479d0fc73b04bb22b26624496011ae500446e8ca84 namespace=k8s.io
Jan 29 11:49:22.388339 containerd[1439]: time="2025-01-29T11:49:22.388330816Z" level=warning msg="cleaning up after shim disconnected" id=19d76a57bfc76bf6d0c281479d0fc73b04bb22b26624496011ae500446e8ca84 namespace=k8s.io
Jan 29 11:49:22.388339 containerd[1439]: time="2025-01-29T11:49:22.388339536Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:49:22.398377 containerd[1439]: time="2025-01-29T11:49:22.398322808Z" level=info msg="TearDown network for sandbox \"19d76a57bfc76bf6d0c281479d0fc73b04bb22b26624496011ae500446e8ca84\" successfully"
Jan 29 11:49:22.398377 containerd[1439]: time="2025-01-29T11:49:22.398360247Z" level=info msg="StopPodSandbox for \"19d76a57bfc76bf6d0c281479d0fc73b04bb22b26624496011ae500446e8ca84\" returns successfully"
Jan 29 11:49:22.455706 kubelet[2461]: I0129 11:49:22.455670 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-cni-path\") pod \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\" (UID: \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\") "
Jan 29 11:49:22.455706 kubelet[2461]: I0129 11:49:22.455706 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-host-proc-sys-net\") pod \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\" (UID: \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\") "
Jan 29 11:49:22.455900 kubelet[2461]: I0129 11:49:22.455733 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9f2e5199-d72b-4dfe-a23f-f7425f64524d-clustermesh-secrets\") pod \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\" (UID: \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\") "
Jan 29 11:49:22.455900 kubelet[2461]: I0129 11:49:22.455754 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bjgk\" (UniqueName: \"kubernetes.io/projected/9f2e5199-d72b-4dfe-a23f-f7425f64524d-kube-api-access-9bjgk\") pod \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\" (UID: \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\") "
Jan 29 11:49:22.455900 kubelet[2461]: I0129 11:49:22.455772 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-lib-modules\") pod \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\" (UID: \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\") "
Jan 29 11:49:22.455900 kubelet[2461]: I0129 11:49:22.455788 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-cilium-cgroup\") pod \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\" (UID: \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\") "
Jan 29 11:49:22.455900 kubelet[2461]: I0129 11:49:22.455802 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-cilium-run\") pod \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\" (UID: \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\") "
Jan 29 11:49:22.455900 kubelet[2461]: I0129 11:49:22.455824 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f2e5199-d72b-4dfe-a23f-f7425f64524d-cilium-config-path\") pod \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\" (UID: \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\") "
Jan 29 11:49:22.456106 kubelet[2461]: I0129 11:49:22.455863 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-etc-cni-netd\") pod \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\" (UID: \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\") "
Jan 29 11:49:22.456106 kubelet[2461]: I0129 11:49:22.455878 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-hostproc\") pod \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\" (UID: \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\") "
Jan 29 11:49:22.456106 kubelet[2461]: I0129 11:49:22.455895 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9f2e5199-d72b-4dfe-a23f-f7425f64524d-hubble-tls\") pod \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\" (UID: \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\") "
Jan 29 11:49:22.456106 kubelet[2461]: I0129 11:49:22.455910 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-host-proc-sys-kernel\") pod \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\" (UID: \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\") "
Jan 29 11:49:22.456106 kubelet[2461]: I0129 11:49:22.455927 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/daa9d09d-3698-4121-906a-62ee5d21bf1c-cilium-config-path\") pod \"daa9d09d-3698-4121-906a-62ee5d21bf1c\" (UID: \"daa9d09d-3698-4121-906a-62ee5d21bf1c\") "
Jan 29 11:49:22.456106 kubelet[2461]: I0129 11:49:22.455949 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-xtables-lock\") pod \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\" (UID: \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\") "
Jan 29 11:49:22.456236 kubelet[2461]: I0129 11:49:22.455965 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-bpf-maps\") pod \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\" (UID: \"9f2e5199-d72b-4dfe-a23f-f7425f64524d\") "
Jan 29 11:49:22.456236 kubelet[2461]: I0129 11:49:22.455982 2461 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hp45s\" (UniqueName: \"kubernetes.io/projected/daa9d09d-3698-4121-906a-62ee5d21bf1c-kube-api-access-hp45s\") pod \"daa9d09d-3698-4121-906a-62ee5d21bf1c\" (UID: \"daa9d09d-3698-4121-906a-62ee5d21bf1c\") "
Jan 29 11:49:22.458924 kubelet[2461]: I0129 11:49:22.458782 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-cni-path" (OuterVolumeSpecName: "cni-path") pod "9f2e5199-d72b-4dfe-a23f-f7425f64524d" (UID: "9f2e5199-d72b-4dfe-a23f-f7425f64524d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:49:22.460737 kubelet[2461]: I0129 11:49:22.459665 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9f2e5199-d72b-4dfe-a23f-f7425f64524d" (UID: "9f2e5199-d72b-4dfe-a23f-f7425f64524d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:49:22.460737 kubelet[2461]: I0129 11:49:22.460512 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9f2e5199-d72b-4dfe-a23f-f7425f64524d" (UID: "9f2e5199-d72b-4dfe-a23f-f7425f64524d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:49:22.461531 kubelet[2461]: I0129 11:49:22.461504 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9f2e5199-d72b-4dfe-a23f-f7425f64524d" (UID: "9f2e5199-d72b-4dfe-a23f-f7425f64524d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:49:22.461734 kubelet[2461]: I0129 11:49:22.461629 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9f2e5199-d72b-4dfe-a23f-f7425f64524d" (UID: "9f2e5199-d72b-4dfe-a23f-f7425f64524d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:49:22.461734 kubelet[2461]: I0129 11:49:22.461655 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9f2e5199-d72b-4dfe-a23f-f7425f64524d" (UID: "9f2e5199-d72b-4dfe-a23f-f7425f64524d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:49:22.461734 kubelet[2461]: I0129 11:49:22.461675 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9f2e5199-d72b-4dfe-a23f-f7425f64524d" (UID: "9f2e5199-d72b-4dfe-a23f-f7425f64524d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:49:22.461734 kubelet[2461]: I0129 11:49:22.461690 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9f2e5199-d72b-4dfe-a23f-f7425f64524d" (UID: "9f2e5199-d72b-4dfe-a23f-f7425f64524d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:49:22.461734 kubelet[2461]: I0129 11:49:22.461704 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-hostproc" (OuterVolumeSpecName: "hostproc") pod "9f2e5199-d72b-4dfe-a23f-f7425f64524d" (UID: "9f2e5199-d72b-4dfe-a23f-f7425f64524d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:49:22.461870 kubelet[2461]: I0129 11:49:22.461718 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9f2e5199-d72b-4dfe-a23f-f7425f64524d" (UID: "9f2e5199-d72b-4dfe-a23f-f7425f64524d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:49:22.462774 kubelet[2461]: I0129 11:49:22.462703 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/daa9d09d-3698-4121-906a-62ee5d21bf1c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "daa9d09d-3698-4121-906a-62ee5d21bf1c" (UID: "daa9d09d-3698-4121-906a-62ee5d21bf1c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:49:22.462824 kubelet[2461]: I0129 11:49:22.462799 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f2e5199-d72b-4dfe-a23f-f7425f64524d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9f2e5199-d72b-4dfe-a23f-f7425f64524d" (UID: "9f2e5199-d72b-4dfe-a23f-f7425f64524d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:49:22.463113 kubelet[2461]: I0129 11:49:22.463089 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f2e5199-d72b-4dfe-a23f-f7425f64524d-kube-api-access-9bjgk" (OuterVolumeSpecName: "kube-api-access-9bjgk") pod "9f2e5199-d72b-4dfe-a23f-f7425f64524d" (UID: "9f2e5199-d72b-4dfe-a23f-f7425f64524d"). InnerVolumeSpecName "kube-api-access-9bjgk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:49:22.463665 kubelet[2461]: I0129 11:49:22.463638 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f2e5199-d72b-4dfe-a23f-f7425f64524d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9f2e5199-d72b-4dfe-a23f-f7425f64524d" (UID: "9f2e5199-d72b-4dfe-a23f-f7425f64524d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:49:22.464314 kubelet[2461]: I0129 11:49:22.464289 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daa9d09d-3698-4121-906a-62ee5d21bf1c-kube-api-access-hp45s" (OuterVolumeSpecName: "kube-api-access-hp45s") pod "daa9d09d-3698-4121-906a-62ee5d21bf1c" (UID: "daa9d09d-3698-4121-906a-62ee5d21bf1c"). InnerVolumeSpecName "kube-api-access-hp45s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:49:22.465736 kubelet[2461]: I0129 11:49:22.465708 2461 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f2e5199-d72b-4dfe-a23f-f7425f64524d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9f2e5199-d72b-4dfe-a23f-f7425f64524d" (UID: "9f2e5199-d72b-4dfe-a23f-f7425f64524d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:49:22.504381 kubelet[2461]: I0129 11:49:22.504359 2461 scope.go:117] "RemoveContainer" containerID="5b28889da3ada586fcf3c94a0ad80b054d98b67992b83f5ac725551b29dfa08d"
Jan 29 11:49:22.506011 containerd[1439]: time="2025-01-29T11:49:22.505781273Z" level=info msg="RemoveContainer for \"5b28889da3ada586fcf3c94a0ad80b054d98b67992b83f5ac725551b29dfa08d\""
Jan 29 11:49:22.508076 systemd[1]: Removed slice kubepods-besteffort-poddaa9d09d_3698_4121_906a_62ee5d21bf1c.slice - libcontainer container kubepods-besteffort-poddaa9d09d_3698_4121_906a_62ee5d21bf1c.slice.
Jan 29 11:49:22.510485 containerd[1439]: time="2025-01-29T11:49:22.510455074Z" level=info msg="RemoveContainer for \"5b28889da3ada586fcf3c94a0ad80b054d98b67992b83f5ac725551b29dfa08d\" returns successfully"
Jan 29 11:49:22.510833 kubelet[2461]: I0129 11:49:22.510734 2461 scope.go:117] "RemoveContainer" containerID="5b28889da3ada586fcf3c94a0ad80b054d98b67992b83f5ac725551b29dfa08d"
Jan 29 11:49:22.511047 containerd[1439]: time="2025-01-29T11:49:22.510996185Z" level=error msg="ContainerStatus for \"5b28889da3ada586fcf3c94a0ad80b054d98b67992b83f5ac725551b29dfa08d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b28889da3ada586fcf3c94a0ad80b054d98b67992b83f5ac725551b29dfa08d\": not found"
Jan 29 11:49:22.518609 kubelet[2461]: E0129 11:49:22.518569 2461 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b28889da3ada586fcf3c94a0ad80b054d98b67992b83f5ac725551b29dfa08d\": not found" containerID="5b28889da3ada586fcf3c94a0ad80b054d98b67992b83f5ac725551b29dfa08d"
Jan 29 11:49:22.518743 kubelet[2461]: I0129 11:49:22.518618 2461 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b28889da3ada586fcf3c94a0ad80b054d98b67992b83f5ac725551b29dfa08d"} err="failed to get container status \"5b28889da3ada586fcf3c94a0ad80b054d98b67992b83f5ac725551b29dfa08d\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b28889da3ada586fcf3c94a0ad80b054d98b67992b83f5ac725551b29dfa08d\": not found"
Jan 29 11:49:22.518743 kubelet[2461]: I0129 11:49:22.518728 2461 scope.go:117] "RemoveContainer" containerID="b1382dc098ca546ff682fc04d2293dcdec131dd3c53e1830822e36efc43c691a"
Jan 29 11:49:22.519711 containerd[1439]: time="2025-01-29T11:49:22.519657998Z" level=info msg="RemoveContainer for \"b1382dc098ca546ff682fc04d2293dcdec131dd3c53e1830822e36efc43c691a\""
Jan 29 11:49:22.522879 containerd[1439]: time="2025-01-29T11:49:22.521855641Z" level=info msg="RemoveContainer for \"b1382dc098ca546ff682fc04d2293dcdec131dd3c53e1830822e36efc43c691a\" returns successfully"
Jan 29 11:49:22.522960 kubelet[2461]: I0129 11:49:22.522002 2461 scope.go:117] "RemoveContainer" containerID="e3dbee3d15a77f2b345e9dad7613bb85f7bb7b4c17449af8252bba756fc23f1d"
Jan 29 11:49:22.523650 containerd[1439]: time="2025-01-29T11:49:22.523392535Z" level=info msg="RemoveContainer for \"e3dbee3d15a77f2b345e9dad7613bb85f7bb7b4c17449af8252bba756fc23f1d\""
Jan 29 11:49:22.524374 systemd[1]: Removed slice kubepods-burstable-pod9f2e5199_d72b_4dfe_a23f_f7425f64524d.slice - libcontainer container kubepods-burstable-pod9f2e5199_d72b_4dfe_a23f_f7425f64524d.slice.
Jan 29 11:49:22.524636 systemd[1]: kubepods-burstable-pod9f2e5199_d72b_4dfe_a23f_f7425f64524d.slice: Consumed 6.596s CPU time.
Jan 29 11:49:22.527505 containerd[1439]: time="2025-01-29T11:49:22.527474906Z" level=info msg="RemoveContainer for \"e3dbee3d15a77f2b345e9dad7613bb85f7bb7b4c17449af8252bba756fc23f1d\" returns successfully"
Jan 29 11:49:22.527722 kubelet[2461]: I0129 11:49:22.527702 2461 scope.go:117] "RemoveContainer" containerID="5c511cdf6338a543d1f3c9ed3efebf145405a49347ae940f2c43bbb0275cf080"
Jan 29 11:49:22.533439 containerd[1439]: time="2025-01-29T11:49:22.533384647Z" level=info msg="RemoveContainer for \"5c511cdf6338a543d1f3c9ed3efebf145405a49347ae940f2c43bbb0275cf080\""
Jan 29 11:49:22.541772 containerd[1439]: time="2025-01-29T11:49:22.541671347Z" level=info msg="RemoveContainer for \"5c511cdf6338a543d1f3c9ed3efebf145405a49347ae940f2c43bbb0275cf080\" returns successfully"
Jan 29 11:49:22.541908 kubelet[2461]: I0129 11:49:22.541860 2461 scope.go:117] "RemoveContainer" containerID="47cf8a514f62c05c7a9886f0b3198ee4fb32cff190f851db39fb8a63c862f684"
Jan 29 11:49:22.542950 containerd[1439]: time="2025-01-29T11:49:22.542888566Z" level=info msg="RemoveContainer for \"47cf8a514f62c05c7a9886f0b3198ee4fb32cff190f851db39fb8a63c862f684\""
Jan 29 11:49:22.545944 containerd[1439]: time="2025-01-29T11:49:22.545908035Z" level=info msg="RemoveContainer for \"47cf8a514f62c05c7a9886f0b3198ee4fb32cff190f851db39fb8a63c862f684\" returns successfully"
Jan 29 11:49:22.546567 kubelet[2461]: I0129 11:49:22.546543 2461 scope.go:117] "RemoveContainer" containerID="dedddc61c29e4030925345991f12a2efbe24602aad413a38f84d17341261b30b"
Jan 29 11:49:22.548733 containerd[1439]: time="2025-01-29T11:49:22.548707388Z" level=info msg="RemoveContainer for \"dedddc61c29e4030925345991f12a2efbe24602aad413a38f84d17341261b30b\""
Jan 29 11:49:22.550683 containerd[1439]: time="2025-01-29T11:49:22.550656235Z" level=info msg="RemoveContainer for \"dedddc61c29e4030925345991f12a2efbe24602aad413a38f84d17341261b30b\" returns successfully"
Jan 29 11:49:22.550839 kubelet[2461]: I0129 11:49:22.550813 2461 scope.go:117] "RemoveContainer" containerID="b1382dc098ca546ff682fc04d2293dcdec131dd3c53e1830822e36efc43c691a"
Jan 29 11:49:22.551017 containerd[1439]: time="2025-01-29T11:49:22.550984989Z" level=error msg="ContainerStatus for \"b1382dc098ca546ff682fc04d2293dcdec131dd3c53e1830822e36efc43c691a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b1382dc098ca546ff682fc04d2293dcdec131dd3c53e1830822e36efc43c691a\": not found"
Jan 29 11:49:22.551129 kubelet[2461]: E0129 11:49:22.551108 2461 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b1382dc098ca546ff682fc04d2293dcdec131dd3c53e1830822e36efc43c691a\": not found" containerID="b1382dc098ca546ff682fc04d2293dcdec131dd3c53e1830822e36efc43c691a"
Jan 29 11:49:22.551162 kubelet[2461]: I0129 11:49:22.551139 2461 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b1382dc098ca546ff682fc04d2293dcdec131dd3c53e1830822e36efc43c691a"} err="failed to get container status \"b1382dc098ca546ff682fc04d2293dcdec131dd3c53e1830822e36efc43c691a\": rpc error: code = NotFound desc = an error occurred when try to find container \"b1382dc098ca546ff682fc04d2293dcdec131dd3c53e1830822e36efc43c691a\": not found"
Jan 29 11:49:22.551188 kubelet[2461]: I0129 11:49:22.551162 2461 scope.go:117] "RemoveContainer" containerID="e3dbee3d15a77f2b345e9dad7613bb85f7bb7b4c17449af8252bba756fc23f1d"
Jan 29 11:49:22.551405 containerd[1439]: time="2025-01-29T11:49:22.551329303Z" level=error msg="ContainerStatus for \"e3dbee3d15a77f2b345e9dad7613bb85f7bb7b4c17449af8252bba756fc23f1d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e3dbee3d15a77f2b345e9dad7613bb85f7bb7b4c17449af8252bba756fc23f1d\": not found"
Jan 29 11:49:22.551496 kubelet[2461]: E0129 11:49:22.551468 2461 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e3dbee3d15a77f2b345e9dad7613bb85f7bb7b4c17449af8252bba756fc23f1d\": not found" containerID="e3dbee3d15a77f2b345e9dad7613bb85f7bb7b4c17449af8252bba756fc23f1d"
Jan 29 11:49:22.551537 kubelet[2461]: I0129 11:49:22.551495 2461 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e3dbee3d15a77f2b345e9dad7613bb85f7bb7b4c17449af8252bba756fc23f1d"} err="failed to get container status \"e3dbee3d15a77f2b345e9dad7613bb85f7bb7b4c17449af8252bba756fc23f1d\": rpc error: code = NotFound desc = an error occurred when try to find container \"e3dbee3d15a77f2b345e9dad7613bb85f7bb7b4c17449af8252bba756fc23f1d\": not found"
Jan 29 11:49:22.551537 kubelet[2461]: I0129 11:49:22.551513 2461 scope.go:117] "RemoveContainer" containerID="5c511cdf6338a543d1f3c9ed3efebf145405a49347ae940f2c43bbb0275cf080"
Jan 29 11:49:22.551713 containerd[1439]: time="2025-01-29T11:49:22.551680978Z" level=error msg="ContainerStatus for \"5c511cdf6338a543d1f3c9ed3efebf145405a49347ae940f2c43bbb0275cf080\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5c511cdf6338a543d1f3c9ed3efebf145405a49347ae940f2c43bbb0275cf080\": not found"
Jan 29 11:49:22.551802 kubelet[2461]: E0129 11:49:22.551786 2461 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5c511cdf6338a543d1f3c9ed3efebf145405a49347ae940f2c43bbb0275cf080\": not found" containerID="5c511cdf6338a543d1f3c9ed3efebf145405a49347ae940f2c43bbb0275cf080"
Jan 29 11:49:22.551836 kubelet[2461]: I0129 11:49:22.551806 2461 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5c511cdf6338a543d1f3c9ed3efebf145405a49347ae940f2c43bbb0275cf080"} err="failed to get container status \"5c511cdf6338a543d1f3c9ed3efebf145405a49347ae940f2c43bbb0275cf080\": rpc error: code = NotFound desc = an error occurred when try to find container \"5c511cdf6338a543d1f3c9ed3efebf145405a49347ae940f2c43bbb0275cf080\": not found"
Jan 29 11:49:22.551865 kubelet[2461]: I0129 11:49:22.551836 2461 scope.go:117] "RemoveContainer" containerID="47cf8a514f62c05c7a9886f0b3198ee4fb32cff190f851db39fb8a63c862f684"
Jan 29 11:49:22.552061 containerd[1439]: time="2025-01-29T11:49:22.551990332Z" level=error msg="ContainerStatus for \"47cf8a514f62c05c7a9886f0b3198ee4fb32cff190f851db39fb8a63c862f684\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"47cf8a514f62c05c7a9886f0b3198ee4fb32cff190f851db39fb8a63c862f684\": not found"
Jan 29 11:49:22.552111 kubelet[2461]: E0129 11:49:22.552081 2461 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"47cf8a514f62c05c7a9886f0b3198ee4fb32cff190f851db39fb8a63c862f684\": not found" containerID="47cf8a514f62c05c7a9886f0b3198ee4fb32cff190f851db39fb8a63c862f684"
Jan 29 11:49:22.552111 kubelet[2461]: I0129 11:49:22.552099 2461 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"47cf8a514f62c05c7a9886f0b3198ee4fb32cff190f851db39fb8a63c862f684"} err="failed to get container status \"47cf8a514f62c05c7a9886f0b3198ee4fb32cff190f851db39fb8a63c862f684\": rpc error: code = NotFound desc = an error occurred when try to find container \"47cf8a514f62c05c7a9886f0b3198ee4fb32cff190f851db39fb8a63c862f684\": not found"
Jan 29 11:49:22.552157 kubelet[2461]: I0129 11:49:22.552112 2461 scope.go:117] "RemoveContainer" containerID="dedddc61c29e4030925345991f12a2efbe24602aad413a38f84d17341261b30b"
Jan 29 11:49:22.552381 containerd[1439]: time="2025-01-29T11:49:22.552324807Z" level=error msg="ContainerStatus for \"dedddc61c29e4030925345991f12a2efbe24602aad413a38f84d17341261b30b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dedddc61c29e4030925345991f12a2efbe24602aad413a38f84d17341261b30b\": not found"
Jan 29 11:49:22.552465 kubelet[2461]: E0129 11:49:22.552449 2461 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dedddc61c29e4030925345991f12a2efbe24602aad413a38f84d17341261b30b\": not found" containerID="dedddc61c29e4030925345991f12a2efbe24602aad413a38f84d17341261b30b"
Jan 29 11:49:22.552495 kubelet[2461]: I0129 11:49:22.552467 2461 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dedddc61c29e4030925345991f12a2efbe24602aad413a38f84d17341261b30b"} err="failed to get container status \"dedddc61c29e4030925345991f12a2efbe24602aad413a38f84d17341261b30b\": rpc error: code = NotFound desc = an error occurred when try to find container \"dedddc61c29e4030925345991f12a2efbe24602aad413a38f84d17341261b30b\": not found"
Jan 29 11:49:22.556646 kubelet[2461]: I0129 11:49:22.556617 2461 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jan 29 11:49:22.556646 kubelet[2461]: I0129 11:49:22.556643 2461 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hp45s\" (UniqueName: \"kubernetes.io/projected/daa9d09d-3698-4121-906a-62ee5d21bf1c-kube-api-access-hp45s\") on node \"localhost\" DevicePath \"\""
Jan 29 11:49:22.556705 kubelet[2461]: I0129 11:49:22.556654 2461 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-cni-path\") on node \"localhost\" DevicePath \"\""
Jan 29 11:49:22.556705 kubelet[2461]: I0129 11:49:22.556662 2461 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jan 29 11:49:22.556705 kubelet[2461]: I0129 11:49:22.556670 2461 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9f2e5199-d72b-4dfe-a23f-f7425f64524d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jan 29 11:49:22.556705 kubelet[2461]: I0129 11:49:22.556678 2461 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9bjgk\" (UniqueName: \"kubernetes.io/projected/9f2e5199-d72b-4dfe-a23f-f7425f64524d-kube-api-access-9bjgk\") on node \"localhost\" DevicePath \"\""
Jan 29 11:49:22.556705 kubelet[2461]: I0129 11:49:22.556685 2461 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-lib-modules\") on node \"localhost\" DevicePath \"\""
Jan 29 11:49:22.556705 kubelet[2461]: I0129 11:49:22.556693 2461 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jan 29 11:49:22.556705 kubelet[2461]: I0129 11:49:22.556700 2461 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-cilium-run\") on node \"localhost\" DevicePath \"\""
Jan 29 11:49:22.556705 kubelet[2461]: I0129 11:49:22.556708 2461 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f2e5199-d72b-4dfe-a23f-f7425f64524d-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 29 11:49:22.556876 kubelet[2461]: I0129 11:49:22.556715 2461 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jan 29 11:49:22.556876 kubelet[2461]: I0129 11:49:22.556732 2461 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jan 29 11:49:22.556876 kubelet[2461]: I0129 11:49:22.556740 2461 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-hostproc\") on node \"localhost\" DevicePath \"\""
Jan 29 11:49:22.556876 kubelet[2461]: I0129 11:49:22.556747 2461 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9f2e5199-d72b-4dfe-a23f-f7425f64524d-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jan 29 11:49:22.556876 kubelet[2461]: I0129 11:49:22.556754 2461 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9f2e5199-d72b-4dfe-a23f-f7425f64524d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jan 29 11:49:22.556876 kubelet[2461]: I0129 11:49:22.556763 2461 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/daa9d09d-3698-4121-906a-62ee5d21bf1c-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 29 11:49:23.234453 systemd[1]: var-lib-kubelet-pods-daa9d09d\x2d3698\x2d4121\x2d906a\x2d62ee5d21bf1c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhp45s.mount: Deactivated successfully.
Jan 29 11:49:23.234858 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19d76a57bfc76bf6d0c281479d0fc73b04bb22b26624496011ae500446e8ca84-rootfs.mount: Deactivated successfully.
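[Editor's sketch] The repeated "ContainerStatus ... not found" / "DeleteContainer returned error" pairs above are benign: the kubelet re-queries containers it has just removed, and a gRPC NotFound simply means the deletion already completed. A minimal Go sketch of that idempotent check, assuming the k8s.io/cri-api v1 client wired up as in the earlier sketch:

```go
package main

import (
	"context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// alreadyRemoved reports whether the runtime no longer knows the container,
// which is the outcome the kubelet ultimately tolerates in the entries above.
func alreadyRemoved(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) (bool, error) {
	_, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
	if err == nil {
		return false, nil // container is still known to the runtime
	}
	if status.Code(err) == codes.NotFound {
		return true, nil // removal already completed; nothing left to do
	}
	return false, err // any other RPC error is a real failure
}
```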
Jan 29 11:49:23.234932 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-19d76a57bfc76bf6d0c281479d0fc73b04bb22b26624496011ae500446e8ca84-shm.mount: Deactivated successfully.
Jan 29 11:49:23.234990 systemd[1]: var-lib-kubelet-pods-9f2e5199\x2dd72b\x2d4dfe\x2da23f\x2df7425f64524d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9bjgk.mount: Deactivated successfully.
Jan 29 11:49:23.235042 systemd[1]: var-lib-kubelet-pods-9f2e5199\x2dd72b\x2d4dfe\x2da23f\x2df7425f64524d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 29 11:49:23.235093 systemd[1]: var-lib-kubelet-pods-9f2e5199\x2dd72b\x2d4dfe\x2da23f\x2df7425f64524d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 29 11:49:23.315618 kubelet[2461]: I0129 11:49:23.314806 2461 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f2e5199-d72b-4dfe-a23f-f7425f64524d" path="/var/lib/kubelet/pods/9f2e5199-d72b-4dfe-a23f-f7425f64524d/volumes"
Jan 29 11:49:23.315618 kubelet[2461]: I0129 11:49:23.315352 2461 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daa9d09d-3698-4121-906a-62ee5d21bf1c" path="/var/lib/kubelet/pods/daa9d09d-3698-4121-906a-62ee5d21bf1c/volumes"
Jan 29 11:49:24.166086 sshd[4103]: pam_unix(sshd:session): session closed for user core
Jan 29 11:49:24.174910 systemd[1]: sshd@22-10.0.0.26:22-10.0.0.1:40674.service: Deactivated successfully.
Jan 29 11:49:24.176613 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 11:49:24.176760 systemd[1]: session-23.scope: Consumed 1.457s CPU time.
Jan 29 11:49:24.178606 systemd-logind[1425]: Session 23 logged out. Waiting for processes to exit.
Jan 29 11:49:24.184660 systemd[1]: Started sshd@23-10.0.0.26:22-10.0.0.1:51326.service - OpenSSH per-connection server daemon (10.0.0.1:51326).
Jan 29 11:49:24.185571 systemd-logind[1425]: Removed session 23.
Jan 29 11:49:24.212487 sshd[4268]: Accepted publickey for core from 10.0.0.1 port 51326 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44
Jan 29 11:49:24.213653 sshd[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:49:24.216926 systemd-logind[1425]: New session 24 of user core.
Jan 29 11:49:24.224542 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 29 11:49:26.006037 sshd[4268]: pam_unix(sshd:session): session closed for user core
Jan 29 11:49:26.013027 systemd[1]: sshd@23-10.0.0.26:22-10.0.0.1:51326.service: Deactivated successfully.
Jan 29 11:49:26.017827 systemd[1]: session-24.scope: Deactivated successfully.
Jan 29 11:49:26.018136 systemd[1]: session-24.scope: Consumed 1.712s CPU time.
Jan 29 11:49:26.019029 systemd-logind[1425]: Session 24 logged out. Waiting for processes to exit.
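[Editor's sketch] The var-lib-kubelet-pods-...\x2d... names in the mount-unit lines above are systemd's unit-name path escaping: "/" becomes "-", and bytes such as "-" (0x2d) and "~" (0x7e) are escaped as \xXX. A small Go sketch of the same transformation, assuming the github.com/coreos/go-systemd/v22/unit package:

```go
package main

import (
	"fmt"

	"github.com/coreos/go-systemd/v22/unit"
)

func main() {
	// A pod volume path of the shape the kubelet mounts for projected volumes.
	path := "/var/lib/kubelet/pods/9f2e5199-d72b-4dfe-a23f-f7425f64524d/volumes/kubernetes.io~projected/kube-api-access-9bjgk"
	// Prints the escaped stem of the .mount unit that systemd deactivates
	// when the kubelet unmounts the volume, matching the log lines above.
	fmt.Println(unit.UnitNamePathEscape(path) + ".mount")
}
```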
Jan 29 11:49:26.022974 kubelet[2461]: E0129 11:49:26.022489 2461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9f2e5199-d72b-4dfe-a23f-f7425f64524d" containerName="apply-sysctl-overwrites"
Jan 29 11:49:26.022974 kubelet[2461]: E0129 11:49:26.022520 2461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9f2e5199-d72b-4dfe-a23f-f7425f64524d" containerName="clean-cilium-state"
Jan 29 11:49:26.022974 kubelet[2461]: E0129 11:49:26.022528 2461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9f2e5199-d72b-4dfe-a23f-f7425f64524d" containerName="mount-cgroup"
Jan 29 11:49:26.022974 kubelet[2461]: E0129 11:49:26.022533 2461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="daa9d09d-3698-4121-906a-62ee5d21bf1c" containerName="cilium-operator"
Jan 29 11:49:26.022974 kubelet[2461]: E0129 11:49:26.022539 2461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9f2e5199-d72b-4dfe-a23f-f7425f64524d" containerName="mount-bpf-fs"
Jan 29 11:49:26.022974 kubelet[2461]: E0129 11:49:26.022544 2461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9f2e5199-d72b-4dfe-a23f-f7425f64524d" containerName="cilium-agent"
Jan 29 11:49:26.022974 kubelet[2461]: I0129 11:49:26.022569 2461 memory_manager.go:354] "RemoveStaleState removing state" podUID="daa9d09d-3698-4121-906a-62ee5d21bf1c" containerName="cilium-operator"
Jan 29 11:49:26.022974 kubelet[2461]: I0129 11:49:26.022574 2461 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f2e5199-d72b-4dfe-a23f-f7425f64524d" containerName="cilium-agent"
Jan 29 11:49:26.031749 systemd[1]: Started sshd@24-10.0.0.26:22-10.0.0.1:51330.service - OpenSSH per-connection server daemon (10.0.0.1:51330).
Jan 29 11:49:26.038568 systemd-logind[1425]: Removed session 24.
Jan 29 11:49:26.042264 systemd[1]: Created slice kubepods-burstable-podfb5f4447_5f8c_439a_a62e_4615044fdb83.slice - libcontainer container kubepods-burstable-podfb5f4447_5f8c_439a_a62e_4615044fdb83.slice.
Jan 29 11:49:26.049556 kubelet[2461]: W0129 11:49:26.049512 2461 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Jan 29 11:49:26.055427 kubelet[2461]: E0129 11:49:26.054996 2461 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Jan 29 11:49:26.070348 sshd[4281]: Accepted publickey for core from 10.0.0.1 port 51330 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44
Jan 29 11:49:26.071702 sshd[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:49:26.075038 systemd-logind[1425]: New session 25 of user core.
Jan 29 11:49:26.075849 kubelet[2461]: I0129 11:49:26.075797 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fb5f4447-5f8c-439a-a62e-4615044fdb83-cni-path\") pod \"cilium-wqjsg\" (UID: \"fb5f4447-5f8c-439a-a62e-4615044fdb83\") " pod="kube-system/cilium-wqjsg"
Jan 29 11:49:26.075849 kubelet[2461]: I0129 11:49:26.075837 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fb5f4447-5f8c-439a-a62e-4615044fdb83-cilium-ipsec-secrets\") pod \"cilium-wqjsg\" (UID: \"fb5f4447-5f8c-439a-a62e-4615044fdb83\") " pod="kube-system/cilium-wqjsg"
Jan 29 11:49:26.075945 kubelet[2461]: I0129 11:49:26.075860 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rqcg\" (UniqueName: \"kubernetes.io/projected/fb5f4447-5f8c-439a-a62e-4615044fdb83-kube-api-access-8rqcg\") pod \"cilium-wqjsg\" (UID: \"fb5f4447-5f8c-439a-a62e-4615044fdb83\") " pod="kube-system/cilium-wqjsg"
Jan 29 11:49:26.075945 kubelet[2461]: I0129 11:49:26.075888 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fb5f4447-5f8c-439a-a62e-4615044fdb83-cilium-config-path\") pod \"cilium-wqjsg\" (UID: \"fb5f4447-5f8c-439a-a62e-4615044fdb83\") " pod="kube-system/cilium-wqjsg"
Jan 29 11:49:26.075945 kubelet[2461]: I0129 11:49:26.075905 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fb5f4447-5f8c-439a-a62e-4615044fdb83-cilium-run\") pod \"cilium-wqjsg\" (UID: \"fb5f4447-5f8c-439a-a62e-4615044fdb83\") " pod="kube-system/cilium-wqjsg"
Jan 29 11:49:26.075945 kubelet[2461]: I0129 11:49:26.075920 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fb5f4447-5f8c-439a-a62e-4615044fdb83-bpf-maps\") pod \"cilium-wqjsg\" (UID: \"fb5f4447-5f8c-439a-a62e-4615044fdb83\") " pod="kube-system/cilium-wqjsg"
Jan 29 11:49:26.075945 kubelet[2461]: I0129 11:49:26.075937 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb5f4447-5f8c-439a-a62e-4615044fdb83-xtables-lock\") pod \"cilium-wqjsg\" (UID: \"fb5f4447-5f8c-439a-a62e-4615044fdb83\") " pod="kube-system/cilium-wqjsg"
Jan 29 11:49:26.076056 kubelet[2461]: I0129 11:49:26.075952 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb5f4447-5f8c-439a-a62e-4615044fdb83-lib-modules\") pod \"cilium-wqjsg\" (UID: \"fb5f4447-5f8c-439a-a62e-4615044fdb83\") " pod="kube-system/cilium-wqjsg"
Jan 29 11:49:26.076056 kubelet[2461]: I0129 11:49:26.076020 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fb5f4447-5f8c-439a-a62e-4615044fdb83-clustermesh-secrets\") pod \"cilium-wqjsg\" (UID: \"fb5f4447-5f8c-439a-a62e-4615044fdb83\") " pod="kube-system/cilium-wqjsg"
Jan 29 11:49:26.076322 kubelet[2461]: I0129 11:49:26.076098 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fb5f4447-5f8c-439a-a62e-4615044fdb83-host-proc-sys-kernel\") pod \"cilium-wqjsg\" (UID: \"fb5f4447-5f8c-439a-a62e-4615044fdb83\") " pod="kube-system/cilium-wqjsg"
Jan 29 11:49:26.076322 kubelet[2461]: I0129 11:49:26.076173 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fb5f4447-5f8c-439a-a62e-4615044fdb83-hostproc\") pod \"cilium-wqjsg\" (UID: \"fb5f4447-5f8c-439a-a62e-4615044fdb83\") " pod="kube-system/cilium-wqjsg"
Jan 29 11:49:26.076322 kubelet[2461]: I0129 11:49:26.076209 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb5f4447-5f8c-439a-a62e-4615044fdb83-etc-cni-netd\") pod \"cilium-wqjsg\" (UID: \"fb5f4447-5f8c-439a-a62e-4615044fdb83\") " pod="kube-system/cilium-wqjsg"
Jan 29 11:49:26.076322 kubelet[2461]: I0129 11:49:26.076225 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fb5f4447-5f8c-439a-a62e-4615044fdb83-host-proc-sys-net\") pod \"cilium-wqjsg\" (UID: \"fb5f4447-5f8c-439a-a62e-4615044fdb83\") " pod="kube-system/cilium-wqjsg"
Jan 29 11:49:26.076322 kubelet[2461]: I0129 11:49:26.076247 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fb5f4447-5f8c-439a-a62e-4615044fdb83-cilium-cgroup\") pod \"cilium-wqjsg\" (UID: \"fb5f4447-5f8c-439a-a62e-4615044fdb83\") " pod="kube-system/cilium-wqjsg"
Jan 29 11:49:26.076322 kubelet[2461]: I0129 11:49:26.076261 2461 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fb5f4447-5f8c-439a-a62e-4615044fdb83-hubble-tls\") pod \"cilium-wqjsg\" (UID: \"fb5f4447-5f8c-439a-a62e-4615044fdb83\") " pod="kube-system/cilium-wqjsg"
Jan 29 11:49:26.090564 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 29 11:49:26.139464 sshd[4281]: pam_unix(sshd:session): session closed for user core
Jan 29 11:49:26.151266 systemd[1]: sshd@24-10.0.0.26:22-10.0.0.1:51330.service: Deactivated successfully.
Jan 29 11:49:26.154102 systemd[1]: session-25.scope: Deactivated successfully.
Jan 29 11:49:26.155635 systemd-logind[1425]: Session 25 logged out. Waiting for processes to exit.
Jan 29 11:49:26.163709 systemd[1]: Started sshd@25-10.0.0.26:22-10.0.0.1:51332.service - OpenSSH per-connection server daemon (10.0.0.1:51332).
Jan 29 11:49:26.164898 systemd-logind[1425]: Removed session 25.
Jan 29 11:49:26.200458 sshd[4289]: Accepted publickey for core from 10.0.0.1 port 51332 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44
Jan 29 11:49:26.201830 sshd[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:49:26.206024 systemd-logind[1425]: New session 26 of user core.
Jan 29 11:49:26.220556 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 29 11:49:27.182381 kubelet[2461]: E0129 11:49:27.182335 2461 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Jan 29 11:49:27.183130 kubelet[2461]: E0129 11:49:27.182405 2461 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-wqjsg: failed to sync secret cache: timed out waiting for the condition
Jan 29 11:49:27.183209 kubelet[2461]: E0129 11:49:27.183191 2461 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fb5f4447-5f8c-439a-a62e-4615044fdb83-hubble-tls podName:fb5f4447-5f8c-439a-a62e-4615044fdb83 nodeName:}" failed. No retries permitted until 2025-01-29 11:49:27.683168175 +0000 UTC m=+80.470128663 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/fb5f4447-5f8c-439a-a62e-4615044fdb83-hubble-tls") pod "cilium-wqjsg" (UID: "fb5f4447-5f8c-439a-a62e-4615044fdb83") : failed to sync secret cache: timed out waiting for the condition
Jan 29 11:49:27.363344 kubelet[2461]: E0129 11:49:27.363227 2461 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 11:49:27.847930 kubelet[2461]: E0129 11:49:27.847645 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:49:27.848175 containerd[1439]: time="2025-01-29T11:49:27.848119410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wqjsg,Uid:fb5f4447-5f8c-439a-a62e-4615044fdb83,Namespace:kube-system,Attempt:0,}"
Jan 29 11:49:27.874340 containerd[1439]: time="2025-01-29T11:49:27.874106319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:49:27.874340 containerd[1439]: time="2025-01-29T11:49:27.874159878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:49:27.874340 containerd[1439]: time="2025-01-29T11:49:27.874171238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:49:27.874340 containerd[1439]: time="2025-01-29T11:49:27.874240237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:49:27.893612 systemd[1]: Started cri-containerd-b8c027de7c393e6e5e28332f8830db1d387539fc641f3893eed39fdb5b7d771b.scope - libcontainer container b8c027de7c393e6e5e28332f8830db1d387539fc641f3893eed39fdb5b7d771b.
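[Editor's sketch] The "durationBeforeRetry 500ms" in the nestedpendingoperations entry above is the first step of an exponential backoff on the failed volume mount: each retry waits longer than the last. A generic Go sketch of the pattern, not the kubelet's actual code; the 500ms initial delay matches the log, while the doubling factor and 2-minute cap are assumptions for illustration:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// mountWithBackoff retries a failing mount with exponentially growing delays.
func mountWithBackoff(mount func() error) error {
	delay := 500 * time.Millisecond
	const maxDelay = 2 * time.Minute
	for attempt := 1; ; attempt++ {
		err := mount()
		if err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; no retries permitted for %s\n", attempt, err, delay)
		time.Sleep(delay)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
}

func main() {
	tries := 0
	// Simulate the secret cache catching up after a couple of failures,
	// as it does for hubble-tls later in this log.
	_ = mountWithBackoff(func() error {
		if tries++; tries < 3 {
			return errors.New("failed to sync secret cache")
		}
		return nil
	})
}
```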
Jan 29 11:49:27.924551 containerd[1439]: time="2025-01-29T11:49:27.924491795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wqjsg,Uid:fb5f4447-5f8c-439a-a62e-4615044fdb83,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8c027de7c393e6e5e28332f8830db1d387539fc641f3893eed39fdb5b7d771b\""
Jan 29 11:49:27.925910 kubelet[2461]: E0129 11:49:27.925790 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:49:27.931982 containerd[1439]: time="2025-01-29T11:49:27.931887872Z" level=info msg="CreateContainer within sandbox \"b8c027de7c393e6e5e28332f8830db1d387539fc641f3893eed39fdb5b7d771b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 11:49:27.942719 containerd[1439]: time="2025-01-29T11:49:27.942677551Z" level=info msg="CreateContainer within sandbox \"b8c027de7c393e6e5e28332f8830db1d387539fc641f3893eed39fdb5b7d771b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9462f16be6f4bbd015c0491a34f3697a29c6add44c7572ded2caa48b09381012\""
Jan 29 11:49:27.943581 containerd[1439]: time="2025-01-29T11:49:27.943545381Z" level=info msg="StartContainer for \"9462f16be6f4bbd015c0491a34f3697a29c6add44c7572ded2caa48b09381012\""
Jan 29 11:49:27.983650 systemd[1]: Started cri-containerd-9462f16be6f4bbd015c0491a34f3697a29c6add44c7572ded2caa48b09381012.scope - libcontainer container 9462f16be6f4bbd015c0491a34f3697a29c6add44c7572ded2caa48b09381012.
Jan 29 11:49:28.006532 containerd[1439]: time="2025-01-29T11:49:28.006484002Z" level=info msg="StartContainer for \"9462f16be6f4bbd015c0491a34f3697a29c6add44c7572ded2caa48b09381012\" returns successfully"
Jan 29 11:49:28.012152 systemd[1]: cri-containerd-9462f16be6f4bbd015c0491a34f3697a29c6add44c7572ded2caa48b09381012.scope: Deactivated successfully.
Jan 29 11:49:28.038164 containerd[1439]: time="2025-01-29T11:49:28.038107961Z" level=info msg="shim disconnected" id=9462f16be6f4bbd015c0491a34f3697a29c6add44c7572ded2caa48b09381012 namespace=k8s.io
Jan 29 11:49:28.038164 containerd[1439]: time="2025-01-29T11:49:28.038160160Z" level=warning msg="cleaning up after shim disconnected" id=9462f16be6f4bbd015c0491a34f3697a29c6add44c7572ded2caa48b09381012 namespace=k8s.io
Jan 29 11:49:28.038164 containerd[1439]: time="2025-01-29T11:49:28.038168920Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:49:28.312653 kubelet[2461]: E0129 11:49:28.312539 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:49:28.528888 kubelet[2461]: E0129 11:49:28.528858 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:49:28.531662 containerd[1439]: time="2025-01-29T11:49:28.531524466Z" level=info msg="CreateContainer within sandbox \"b8c027de7c393e6e5e28332f8830db1d387539fc641f3893eed39fdb5b7d771b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 11:49:28.548440 containerd[1439]: time="2025-01-29T11:49:28.548294416Z" level=info msg="CreateContainer within sandbox \"b8c027de7c393e6e5e28332f8830db1d387539fc641f3893eed39fdb5b7d771b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"651f97258e296596d4cd8f5dbdebb951859c80e5d39b3c3fa95225793788ffce\""
Jan 29 11:49:28.549222 containerd[1439]: time="2025-01-29T11:49:28.548982649Z" level=info msg="StartContainer for \"651f97258e296596d4cd8f5dbdebb951859c80e5d39b3c3fa95225793788ffce\""
Jan 29 11:49:28.576585 systemd[1]: Started cri-containerd-651f97258e296596d4cd8f5dbdebb951859c80e5d39b3c3fa95225793788ffce.scope - libcontainer container 651f97258e296596d4cd8f5dbdebb951859c80e5d39b3c3fa95225793788ffce.
Jan 29 11:49:28.598381 containerd[1439]: time="2025-01-29T11:49:28.598335027Z" level=info msg="StartContainer for \"651f97258e296596d4cd8f5dbdebb951859c80e5d39b3c3fa95225793788ffce\" returns successfully"
Jan 29 11:49:28.606105 systemd[1]: cri-containerd-651f97258e296596d4cd8f5dbdebb951859c80e5d39b3c3fa95225793788ffce.scope: Deactivated successfully.
Jan 29 11:49:28.632237 containerd[1439]: time="2025-01-29T11:49:28.632178364Z" level=info msg="shim disconnected" id=651f97258e296596d4cd8f5dbdebb951859c80e5d39b3c3fa95225793788ffce namespace=k8s.io
Jan 29 11:49:28.632237 containerd[1439]: time="2025-01-29T11:49:28.632232443Z" level=warning msg="cleaning up after shim disconnected" id=651f97258e296596d4cd8f5dbdebb951859c80e5d39b3c3fa95225793788ffce namespace=k8s.io
Jan 29 11:49:28.632237 containerd[1439]: time="2025-01-29T11:49:28.632241763Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:49:28.691690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount981208366.mount: Deactivated successfully.
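[Editor's sketch] mount-cgroup, apply-sysctl-overwrites and the containers that follow are the pod's init containers: each runs to completion before the next starts, which is why every cri-containerd-*.scope above deactivates moments after its StartContainer returns. A sketch of observing that sequence through the Kubernetes API, assuming a standard client-go clientset running in-cluster:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "cilium-wqjsg", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Init containers report in order; a non-zero exit code here would block
	// the pod from ever reaching its regular containers.
	for _, s := range pod.Status.InitContainerStatuses {
		switch {
		case s.State.Terminated != nil:
			fmt.Printf("%s: exited %d\n", s.Name, s.State.Terminated.ExitCode)
		case s.State.Running != nil:
			fmt.Printf("%s: running\n", s.Name)
		default:
			fmt.Printf("%s: waiting\n", s.Name)
		}
	}
}
```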
Jan 29 11:49:29.095423 kubelet[2461]: I0129 11:49:29.095352 2461 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T11:49:29Z","lastTransitionTime":"2025-01-29T11:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 29 11:49:29.532229 kubelet[2461]: E0129 11:49:29.531841 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:49:29.535794 containerd[1439]: time="2025-01-29T11:49:29.535751877Z" level=info msg="CreateContainer within sandbox \"b8c027de7c393e6e5e28332f8830db1d387539fc641f3893eed39fdb5b7d771b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 11:49:29.546052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1537121415.mount: Deactivated successfully.
Jan 29 11:49:29.550737 containerd[1439]: time="2025-01-29T11:49:29.550689740Z" level=info msg="CreateContainer within sandbox \"b8c027de7c393e6e5e28332f8830db1d387539fc641f3893eed39fdb5b7d771b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4ed40044b7920daa7faaaa7fce3294d855ad14ddee6587ce133406f9c3a65d34\""
Jan 29 11:49:29.551407 containerd[1439]: time="2025-01-29T11:49:29.551366293Z" level=info msg="StartContainer for \"4ed40044b7920daa7faaaa7fce3294d855ad14ddee6587ce133406f9c3a65d34\""
Jan 29 11:49:29.581585 systemd[1]: Started cri-containerd-4ed40044b7920daa7faaaa7fce3294d855ad14ddee6587ce133406f9c3a65d34.scope - libcontainer container 4ed40044b7920daa7faaaa7fce3294d855ad14ddee6587ce133406f9c3a65d34.
Jan 29 11:49:29.604187 containerd[1439]: time="2025-01-29T11:49:29.604137170Z" level=info msg="StartContainer for \"4ed40044b7920daa7faaaa7fce3294d855ad14ddee6587ce133406f9c3a65d34\" returns successfully"
Jan 29 11:49:29.605010 systemd[1]: cri-containerd-4ed40044b7920daa7faaaa7fce3294d855ad14ddee6587ce133406f9c3a65d34.scope: Deactivated successfully.
Jan 29 11:49:29.625483 containerd[1439]: time="2025-01-29T11:49:29.625431935Z" level=info msg="shim disconnected" id=4ed40044b7920daa7faaaa7fce3294d855ad14ddee6587ce133406f9c3a65d34 namespace=k8s.io
Jan 29 11:49:29.625483 containerd[1439]: time="2025-01-29T11:49:29.625481855Z" level=warning msg="cleaning up after shim disconnected" id=4ed40044b7920daa7faaaa7fce3294d855ad14ddee6587ce133406f9c3a65d34 namespace=k8s.io
Jan 29 11:49:29.625483 containerd[1439]: time="2025-01-29T11:49:29.625489894Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:49:29.691776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ed40044b7920daa7faaaa7fce3294d855ad14ddee6587ce133406f9c3a65d34-rootfs.mount: Deactivated successfully.
Jan 29 11:49:30.536289 kubelet[2461]: E0129 11:49:30.536256 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:49:30.538591 containerd[1439]: time="2025-01-29T11:49:30.538135015Z" level=info msg="CreateContainer within sandbox \"b8c027de7c393e6e5e28332f8830db1d387539fc641f3893eed39fdb5b7d771b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 11:49:30.550582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3921163407.mount: Deactivated successfully.
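[Editor's sketch] The condition JSON in the "Node became not ready" entry above is a serialized core/v1 NodeCondition. A sketch of the same condition built with k8s.io/api types, using the timestamps and message from the log line:

```go
package sketch

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// notReady mirrors the condition the kubelet set while the CNI plugin
// (the freshly restarted Cilium agent) was not yet initialized.
var notReady = corev1.NodeCondition{
	Type:               corev1.NodeReady,
	Status:             corev1.ConditionFalse,
	LastHeartbeatTime:  metav1.NewTime(time.Date(2025, 1, 29, 11, 49, 29, 0, time.UTC)),
	LastTransitionTime: metav1.NewTime(time.Date(2025, 1, 29, 11, 49, 29, 0, time.UTC)),
	Reason:             "KubeletNotReady",
	Message:            "container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized",
}
```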
Jan 29 11:49:30.553033 containerd[1439]: time="2025-01-29T11:49:30.552983134Z" level=info msg="CreateContainer within sandbox \"b8c027de7c393e6e5e28332f8830db1d387539fc641f3893eed39fdb5b7d771b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"41044b46e507dfbd4fca336be02442ab9a870cd360e6e99e295b401cf512a5f3\""
Jan 29 11:49:30.553716 containerd[1439]: time="2025-01-29T11:49:30.553678008Z" level=info msg="StartContainer for \"41044b46e507dfbd4fca336be02442ab9a870cd360e6e99e295b401cf512a5f3\""
Jan 29 11:49:30.585578 systemd[1]: Started cri-containerd-41044b46e507dfbd4fca336be02442ab9a870cd360e6e99e295b401cf512a5f3.scope - libcontainer container 41044b46e507dfbd4fca336be02442ab9a870cd360e6e99e295b401cf512a5f3.
Jan 29 11:49:30.605257 systemd[1]: cri-containerd-41044b46e507dfbd4fca336be02442ab9a870cd360e6e99e295b401cf512a5f3.scope: Deactivated successfully.
Jan 29 11:49:30.622956 containerd[1439]: time="2025-01-29T11:49:30.622906241Z" level=info msg="StartContainer for \"41044b46e507dfbd4fca336be02442ab9a870cd360e6e99e295b401cf512a5f3\" returns successfully"
Jan 29 11:49:30.635910 containerd[1439]: time="2025-01-29T11:49:30.626108295Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb5f4447_5f8c_439a_a62e_4615044fdb83.slice/cri-containerd-41044b46e507dfbd4fca336be02442ab9a870cd360e6e99e295b401cf512a5f3.scope/memory.events\": no such file or directory"
Jan 29 11:49:30.643043 containerd[1439]: time="2025-01-29T11:49:30.642901197Z" level=info msg="shim disconnected" id=41044b46e507dfbd4fca336be02442ab9a870cd360e6e99e295b401cf512a5f3 namespace=k8s.io
Jan 29 11:49:30.643043 containerd[1439]: time="2025-01-29T11:49:30.642962437Z" level=warning msg="cleaning up after shim disconnected" id=41044b46e507dfbd4fca336be02442ab9a870cd360e6e99e295b401cf512a5f3 namespace=k8s.io
Jan 29 11:49:30.643043 containerd[1439]: time="2025-01-29T11:49:30.642970997Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:49:30.691884 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41044b46e507dfbd4fca336be02442ab9a870cd360e6e99e295b401cf512a5f3-rootfs.mount: Deactivated successfully.
Jan 29 11:49:31.540370 kubelet[2461]: E0129 11:49:31.540187 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:49:31.542770 containerd[1439]: time="2025-01-29T11:49:31.542127061Z" level=info msg="CreateContainer within sandbox \"b8c027de7c393e6e5e28332f8830db1d387539fc641f3893eed39fdb5b7d771b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 11:49:31.557942 containerd[1439]: time="2025-01-29T11:49:31.557894987Z" level=info msg="CreateContainer within sandbox \"b8c027de7c393e6e5e28332f8830db1d387539fc641f3893eed39fdb5b7d771b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2de8dfcf41b682771942e77b946200eb1e0cc3a017fde875da3e5e1a38eade1c\""
Jan 29 11:49:31.558768 containerd[1439]: time="2025-01-29T11:49:31.558700901Z" level=info msg="StartContainer for \"2de8dfcf41b682771942e77b946200eb1e0cc3a017fde875da3e5e1a38eade1c\""
Jan 29 11:49:31.601599 systemd[1]: Started cri-containerd-2de8dfcf41b682771942e77b946200eb1e0cc3a017fde875da3e5e1a38eade1c.scope - libcontainer container 2de8dfcf41b682771942e77b946200eb1e0cc3a017fde875da3e5e1a38eade1c.
Jan 29 11:49:31.631479 containerd[1439]: time="2025-01-29T11:49:31.631432734Z" level=info msg="StartContainer for \"2de8dfcf41b682771942e77b946200eb1e0cc3a017fde875da3e5e1a38eade1c\" returns successfully"
Jan 29 11:49:31.904430 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 29 11:49:32.544763 kubelet[2461]: E0129 11:49:32.544722 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:49:32.560339 kubelet[2461]: I0129 11:49:32.560274 2461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wqjsg" podStartSLOduration=6.560262829 podStartE2EDuration="6.560262829s" podCreationTimestamp="2025-01-29 11:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:49:32.559236476 +0000 UTC m=+85.346196964" watchObservedRunningTime="2025-01-29 11:49:32.560262829 +0000 UTC m=+85.347223317"
Jan 29 11:49:33.849392 kubelet[2461]: E0129 11:49:33.849331 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:49:34.616233 systemd-networkd[1383]: lxc_health: Link UP
Jan 29 11:49:34.628098 systemd-networkd[1383]: lxc_health: Gained carrier
Jan 29 11:49:35.853712 kubelet[2461]: E0129 11:49:35.853663 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:49:35.883674 systemd-networkd[1383]: lxc_health: Gained IPv6LL
Jan 29 11:49:36.551561 kubelet[2461]: E0129 11:49:36.551521 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:49:36.726836 kubelet[2461]: E0129 11:49:36.726788 2461 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:39334->127.0.0.1:45335: write tcp 127.0.0.1:39334->127.0.0.1:45335: write: broken pipe
Jan 29 11:49:37.553406 kubelet[2461]: E0129 11:49:37.553335 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:49:39.312960 kubelet[2461]: E0129 11:49:39.312927 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:49:40.935046 sshd[4289]: pam_unix(sshd:session): session closed for user core
Jan 29 11:49:40.938491 systemd[1]: sshd@25-10.0.0.26:22-10.0.0.1:51332.service: Deactivated successfully.
Jan 29 11:49:40.940249 systemd[1]: session-26.scope: Deactivated successfully.
Jan 29 11:49:40.941698 systemd-logind[1425]: Session 26 logged out. Waiting for processes to exit.
Jan 29 11:49:40.942512 systemd-logind[1425]: Removed session 26.
Jan 29 11:49:41.313246 kubelet[2461]: E0129 11:49:41.312869 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
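[Editor's sketch] The podStartSLOduration=6.560262829 reported by the startup-latency tracker above is simply observedRunningTime minus podCreationTimestamp, both printed in the same entry. A tiny Go sketch reproducing that arithmetic from the log's own timestamps:

```go
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	// Layout matching the "2025-01-29 11:49:26 +0000 UTC" form in the log entry.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-01-29 11:49:26 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	running, err := time.Parse(layout, "2025-01-29 11:49:32.560262829 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(running.Sub(created)) // 6.560262829s, the logged SLO duration
}
```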