Jan 17 11:55:21.926131 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 17 11:55:21.926152 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 17 10:42:25 -00 2025
Jan 17 11:55:21.926162 kernel: KASLR enabled
Jan 17 11:55:21.926168 kernel: efi: EFI v2.7 by EDK II
Jan 17 11:55:21.926173 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jan 17 11:55:21.926179 kernel: random: crng init done
Jan 17 11:55:21.926186 kernel: ACPI: Early table checksum verification disabled
Jan 17 11:55:21.926192 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jan 17 11:55:21.926198 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 17 11:55:21.926206 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 11:55:21.926217 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 11:55:21.926223 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 11:55:21.926229 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 11:55:21.926235 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 11:55:21.926242 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 11:55:21.926250 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 11:55:21.926257 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 11:55:21.926263 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 11:55:21.926269 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 17 11:55:21.926275 kernel: NUMA: Failed to initialise from firmware
Jan 17 11:55:21.926282 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 17 11:55:21.926288 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 17 11:55:21.926294 kernel: Zone ranges:
Jan 17 11:55:21.926307 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 17 11:55:21.926314 kernel: DMA32 empty
Jan 17 11:55:21.926322 kernel: Normal empty
Jan 17 11:55:21.926328 kernel: Movable zone start for each node
Jan 17 11:55:21.926334 kernel: Early memory node ranges
Jan 17 11:55:21.926340 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 17 11:55:21.926347 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 17 11:55:21.926353 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 17 11:55:21.926359 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 17 11:55:21.926365 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 17 11:55:21.926372 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 17 11:55:21.926378 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 17 11:55:21.926384 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 17 11:55:21.926390 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 17 11:55:21.926398 kernel: psci: probing for conduit method from ACPI.
Jan 17 11:55:21.926404 kernel: psci: PSCIv1.1 detected in firmware.
Jan 17 11:55:21.926411 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 17 11:55:21.926419 kernel: psci: Trusted OS migration not required
Jan 17 11:55:21.926426 kernel: psci: SMC Calling Convention v1.1
Jan 17 11:55:21.926433 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 17 11:55:21.926441 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 17 11:55:21.926448 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 17 11:55:21.926454 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 17 11:55:21.926461 kernel: Detected PIPT I-cache on CPU0
Jan 17 11:55:21.926468 kernel: CPU features: detected: GIC system register CPU interface
Jan 17 11:55:21.926475 kernel: CPU features: detected: Hardware dirty bit management
Jan 17 11:55:21.926481 kernel: CPU features: detected: Spectre-v4
Jan 17 11:55:21.926488 kernel: CPU features: detected: Spectre-BHB
Jan 17 11:55:21.926494 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 17 11:55:21.926501 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 17 11:55:21.926509 kernel: CPU features: detected: ARM erratum 1418040
Jan 17 11:55:21.926515 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 17 11:55:21.926522 kernel: alternatives: applying boot alternatives
Jan 17 11:55:21.926530 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3
Jan 17 11:55:21.926537 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 17 11:55:21.926543 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 11:55:21.926550 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 11:55:21.926557 kernel: Fallback order for Node 0: 0
Jan 17 11:55:21.926563 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 17 11:55:21.926570 kernel: Policy zone: DMA
Jan 17 11:55:21.926577 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 11:55:21.926584 kernel: software IO TLB: area num 4.
Jan 17 11:55:21.926591 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 17 11:55:21.926598 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved)
Jan 17 11:55:21.926607 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 17 11:55:21.926614 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 11:55:21.926628 kernel: rcu: RCU event tracing is enabled.
Jan 17 11:55:21.926638 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 17 11:55:21.926647 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 11:55:21.926656 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 11:55:21.926663 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 11:55:21.926669 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 17 11:55:21.926676 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 17 11:55:21.926684 kernel: GICv3: 256 SPIs implemented
Jan 17 11:55:21.926691 kernel: GICv3: 0 Extended SPIs implemented
Jan 17 11:55:21.926698 kernel: Root IRQ handler: gic_handle_irq
Jan 17 11:55:21.926705 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 17 11:55:21.926712 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 17 11:55:21.926719 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 17 11:55:21.926725 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 17 11:55:21.926732 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 17 11:55:21.926739 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 17 11:55:21.926746 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 17 11:55:21.926753 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 11:55:21.926761 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 17 11:55:21.926768 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 17 11:55:21.926775 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 17 11:55:21.926781 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 17 11:55:21.926788 kernel: arm-pv: using stolen time PV
Jan 17 11:55:21.926795 kernel: Console: colour dummy device 80x25
Jan 17 11:55:21.926802 kernel: ACPI: Core revision 20230628
Jan 17 11:55:21.926809 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 17 11:55:21.926816 kernel: pid_max: default: 32768 minimum: 301
Jan 17 11:55:21.926823 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 11:55:21.926831 kernel: landlock: Up and running.
Jan 17 11:55:21.926837 kernel: SELinux: Initializing.
Jan 17 11:55:21.926844 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 11:55:21.926851 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 11:55:21.926858 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 11:55:21.926865 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 11:55:21.926872 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 11:55:21.926879 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 11:55:21.926886 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 17 11:55:21.926894 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 17 11:55:21.926901 kernel: Remapping and enabling EFI services.
Jan 17 11:55:21.926907 kernel: smp: Bringing up secondary CPUs ...
Jan 17 11:55:21.926914 kernel: Detected PIPT I-cache on CPU1
Jan 17 11:55:21.926921 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 17 11:55:21.926928 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 17 11:55:21.926935 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 17 11:55:21.926941 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 17 11:55:21.926948 kernel: Detected PIPT I-cache on CPU2
Jan 17 11:55:21.926955 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 17 11:55:21.926963 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 17 11:55:21.926970 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 17 11:55:21.926981 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 17 11:55:21.926990 kernel: Detected PIPT I-cache on CPU3
Jan 17 11:55:21.926997 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 17 11:55:21.927005 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 17 11:55:21.927012 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 17 11:55:21.927019 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 17 11:55:21.927026 kernel: smp: Brought up 1 node, 4 CPUs
Jan 17 11:55:21.927035 kernel: SMP: Total of 4 processors activated.
Jan 17 11:55:21.927042 kernel: CPU features: detected: 32-bit EL0 Support
Jan 17 11:55:21.927049 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 17 11:55:21.927057 kernel: CPU features: detected: Common not Private translations
Jan 17 11:55:21.927064 kernel: CPU features: detected: CRC32 instructions
Jan 17 11:55:21.927071 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 17 11:55:21.927078 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 17 11:55:21.927085 kernel: CPU features: detected: LSE atomic instructions
Jan 17 11:55:21.927094 kernel: CPU features: detected: Privileged Access Never
Jan 17 11:55:21.927101 kernel: CPU features: detected: RAS Extension Support
Jan 17 11:55:21.927108 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 17 11:55:21.927115 kernel: CPU: All CPU(s) started at EL1
Jan 17 11:55:21.927122 kernel: alternatives: applying system-wide alternatives
Jan 17 11:55:21.927129 kernel: devtmpfs: initialized
Jan 17 11:55:21.927137 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 11:55:21.927144 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 17 11:55:21.927151 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 11:55:21.927160 kernel: SMBIOS 3.0.0 present.
Jan 17 11:55:21.927167 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jan 17 11:55:21.927174 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 11:55:21.927181 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 17 11:55:21.927188 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 17 11:55:21.927196 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 17 11:55:21.927203 kernel: audit: initializing netlink subsys (disabled)
Jan 17 11:55:21.927210 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Jan 17 11:55:21.927217 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 11:55:21.927226 kernel: cpuidle: using governor menu
Jan 17 11:55:21.927233 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 17 11:55:21.927240 kernel: ASID allocator initialised with 32768 entries
Jan 17 11:55:21.927248 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 11:55:21.927255 kernel: Serial: AMBA PL011 UART driver
Jan 17 11:55:21.927262 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 17 11:55:21.927269 kernel: Modules: 0 pages in range for non-PLT usage
Jan 17 11:55:21.927276 kernel: Modules: 509040 pages in range for PLT usage
Jan 17 11:55:21.927307 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 11:55:21.927317 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 11:55:21.927325 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 17 11:55:21.927332 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 17 11:55:21.927339 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 11:55:21.927346 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 11:55:21.927354 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 17 11:55:21.927361 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 17 11:55:21.927368 kernel: ACPI: Added _OSI(Module Device)
Jan 17 11:55:21.927375 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 11:55:21.927383 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 17 11:55:21.927391 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 11:55:21.927398 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 11:55:21.927405 kernel: ACPI: Interpreter enabled
Jan 17 11:55:21.927412 kernel: ACPI: Using GIC for interrupt routing
Jan 17 11:55:21.927419 kernel: ACPI: MCFG table detected, 1 entries
Jan 17 11:55:21.927426 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 17 11:55:21.927434 kernel: printk: console [ttyAMA0] enabled
Jan 17 11:55:21.927441 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 11:55:21.927574 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 11:55:21.927675 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 17 11:55:21.927743 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 17 11:55:21.927807 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 17 11:55:21.927869 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 17 11:55:21.927879 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 17 11:55:21.927886 kernel: PCI host bridge to bus 0000:00
Jan 17 11:55:21.927960 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 17 11:55:21.928018 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 17 11:55:21.928076 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 17 11:55:21.928133 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 11:55:21.928214 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 17 11:55:21.928295 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 17 11:55:21.928380 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 17 11:55:21.928448 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 17 11:55:21.928513 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 17 11:55:21.928580 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 17 11:55:21.928761 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 17 11:55:21.928832 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 17 11:55:21.928890 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 17 11:55:21.928950 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 17 11:55:21.929006 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 17 11:55:21.929015 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 17 11:55:21.929023 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 17 11:55:21.929030 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 17 11:55:21.929037 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 17 11:55:21.929045 kernel: iommu: Default domain type: Translated
Jan 17 11:55:21.929052 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 17 11:55:21.929061 kernel: efivars: Registered efivars operations
Jan 17 11:55:21.929068 kernel: vgaarb: loaded
Jan 17 11:55:21.929076 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 17 11:55:21.929083 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 11:55:21.929090 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 11:55:21.929097 kernel: pnp: PnP ACPI init
Jan 17 11:55:21.929167 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 17 11:55:21.929178 kernel: pnp: PnP ACPI: found 1 devices
Jan 17 11:55:21.929186 kernel: NET: Registered PF_INET protocol family
Jan 17 11:55:21.929195 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 11:55:21.929203 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 11:55:21.929210 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 11:55:21.929217 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 11:55:21.929225 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 17 11:55:21.929232 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 17 11:55:21.929239 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 11:55:21.929247 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 11:55:21.929254 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 11:55:21.929263 kernel: PCI: CLS 0 bytes, default 64
Jan 17 11:55:21.929270 kernel: kvm [1]: HYP mode not available
Jan 17 11:55:21.929277 kernel: Initialise system trusted keyrings
Jan 17 11:55:21.929284 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 17 11:55:21.929291 kernel: Key type asymmetric registered
Jan 17 11:55:21.929299 kernel: Asymmetric key parser 'x509' registered
Jan 17 11:55:21.929313 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 17 11:55:21.929321 kernel: io scheduler mq-deadline registered
Jan 17 11:55:21.929328 kernel: io scheduler kyber registered
Jan 17 11:55:21.929337 kernel: io scheduler bfq registered
Jan 17 11:55:21.929345 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 17 11:55:21.929352 kernel: ACPI: button: Power Button [PWRB]
Jan 17 11:55:21.929360 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 17 11:55:21.929430 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 17 11:55:21.929440 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 11:55:21.929447 kernel: thunder_xcv, ver 1.0
Jan 17 11:55:21.929454 kernel: thunder_bgx, ver 1.0
Jan 17 11:55:21.929467 kernel: nicpf, ver 1.0
Jan 17 11:55:21.929476 kernel: nicvf, ver 1.0
Jan 17 11:55:21.929549 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 17 11:55:21.929612 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-17T11:55:21 UTC (1737114921)
Jan 17 11:55:21.929650 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 17 11:55:21.929658 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 17 11:55:21.929665 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 17 11:55:21.929673 kernel: watchdog: Hard watchdog permanently disabled
Jan 17 11:55:21.929680 kernel: NET: Registered PF_INET6 protocol family
Jan 17 11:55:21.929690 kernel: Segment Routing with IPv6
Jan 17 11:55:21.929697 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 11:55:21.929705 kernel: NET: Registered PF_PACKET protocol family
Jan 17 11:55:21.929712 kernel: Key type dns_resolver registered
Jan 17 11:55:21.929719 kernel: registered taskstats version 1
Jan 17 11:55:21.929726 kernel: Loading compiled-in X.509 certificates
Jan 17 11:55:21.929733 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e5b890cba32c3e1c766d9a9b821ee4d2154ffee7'
Jan 17 11:55:21.929740 kernel: Key type .fscrypt registered
Jan 17 11:55:21.929747 kernel: Key type fscrypt-provisioning registered
Jan 17 11:55:21.929756 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 11:55:21.929764 kernel: ima: Allocated hash algorithm: sha1
Jan 17 11:55:21.929771 kernel: ima: No architecture policies found
Jan 17 11:55:21.929778 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 17 11:55:21.929785 kernel: clk: Disabling unused clocks
Jan 17 11:55:21.929792 kernel: Freeing unused kernel memory: 39360K
Jan 17 11:55:21.929799 kernel: Run /init as init process
Jan 17 11:55:21.929807 kernel: with arguments:
Jan 17 11:55:21.929814 kernel: /init
Jan 17 11:55:21.929823 kernel: with environment:
Jan 17 11:55:21.929830 kernel: HOME=/
Jan 17 11:55:21.929837 kernel: TERM=linux
Jan 17 11:55:21.929844 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 17 11:55:21.929853 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 11:55:21.929862 systemd[1]: Detected virtualization kvm.
Jan 17 11:55:21.929870 systemd[1]: Detected architecture arm64.
Jan 17 11:55:21.929879 systemd[1]: Running in initrd.
Jan 17 11:55:21.929886 systemd[1]: No hostname configured, using default hostname.
Jan 17 11:55:21.929894 systemd[1]: Hostname set to <localhost>.
Jan 17 11:55:21.929902 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 11:55:21.929909 systemd[1]: Queued start job for default target initrd.target.
Jan 17 11:55:21.929917 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 11:55:21.929925 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 11:55:21.929933 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 11:55:21.929943 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 11:55:21.929951 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 11:55:21.929959 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 11:55:21.929968 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 11:55:21.929977 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 11:55:21.929984 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 11:55:21.929992 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 11:55:21.930002 systemd[1]: Reached target paths.target - Path Units.
Jan 17 11:55:21.930023 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 11:55:21.930031 systemd[1]: Reached target swap.target - Swaps.
Jan 17 11:55:21.930038 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 11:55:21.930046 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 11:55:21.930054 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 11:55:21.930062 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 11:55:21.930069 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 11:55:21.930077 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 11:55:21.930087 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 11:55:21.930095 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 11:55:21.930102 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 11:55:21.930110 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 11:55:21.930118 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 11:55:21.930126 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 11:55:21.930134 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 11:55:21.930141 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 11:55:21.930151 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 11:55:21.930159 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 11:55:21.930166 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 11:55:21.930174 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 11:55:21.930182 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 11:55:21.930211 systemd-journald[237]: Collecting audit messages is disabled.
Jan 17 11:55:21.930232 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 11:55:21.930241 systemd-journald[237]: Journal started
Jan 17 11:55:21.930260 systemd-journald[237]: Runtime Journal (/run/log/journal/f0739f62cded485b9828ed0275837748) is 5.9M, max 47.3M, 41.4M free.
Jan 17 11:55:21.933394 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 11:55:21.922507 systemd-modules-load[239]: Inserted module 'overlay'
Jan 17 11:55:21.937158 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 11:55:21.938660 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 11:55:21.938694 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 11:55:21.941668 kernel: Bridge firewalling registered
Jan 17 11:55:21.941904 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jan 17 11:55:21.942541 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 11:55:21.944255 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 11:55:21.952795 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 11:55:21.954512 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 11:55:21.957826 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 11:55:21.960101 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 11:55:21.963055 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 11:55:21.964231 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 11:55:21.966774 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 11:55:21.974517 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 11:55:21.977357 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 11:55:21.979398 dracut-cmdline[274]: dracut-dracut-053
Jan 17 11:55:21.980640 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3
Jan 17 11:55:22.011466 systemd-resolved[287]: Positive Trust Anchors:
Jan 17 11:55:22.011482 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 11:55:22.011514 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 11:55:22.016105 systemd-resolved[287]: Defaulting to hostname 'linux'.
Jan 17 11:55:22.017060 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 11:55:22.020930 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 11:55:22.058650 kernel: SCSI subsystem initialized
Jan 17 11:55:22.062631 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 11:55:22.074638 kernel: iscsi: registered transport (tcp)
Jan 17 11:55:22.085658 kernel: iscsi: registered transport (qla4xxx)
Jan 17 11:55:22.085700 kernel: QLogic iSCSI HBA Driver
Jan 17 11:55:22.128550 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 11:55:22.134798 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 11:55:22.152096 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 11:55:22.152155 kernel: device-mapper: uevent: version 1.0.3
Jan 17 11:55:22.153160 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 11:55:22.202679 kernel: raid6: neonx8 gen() 15785 MB/s
Jan 17 11:55:22.219654 kernel: raid6: neonx4 gen() 15647 MB/s
Jan 17 11:55:22.236648 kernel: raid6: neonx2 gen() 13183 MB/s
Jan 17 11:55:22.253645 kernel: raid6: neonx1 gen() 10491 MB/s
Jan 17 11:55:22.270655 kernel: raid6: int64x8 gen() 6950 MB/s
Jan 17 11:55:22.287643 kernel: raid6: int64x4 gen() 7343 MB/s
Jan 17 11:55:22.304643 kernel: raid6: int64x2 gen() 6124 MB/s
Jan 17 11:55:22.321810 kernel: raid6: int64x1 gen() 5056 MB/s
Jan 17 11:55:22.321825 kernel: raid6: using algorithm neonx8 gen() 15785 MB/s
Jan 17 11:55:22.339748 kernel: raid6: .... xor() 11928 MB/s, rmw enabled
Jan 17 11:55:22.339764 kernel: raid6: using neon recovery algorithm
Jan 17 11:55:22.346898 kernel: xor: measuring software checksum speed
Jan 17 11:55:22.346923 kernel: 8regs : 19097 MB/sec
Jan 17 11:55:22.347639 kernel: 32regs : 19660 MB/sec
Jan 17 11:55:22.348795 kernel: arm64_neon : 23288 MB/sec
Jan 17 11:55:22.348806 kernel: xor: using function: arm64_neon (23288 MB/sec)
Jan 17 11:55:22.398644 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 11:55:22.411506 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 11:55:22.423833 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 11:55:22.435657 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Jan 17 11:55:22.438843 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 11:55:22.446783 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 11:55:22.458467 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Jan 17 11:55:22.485763 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 11:55:22.504817 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 11:55:22.543955 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 11:55:22.553975 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 11:55:22.567482 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 11:55:22.569169 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 11:55:22.571065 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 11:55:22.573542 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 11:55:22.582175 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 11:55:22.592331 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 11:55:22.598042 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 17 11:55:22.605106 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 17 11:55:22.605209 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 11:55:22.605220 kernel: GPT:9289727 != 19775487
Jan 17 11:55:22.605229 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 11:55:22.605238 kernel: GPT:9289727 != 19775487
Jan 17 11:55:22.605252 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 11:55:22.605261 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 11:55:22.606031 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 11:55:22.606740 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 11:55:22.610155 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 11:55:22.611254 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 11:55:22.611404 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 11:55:22.623826 kernel: BTRFS: device fsid 8c8354db-e4b6-4022-87e4-d06cc74d2d9f devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (521)
Jan 17 11:55:22.623851 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (513)
Jan 17 11:55:22.615910 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 11:55:22.630903 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 11:55:22.637772 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 17 11:55:22.643999 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 11:55:22.649351 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 17 11:55:22.659146 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 17 11:55:22.660437 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 17 11:55:22.666662 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 11:55:22.676783 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 11:55:22.678539 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 11:55:22.685412 disk-uuid[552]: Primary Header is updated.
Jan 17 11:55:22.685412 disk-uuid[552]: Secondary Entries is updated.
Jan 17 11:55:22.685412 disk-uuid[552]: Secondary Header is updated.
Jan 17 11:55:22.689646 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 11:55:22.699612 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 11:55:23.702635 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 11:55:23.703152 disk-uuid[557]: The operation has completed successfully.
Jan 17 11:55:23.721723 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 11:55:23.721816 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 11:55:23.746806 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 11:55:23.749707 sh[575]: Success
Jan 17 11:55:23.762645 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 17 11:55:23.791546 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 11:55:23.804999 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 11:55:23.806558 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 11:55:23.817653 kernel: BTRFS info (device dm-0): first mount of filesystem 8c8354db-e4b6-4022-87e4-d06cc74d2d9f
Jan 17 11:55:23.817702 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 17 11:55:23.817712 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 11:55:23.820131 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 11:55:23.820159 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 11:55:23.823982 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 11:55:23.825346 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 11:55:23.833829 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 11:55:23.836188 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 11:55:23.843094 kernel: BTRFS info (device vda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 11:55:23.843137 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 11:55:23.843154 kernel: BTRFS info (device vda6): using free space tree
Jan 17 11:55:23.846683 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 11:55:23.853582 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 11:55:23.855654 kernel: BTRFS info (device vda6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 11:55:23.862304 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 11:55:23.869806 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 11:55:23.932553 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 11:55:23.953467 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 11:55:23.969644 ignition[675]: Ignition 2.19.0
Jan 17 11:55:23.969654 ignition[675]: Stage: fetch-offline
Jan 17 11:55:23.969689 ignition[675]: no configs at "/usr/lib/ignition/base.d"
Jan 17 11:55:23.969698 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 11:55:23.969856 ignition[675]: parsed url from cmdline: ""
Jan 17 11:55:23.969859 ignition[675]: no config URL provided
Jan 17 11:55:23.969863 ignition[675]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 11:55:23.969870 ignition[675]: no config at "/usr/lib/ignition/user.ign"
Jan 17 11:55:23.969893 ignition[675]: op(1): [started] loading QEMU firmware config module
Jan 17 11:55:23.976087 systemd-networkd[767]: lo: Link UP
Jan 17 11:55:23.969897 ignition[675]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 17 11:55:23.976091 systemd-networkd[767]: lo: Gained carrier
Jan 17 11:55:23.977895 ignition[675]: op(1): [finished] loading QEMU firmware config module
Jan 17 11:55:23.976824 systemd-networkd[767]: Enumeration completed
Jan 17 11:55:23.976922 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 11:55:23.978508 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 11:55:23.978512 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 11:55:23.978560 systemd[1]: Reached target network.target - Network.
Jan 17 11:55:23.979381 systemd-networkd[767]: eth0: Link UP
Jan 17 11:55:23.979385 systemd-networkd[767]: eth0: Gained carrier
Jan 17 11:55:23.979391 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 11:55:23.992676 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 17 11:55:24.024764 ignition[675]: parsing config with SHA512: 1412497031b2b878a90b193ef1ecb56995324626e8da0f564e406868eddd70232b2882ff2bb07c8d56272309952d4fe4eebed38508997ed77ed5a58e00474dae
Jan 17 11:55:24.029680 unknown[675]: fetched base config from "system"
Jan 17 11:55:24.029697 unknown[675]: fetched user config from "qemu"
Jan 17 11:55:24.030501 ignition[675]: fetch-offline: fetch-offline passed
Jan 17 11:55:24.030836 ignition[675]: Ignition finished successfully
Jan 17 11:55:24.032243 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 11:55:24.033689 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 17 11:55:24.041814 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 11:55:24.051797 ignition[774]: Ignition 2.19.0
Jan 17 11:55:24.051809 ignition[774]: Stage: kargs
Jan 17 11:55:24.051974 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Jan 17 11:55:24.051984 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 11:55:24.052829 ignition[774]: kargs: kargs passed
Jan 17 11:55:24.055670 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 11:55:24.052874 ignition[774]: Ignition finished successfully
Jan 17 11:55:24.057636 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 11:55:24.071692 ignition[781]: Ignition 2.19.0
Jan 17 11:55:24.071702 ignition[781]: Stage: disks
Jan 17 11:55:24.071872 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jan 17 11:55:24.071881 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 11:55:24.074870 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 11:55:24.072761 ignition[781]: disks: disks passed
Jan 17 11:55:24.076982 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 11:55:24.072811 ignition[781]: Ignition finished successfully
Jan 17 11:55:24.078437 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 11:55:24.080120 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 11:55:24.081998 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 11:55:24.083583 systemd[1]: Reached target basic.target - Basic System.
Jan 17 11:55:24.096903 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 11:55:24.109244 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 11:55:24.112985 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 11:55:24.115220 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 11:55:24.161643 kernel: EXT4-fs (vda9): mounted filesystem 5d516319-3144-49e6-9760-d0f29faba535 r/w with ordered data mode. Quota mode: none.
Jan 17 11:55:24.161606 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 11:55:24.162876 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 11:55:24.173710 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 11:55:24.175415 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 11:55:24.176659 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 11:55:24.176699 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 11:55:24.176722 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 11:55:24.184673 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (799)
Jan 17 11:55:24.183263 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 11:55:24.186233 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 11:55:24.190193 kernel: BTRFS info (device vda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 11:55:24.190226 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 11:55:24.190238 kernel: BTRFS info (device vda6): using free space tree
Jan 17 11:55:24.193654 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 11:55:24.194817 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 11:55:24.228891 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 11:55:24.232848 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
Jan 17 11:55:24.236747 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 11:55:24.240645 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 11:55:24.312942 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 11:55:24.335716 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 11:55:24.337302 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 11:55:24.342635 kernel: BTRFS info (device vda6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 11:55:24.357774 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 11:55:24.359634 ignition[912]: INFO : Ignition 2.19.0
Jan 17 11:55:24.359634 ignition[912]: INFO : Stage: mount
Jan 17 11:55:24.359634 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 11:55:24.359634 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 11:55:24.364977 ignition[912]: INFO : mount: mount passed
Jan 17 11:55:24.364977 ignition[912]: INFO : Ignition finished successfully
Jan 17 11:55:24.362674 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 11:55:24.373766 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 11:55:24.816405 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 11:55:24.827768 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 11:55:24.834540 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (925)
Jan 17 11:55:24.834582 kernel: BTRFS info (device vda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 11:55:24.834593 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 11:55:24.836111 kernel: BTRFS info (device vda6): using free space tree
Jan 17 11:55:24.838640 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 11:55:24.839334 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 11:55:24.861354 ignition[942]: INFO : Ignition 2.19.0
Jan 17 11:55:24.861354 ignition[942]: INFO : Stage: files
Jan 17 11:55:24.863060 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 11:55:24.863060 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 11:55:24.863060 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 11:55:24.866387 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 11:55:24.866387 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 11:55:24.869299 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 11:55:24.870624 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 11:55:24.870624 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 11:55:24.869781 unknown[942]: wrote ssh authorized keys file for user: core
Jan 17 11:55:24.874253 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 17 11:55:24.874253 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 17 11:55:25.026878 systemd-networkd[767]: eth0: Gained IPv6LL
Jan 17 11:55:25.052598 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 17 11:55:25.223962 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 17 11:55:25.223962 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 17 11:55:25.227769 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 17 11:55:25.601301 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 17 11:55:25.770284 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 17 11:55:25.772226 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 11:55:25.772226 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 11:55:25.772226 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 11:55:25.772226 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 11:55:25.772226 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 11:55:25.772226 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 11:55:25.772226 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 11:55:25.772226 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 11:55:25.772226 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 11:55:25.772226 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 11:55:25.772226 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 17 11:55:25.772226 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 17 11:55:25.772226 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 17 11:55:25.772226 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 17 11:55:26.069209 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 17 11:55:26.621444 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 17 11:55:26.621444 ignition[942]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 17 11:55:26.624965 ignition[942]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 11:55:26.624965 ignition[942]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 11:55:26.624965 ignition[942]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 17 11:55:26.624965 ignition[942]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 17 11:55:26.624965 ignition[942]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 17 11:55:26.624965 ignition[942]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 17 11:55:26.624965 ignition[942]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 17 11:55:26.624965 ignition[942]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jan 17 11:55:26.646822 ignition[942]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 17 11:55:26.650906 ignition[942]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 17 11:55:26.653495 ignition[942]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 17 11:55:26.653495 ignition[942]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 11:55:26.653495 ignition[942]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 11:55:26.653495 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 11:55:26.653495 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 11:55:26.653495 ignition[942]: INFO : files: files passed
Jan 17 11:55:26.653495 ignition[942]: INFO : Ignition finished successfully
Jan 17 11:55:26.656127 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 11:55:26.672789 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 11:55:26.674508 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 11:55:26.676709 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 11:55:26.676820 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 11:55:26.684204 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 17 11:55:26.685637 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 11:55:26.685637 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 11:55:26.688804 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 11:55:26.688071 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 11:55:26.690106 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 11:55:26.702812 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 11:55:26.720725 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 11:55:26.720842 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 11:55:26.723016 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 11:55:26.724854 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 11:55:26.726719 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 11:55:26.737754 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 11:55:26.750176 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 11:55:26.752677 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 11:55:26.764039 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 11:55:26.765320 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 11:55:26.767350 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 11:55:26.769383 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 11:55:26.769510 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 11:55:26.772260 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 11:55:26.774307 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 11:55:26.775963 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 11:55:26.777742 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 11:55:26.780422 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 11:55:26.782416 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 11:55:26.784291 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 11:55:26.786263 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 11:55:26.788233 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 11:55:26.789985 systemd[1]: Stopped target swap.target - Swaps. Jan 17 11:55:26.791507 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 11:55:26.791652 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 11:55:26.793986 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 11:55:26.795957 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 11:55:26.797909 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 11:55:26.797989 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 11:55:26.800315 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 11:55:26.800434 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 11:55:26.803502 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 11:55:26.803632 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 11:55:26.805599 systemd[1]: Stopped target paths.target - Path Units. Jan 17 11:55:26.807190 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 11:55:26.812650 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 11:55:26.814002 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 11:55:26.816134 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 11:55:26.817726 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 11:55:26.817819 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 11:55:26.819441 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 11:55:26.819528 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 11:55:26.821146 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 11:55:26.821260 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 11:55:26.823034 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 11:55:26.823132 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 11:55:26.834796 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 11:55:26.835721 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 11:55:26.835853 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 11:55:26.839039 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 11:55:26.840450 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 11:55:26.840575 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 11:55:26.843722 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jan 17 11:55:26.846661 ignition[997]: INFO : Ignition 2.19.0 Jan 17 11:55:26.846661 ignition[997]: INFO : Stage: umount Jan 17 11:55:26.846661 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 11:55:26.846661 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 11:55:26.843893 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 11:55:26.853330 ignition[997]: INFO : umount: umount passed Jan 17 11:55:26.853330 ignition[997]: INFO : Ignition finished successfully Jan 17 11:55:26.850118 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 11:55:26.851095 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 11:55:26.853293 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 11:55:26.854262 systemd[1]: Stopped target network.target - Network. Jan 17 11:55:26.856042 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 11:55:26.856115 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 11:55:26.857183 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 11:55:26.857235 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 11:55:26.859920 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 11:55:26.860122 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 11:55:26.862518 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 11:55:26.862572 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 11:55:26.863908 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 11:55:26.865676 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 11:55:26.868501 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 11:55:26.868584 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 11:55:26.871575 systemd-networkd[767]: eth0: DHCPv6 lease lost Jan 17 11:55:26.872776 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 11:55:26.872882 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 11:55:26.875858 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 11:55:26.875968 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 11:55:26.878296 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 11:55:26.878348 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 11:55:26.889900 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 11:55:26.890994 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 11:55:26.891067 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 11:55:26.893118 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 11:55:26.893165 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 11:55:26.895016 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 11:55:26.895065 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 11:55:26.896809 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 11:55:26.896852 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 17 11:55:26.899515 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 11:55:26.910725 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 11:55:26.910846 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 11:55:26.918460 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 11:55:26.918606 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 11:55:26.921059 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 11:55:26.921098 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 11:55:26.922317 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 11:55:26.922352 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 11:55:26.924501 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 11:55:26.924553 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 11:55:26.927541 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 11:55:26.927588 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 11:55:26.930411 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 11:55:26.930458 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 11:55:26.950846 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 11:55:26.951933 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 11:55:26.951996 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 11:55:26.954131 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 11:55:26.954179 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 11:55:26.956323 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 11:55:26.956413 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 11:55:26.958153 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 11:55:26.958228 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 11:55:26.960644 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 11:55:26.961689 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 11:55:26.961752 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 11:55:26.964238 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 11:55:26.973363 systemd[1]: Switching root. Jan 17 11:55:27.001756 systemd-journald[237]: Journal stopped Jan 17 11:55:27.868172 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
Jan 17 11:55:27.868221 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 11:55:27.868246 kernel: SELinux: policy capability open_perms=1 Jan 17 11:55:27.868257 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 11:55:27.868267 kernel: SELinux: policy capability always_check_network=0 Jan 17 11:55:27.868277 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 11:55:27.868286 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 11:55:27.868301 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 11:55:27.868314 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 11:55:27.868324 kernel: audit: type=1403 audit(1737114927.291:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 11:55:27.868335 systemd[1]: Successfully loaded SELinux policy in 31.421ms. Jan 17 11:55:27.868351 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.510ms. Jan 17 11:55:27.868363 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 11:55:27.868374 systemd[1]: Detected virtualization kvm. Jan 17 11:55:27.868385 systemd[1]: Detected architecture arm64. Jan 17 11:55:27.868400 systemd[1]: Detected first boot. Jan 17 11:55:27.868411 systemd[1]: Initializing machine ID from VM UUID. Jan 17 11:55:27.868431 zram_generator::config[1042]: No configuration found. Jan 17 11:55:27.868443 systemd[1]: Populated /etc with preset unit settings. Jan 17 11:55:27.868458 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 11:55:27.868474 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 11:55:27.868494 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 11:55:27.868505 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 11:55:27.868517 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 11:55:27.868528 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 11:55:27.868541 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 11:55:27.868552 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 11:55:27.868563 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 11:55:27.868574 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 11:55:27.868585 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 11:55:27.868596 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 11:55:27.868607 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 11:55:27.868706 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 11:55:27.868722 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 11:55:27.868737 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 17 11:55:27.868748 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 11:55:27.868759 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 17 11:55:27.868770 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 11:55:27.868781 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 11:55:27.868791 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 11:55:27.868801 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 11:55:27.868814 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 11:55:27.868824 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 11:55:27.868837 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 11:55:27.868848 systemd[1]: Reached target slices.target - Slice Units. Jan 17 11:55:27.868858 systemd[1]: Reached target swap.target - Swaps. Jan 17 11:55:27.868869 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 11:55:27.868880 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 11:55:27.868890 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 11:55:27.868902 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 11:55:27.868912 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 11:55:27.868925 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 11:55:27.868936 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 11:55:27.868946 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 11:55:27.868957 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 11:55:27.868968 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 11:55:27.868979 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 11:55:27.868990 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 11:55:27.869002 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 11:55:27.869014 systemd[1]: Reached target machines.target - Containers. Jan 17 11:55:27.869026 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 11:55:27.869038 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 11:55:27.869048 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 11:55:27.869060 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 11:55:27.869074 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 11:55:27.869085 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 11:55:27.869096 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 11:55:27.869107 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 11:55:27.869119 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 17 11:55:27.869130 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 11:55:27.869142 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 11:55:27.869152 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 11:55:27.869162 kernel: fuse: init (API version 7.39) Jan 17 11:55:27.869172 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 11:55:27.869184 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 11:55:27.869194 kernel: loop: module loaded Jan 17 11:55:27.869204 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 11:55:27.869217 kernel: ACPI: bus type drm_connector registered Jan 17 11:55:27.869227 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 11:55:27.869247 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 11:55:27.869259 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 11:55:27.869270 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 11:55:27.869302 systemd-journald[1113]: Collecting audit messages is disabled. Jan 17 11:55:27.869329 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 11:55:27.869343 systemd[1]: Stopped verity-setup.service. Jan 17 11:55:27.869357 systemd-journald[1113]: Journal started Jan 17 11:55:27.869378 systemd-journald[1113]: Runtime Journal (/run/log/journal/f0739f62cded485b9828ed0275837748) is 5.9M, max 47.3M, 41.4M free. Jan 17 11:55:27.669599 systemd[1]: Queued start job for default target multi-user.target. Jan 17 11:55:27.683569 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 11:55:27.683922 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 11:55:27.874181 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 11:55:27.874852 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 11:55:27.876025 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 11:55:27.877365 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 11:55:27.878512 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 11:55:27.879864 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 11:55:27.881137 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 11:55:27.883661 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 11:55:27.885158 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 11:55:27.887997 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 11:55:27.888174 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 11:55:27.889593 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 11:55:27.889780 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 11:55:27.891152 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 11:55:27.891322 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 11:55:27.892729 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 11:55:27.892865 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 17 11:55:27.894451 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 11:55:27.894609 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 11:55:27.896042 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 11:55:27.896187 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 11:55:27.897638 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 11:55:27.899038 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 11:55:27.900871 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 11:55:27.913155 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 11:55:27.922713 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 11:55:27.924872 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 11:55:27.926040 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 11:55:27.926081 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 11:55:27.928076 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 11:55:27.930383 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 11:55:27.932523 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 11:55:27.933704 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 11:55:27.934917 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 11:55:27.936850 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 11:55:27.938154 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 11:55:27.941818 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 11:55:27.944136 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 11:55:27.944441 systemd-journald[1113]: Time spent on flushing to /var/log/journal/f0739f62cded485b9828ed0275837748 is 29.867ms for 855 entries. Jan 17 11:55:27.944441 systemd-journald[1113]: System Journal (/var/log/journal/f0739f62cded485b9828ed0275837748) is 8.0M, max 195.6M, 187.6M free. Jan 17 11:55:27.996727 systemd-journald[1113]: Received client request to flush runtime journal. Jan 17 11:55:27.996781 kernel: loop0: detected capacity change from 0 to 114432 Jan 17 11:55:27.996799 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 11:55:27.945095 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 11:55:27.950811 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 11:55:27.954495 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 11:55:27.958664 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 11:55:27.963085 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 11:55:27.964745 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Jan 17 11:55:27.967024 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 11:55:27.970263 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 11:55:27.974345 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 11:55:27.985659 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 11:55:27.991300 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 11:55:27.994137 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 11:55:28.000678 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 11:55:28.009561 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 17 11:55:28.010107 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 11:55:28.010745 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 11:55:28.018382 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 11:55:28.022714 kernel: loop1: detected capacity change from 0 to 114328 Jan 17 11:55:28.031770 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 11:55:28.047759 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Jan 17 11:55:28.047776 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Jan 17 11:55:28.052459 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 11:55:28.054640 kernel: loop2: detected capacity change from 0 to 194096 Jan 17 11:55:28.098651 kernel: loop3: detected capacity change from 0 to 114432 Jan 17 11:55:28.104676 kernel: loop4: detected capacity change from 0 to 114328 Jan 17 11:55:28.109681 kernel: loop5: detected capacity change from 0 to 194096 Jan 17 11:55:28.114387 (sd-merge)[1177]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 17 11:55:28.114830 (sd-merge)[1177]: Merged extensions into '/usr'. Jan 17 11:55:28.119713 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 11:55:28.119731 systemd[1]: Reloading... Jan 17 11:55:28.190673 zram_generator::config[1202]: No configuration found. Jan 17 11:55:28.223705 ldconfig[1148]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 11:55:28.288756 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 11:55:28.327398 systemd[1]: Reloading finished in 207 ms. Jan 17 11:55:28.355523 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 11:55:28.357274 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 11:55:28.375773 systemd[1]: Starting ensure-sysext.service... Jan 17 11:55:28.377709 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 11:55:28.391248 systemd[1]: Reloading requested from client PID 1237 ('systemctl') (unit ensure-sysext.service)... Jan 17 11:55:28.391262 systemd[1]: Reloading... 
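
The (sd-merge) lines above show systemd-sysext discovering the three extension images (the loop0..loop5 capacity changes are those .raw files being attached) and merging them into /usr, after which PID 1 reloads its unit set. On a live host the merged state can be inspected roughly as below; systemd-sysext and its "status" verb are real, but the output layout varies across systemd releases.

    import subprocess

    # List currently merged system extensions (containerd-flatcar,
    # docker-flatcar and kubernetes in the log above).
    result = subprocess.run(
        ["systemd-sysext", "status", "--no-pager"],
        capture_output=True, text=True, check=False,
    )
    print(result.stdout or result.stderr)
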
Jan 17 11:55:28.402370 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 11:55:28.402765 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 11:55:28.403437 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 11:55:28.403677 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Jan 17 11:55:28.403753 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Jan 17 11:55:28.405876 systemd-tmpfiles[1238]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 11:55:28.405889 systemd-tmpfiles[1238]: Skipping /boot Jan 17 11:55:28.413079 systemd-tmpfiles[1238]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 11:55:28.413113 systemd-tmpfiles[1238]: Skipping /boot Jan 17 11:55:28.442657 zram_generator::config[1268]: No configuration found. Jan 17 11:55:28.523228 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 11:55:28.561124 systemd[1]: Reloading finished in 169 ms. Jan 17 11:55:28.574066 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 11:55:28.587056 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 11:55:28.594449 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 11:55:28.597563 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 11:55:28.600341 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 11:55:28.603972 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 11:55:28.609970 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 11:55:28.612172 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 11:55:28.615671 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 11:55:28.617566 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 11:55:28.621770 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 11:55:28.625842 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 11:55:28.627152 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 11:55:28.631144 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 11:55:28.633119 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 11:55:28.633420 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 11:55:28.635127 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 11:55:28.635341 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 11:55:28.640949 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 11:55:28.641608 systemd-udevd[1307]: Using default interface naming scheme 'v255'. Jan 17 11:55:28.643176 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 17 11:55:28.643334 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 11:55:28.653148 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 11:55:28.655171 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 11:55:28.669139 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 11:55:28.672417 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 11:55:28.678172 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 11:55:28.681956 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 11:55:28.683788 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 11:55:28.685970 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 11:55:28.689496 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 11:55:28.691075 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 11:55:28.695669 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 11:55:28.697262 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 11:55:28.697387 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 11:55:28.699340 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 11:55:28.699669 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 11:55:28.702108 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 11:55:28.702252 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 11:55:28.704511 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 11:55:28.714179 systemd[1]: Finished ensure-sysext.service. Jan 17 11:55:28.721639 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1355) Jan 17 11:55:28.728476 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 11:55:28.728644 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 11:55:28.734240 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 17 11:55:28.753266 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 11:55:28.754344 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 11:55:28.754406 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 11:55:28.754894 augenrules[1374]: No rules Jan 17 11:55:28.758834 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 11:55:28.762749 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 11:55:28.763234 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 11:55:28.768929 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jan 17 11:55:28.780849 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 11:55:28.801428 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 11:55:28.807245 systemd-resolved[1305]: Positive Trust Anchors: Jan 17 11:55:28.809005 systemd-resolved[1305]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 11:55:28.809040 systemd-resolved[1305]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 11:55:28.821191 systemd-resolved[1305]: Defaulting to hostname 'linux'. Jan 17 11:55:28.830518 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 11:55:28.835865 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 11:55:28.837430 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 11:55:28.846775 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 11:55:28.857341 systemd-networkd[1373]: lo: Link UP Jan 17 11:55:28.857352 systemd-networkd[1373]: lo: Gained carrier Jan 17 11:55:28.857816 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 11:55:28.858015 systemd-networkd[1373]: Enumeration completed Jan 17 11:55:28.859005 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 11:55:28.860947 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 11:55:28.861290 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 11:55:28.861301 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 11:55:28.862356 systemd[1]: Reached target network.target - Network. Jan 17 11:55:28.863688 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 11:55:28.865941 systemd-networkd[1373]: eth0: Link UP Jan 17 11:55:28.865950 systemd-networkd[1373]: eth0: Gained carrier Jan 17 11:55:28.865963 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 11:55:28.869583 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 11:55:28.882537 lvm[1390]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 11:55:28.889928 systemd-networkd[1373]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 11:55:28.890511 systemd-timesyncd[1379]: Network configuration changed, trying to establish connection. Jan 17 11:55:28.424496 systemd-resolved[1305]: Clock change detected. Flushing caches. Jan 17 11:55:28.430892 systemd-journald[1113]: Time jumped backwards, rotating. Jan 17 11:55:28.424547 systemd-timesyncd[1379]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
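
The jump from timestamps around 11:55:28.890 back to 11:55:28.424 above is not corruption: systemd-timesyncd stepped the wall clock backwards on first NTP sync, so systemd-resolved flushes its caches and journald rotates ("Time jumped backwards"). A quick check of the step size, using the two timestamps straddling the jump:

    # Timestamps (seconds within 11:55) on either side of the NTP step above.
    pre_step  = 28.890511   # last entry logged before the sync
    post_step = 28.424496   # first entry after "Clock change detected"
    print(f"clock stepped by about {post_step - pre_step:+.3f} s")  # ~ -0.466 s
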
Jan 17 11:55:28.424593 systemd-timesyncd[1379]: Initial clock synchronization to Fri 2025-01-17 11:55:28.424452 UTC. Jan 17 11:55:28.433316 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 11:55:28.444239 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 11:55:28.445712 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 11:55:28.446854 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 11:55:28.448019 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 11:55:28.449289 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 11:55:28.450657 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 11:55:28.451803 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 11:55:28.453050 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 11:55:28.454312 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 11:55:28.454347 systemd[1]: Reached target paths.target - Path Units. Jan 17 11:55:28.455238 systemd[1]: Reached target timers.target - Timer Units. Jan 17 11:55:28.456656 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 11:55:28.459018 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 11:55:28.468748 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 11:55:28.470991 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 11:55:28.472507 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 11:55:28.473700 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 11:55:28.474642 systemd[1]: Reached target basic.target - Basic System. Jan 17 11:55:28.475594 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 11:55:28.475625 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 11:55:28.476483 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 11:55:28.478370 lvm[1401]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 11:55:28.478445 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 11:55:28.482128 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 11:55:28.485112 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 11:55:28.486990 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 11:55:28.488287 jq[1404]: false Jan 17 11:55:28.487934 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 11:55:28.491059 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 11:55:28.495173 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 11:55:28.498156 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 11:55:28.505187 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 17 11:55:28.506985 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 11:55:28.507321 dbus-daemon[1403]: [system] SELinux support is enabled Jan 17 11:55:28.507365 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 11:55:28.508615 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 11:55:28.510350 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 11:55:28.511815 extend-filesystems[1405]: Found loop3 Jan 17 11:55:28.513579 extend-filesystems[1405]: Found loop4 Jan 17 11:55:28.513579 extend-filesystems[1405]: Found loop5 Jan 17 11:55:28.513579 extend-filesystems[1405]: Found vda Jan 17 11:55:28.513579 extend-filesystems[1405]: Found vda1 Jan 17 11:55:28.513579 extend-filesystems[1405]: Found vda2 Jan 17 11:55:28.513579 extend-filesystems[1405]: Found vda3 Jan 17 11:55:28.513579 extend-filesystems[1405]: Found usr Jan 17 11:55:28.513579 extend-filesystems[1405]: Found vda4 Jan 17 11:55:28.513579 extend-filesystems[1405]: Found vda6 Jan 17 11:55:28.513579 extend-filesystems[1405]: Found vda7 Jan 17 11:55:28.513579 extend-filesystems[1405]: Found vda9 Jan 17 11:55:28.513579 extend-filesystems[1405]: Checking size of /dev/vda9 Jan 17 11:55:28.513106 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 11:55:28.535466 extend-filesystems[1405]: Resized partition /dev/vda9 Jan 17 11:55:28.516975 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 11:55:28.521278 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 11:55:28.541162 jq[1420]: true Jan 17 11:55:28.521449 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 11:55:28.521713 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 11:55:28.521843 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 11:55:28.524302 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 11:55:28.524445 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 11:55:28.535197 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 11:55:28.535240 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 11:55:28.541894 jq[1428]: true Jan 17 11:55:28.542506 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 11:55:28.542546 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
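
extend-filesystems has enumerated the block devices, resized the vda9 partition, and queued an online ext4 grow; the resize2fs lines just below report the filesystem going from 553472 to 1864699 blocks of 4 KiB. In bytes:

    # Size math for the online resize reported in the resize2fs output below.
    BLOCK = 4096                          # "(4k) blocks" per the log
    before, after = 553_472, 1_864_699    # block counts from resize2fs
    for label, blocks in (("before", before), ("after", after)):
        print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
    # before: 2.11 GiB (pre-resize root), after: 7.11 GiB (vda9 filled)
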
Jan 17 11:55:28.548604 extend-filesystems[1435]: resize2fs 1.47.1 (20-May-2024) Jan 17 11:55:28.554948 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1340) Jan 17 11:55:28.554993 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 17 11:55:28.555660 update_engine[1418]: I20250117 11:55:28.555411 1418 main.cc:92] Flatcar Update Engine starting Jan 17 11:55:28.560134 tar[1425]: linux-arm64/helm Jan 17 11:55:28.560986 (ntainerd)[1440]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 11:55:28.563827 systemd[1]: Started update-engine.service - Update Engine. Jan 17 11:55:28.565158 update_engine[1418]: I20250117 11:55:28.563886 1418 update_check_scheduler.cc:74] Next update check in 6m25s Jan 17 11:55:28.569060 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 11:55:28.580931 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 17 11:55:28.598308 systemd-logind[1413]: Watching system buttons on /dev/input/event0 (Power Button) Jan 17 11:55:28.600681 extend-filesystems[1435]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 11:55:28.600681 extend-filesystems[1435]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 11:55:28.600681 extend-filesystems[1435]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 17 11:55:28.600659 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 11:55:28.606783 extend-filesystems[1405]: Resized filesystem in /dev/vda9 Jan 17 11:55:28.600857 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 11:55:28.601014 systemd-logind[1413]: New seat seat0. Jan 17 11:55:28.609217 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 11:55:28.617815 bash[1458]: Updated "/home/core/.ssh/authorized_keys" Jan 17 11:55:28.620964 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 11:55:28.623766 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 11:55:28.637003 locksmithd[1445]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 11:55:28.767232 containerd[1440]: time="2025-01-17T11:55:28.767107559Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 11:55:28.793399 containerd[1440]: time="2025-01-17T11:55:28.793351999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 11:55:28.794763 containerd[1440]: time="2025-01-17T11:55:28.794657599Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 11:55:28.794763 containerd[1440]: time="2025-01-17T11:55:28.794691159Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 11:55:28.794763 containerd[1440]: time="2025-01-17T11:55:28.794706599Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 11:55:28.794862 containerd[1440]: time="2025-01-17T11:55:28.794833959Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jan 17 11:55:28.794862 containerd[1440]: time="2025-01-17T11:55:28.794850239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 11:55:28.794928 containerd[1440]: time="2025-01-17T11:55:28.794896479Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 11:55:28.794955 containerd[1440]: time="2025-01-17T11:55:28.794932399Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 11:55:28.795096 containerd[1440]: time="2025-01-17T11:55:28.795075759Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 11:55:28.795096 containerd[1440]: time="2025-01-17T11:55:28.795094719Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 11:55:28.795140 containerd[1440]: time="2025-01-17T11:55:28.795107839Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 11:55:28.795140 containerd[1440]: time="2025-01-17T11:55:28.795117479Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 11:55:28.795204 containerd[1440]: time="2025-01-17T11:55:28.795188519Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 11:55:28.795385 containerd[1440]: time="2025-01-17T11:55:28.795365119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 11:55:28.795480 containerd[1440]: time="2025-01-17T11:55:28.795461719Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 11:55:28.795516 containerd[1440]: time="2025-01-17T11:55:28.795479919Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 11:55:28.795580 containerd[1440]: time="2025-01-17T11:55:28.795564199Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 11:55:28.795620 containerd[1440]: time="2025-01-17T11:55:28.795608759Z" level=info msg="metadata content store policy set" policy=shared Jan 17 11:55:28.799051 containerd[1440]: time="2025-01-17T11:55:28.799022719Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 11:55:28.799180 containerd[1440]: time="2025-01-17T11:55:28.799065279Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 11:55:28.799563 containerd[1440]: time="2025-01-17T11:55:28.799237519Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 11:55:28.799563 containerd[1440]: time="2025-01-17T11:55:28.799272799Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 17 11:55:28.799563 containerd[1440]: time="2025-01-17T11:55:28.799287879Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 11:55:28.799563 containerd[1440]: time="2025-01-17T11:55:28.799420559Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 11:55:28.800000 containerd[1440]: time="2025-01-17T11:55:28.799977599Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 11:55:28.800223 containerd[1440]: time="2025-01-17T11:55:28.800203399Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 11:55:28.800335 containerd[1440]: time="2025-01-17T11:55:28.800273999Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 11:55:28.800409 containerd[1440]: time="2025-01-17T11:55:28.800393839Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 11:55:28.800476 containerd[1440]: time="2025-01-17T11:55:28.800455279Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 11:55:28.800535 containerd[1440]: time="2025-01-17T11:55:28.800522959Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 11:55:28.800636 containerd[1440]: time="2025-01-17T11:55:28.800614679Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 11:55:28.800752 containerd[1440]: time="2025-01-17T11:55:28.800691199Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 11:55:28.801964 containerd[1440]: time="2025-01-17T11:55:28.800712559Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 11:55:28.801964 containerd[1440]: time="2025-01-17T11:55:28.801882599Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 11:55:28.801964 containerd[1440]: time="2025-01-17T11:55:28.801907799Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 11:55:28.801964 containerd[1440]: time="2025-01-17T11:55:28.801936879Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 11:55:28.802118 containerd[1440]: time="2025-01-17T11:55:28.802039279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 11:55:28.802118 containerd[1440]: time="2025-01-17T11:55:28.802058999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 11:55:28.802118 containerd[1440]: time="2025-01-17T11:55:28.802076359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 11:55:28.802118 containerd[1440]: time="2025-01-17T11:55:28.802090199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 11:55:28.802118 containerd[1440]: time="2025-01-17T11:55:28.802106879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jan 17 11:55:28.802212 containerd[1440]: time="2025-01-17T11:55:28.802122999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 11:55:28.802212 containerd[1440]: time="2025-01-17T11:55:28.802138119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 11:55:28.802212 containerd[1440]: time="2025-01-17T11:55:28.802153919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 11:55:28.802212 containerd[1440]: time="2025-01-17T11:55:28.802169839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 11:55:28.802212 containerd[1440]: time="2025-01-17T11:55:28.802189399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 11:55:28.802212 containerd[1440]: time="2025-01-17T11:55:28.802201879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 11:55:28.802306 containerd[1440]: time="2025-01-17T11:55:28.802217079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 11:55:28.802306 containerd[1440]: time="2025-01-17T11:55:28.802233199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 11:55:28.802306 containerd[1440]: time="2025-01-17T11:55:28.802253759Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 11:55:28.802306 containerd[1440]: time="2025-01-17T11:55:28.802280519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 11:55:28.802306 containerd[1440]: time="2025-01-17T11:55:28.802296359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 11:55:28.802421 containerd[1440]: time="2025-01-17T11:55:28.802309919Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 11:55:28.802440 containerd[1440]: time="2025-01-17T11:55:28.802426479Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 11:55:28.802458 containerd[1440]: time="2025-01-17T11:55:28.802446479Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 11:55:28.802478 containerd[1440]: time="2025-01-17T11:55:28.802457599Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 11:55:28.802495 containerd[1440]: time="2025-01-17T11:55:28.802473319Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 11:55:28.802495 containerd[1440]: time="2025-01-17T11:55:28.802486439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 11:55:28.802542 containerd[1440]: time="2025-01-17T11:55:28.802501439Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 11:55:28.802542 containerd[1440]: time="2025-01-17T11:55:28.802520959Z" level=info msg="NRI interface is disabled by configuration." 
Jan 17 11:55:28.802542 containerd[1440]: time="2025-01-17T11:55:28.802535839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 11:55:28.804833 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 11:55:28.806029 containerd[1440]: time="2025-01-17T11:55:28.803173279Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 11:55:28.806029 containerd[1440]: time="2025-01-17T11:55:28.803259879Z" level=info msg="Connect containerd service" Jan 17 11:55:28.806029 containerd[1440]: time="2025-01-17T11:55:28.803290279Z" level=info msg="using legacy CRI server" Jan 17 11:55:28.806029 containerd[1440]: time="2025-01-17T11:55:28.803297439Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 11:55:28.806029 containerd[1440]: time="2025-01-17T11:55:28.803383079Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 11:55:28.806029 containerd[1440]: time="2025-01-17T11:55:28.803987359Z" level=error msg="failed to 
load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 11:55:28.806029 containerd[1440]: time="2025-01-17T11:55:28.804393879Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 11:55:28.806029 containerd[1440]: time="2025-01-17T11:55:28.804429119Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 11:55:28.806029 containerd[1440]: time="2025-01-17T11:55:28.804536519Z" level=info msg="Start subscribing containerd event" Jan 17 11:55:28.806029 containerd[1440]: time="2025-01-17T11:55:28.804566479Z" level=info msg="Start recovering state" Jan 17 11:55:28.806029 containerd[1440]: time="2025-01-17T11:55:28.804619079Z" level=info msg="Start event monitor" Jan 17 11:55:28.806029 containerd[1440]: time="2025-01-17T11:55:28.804628879Z" level=info msg="Start snapshots syncer" Jan 17 11:55:28.806029 containerd[1440]: time="2025-01-17T11:55:28.804636839Z" level=info msg="Start cni network conf syncer for default" Jan 17 11:55:28.806029 containerd[1440]: time="2025-01-17T11:55:28.804644239Z" level=info msg="Start streaming server" Jan 17 11:55:28.806029 containerd[1440]: time="2025-01-17T11:55:28.804761999Z" level=info msg="containerd successfully booted in 0.039108s" Jan 17 11:55:28.927830 tar[1425]: linux-arm64/LICENSE Jan 17 11:55:28.928042 tar[1425]: linux-arm64/README.md Jan 17 11:55:28.945965 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 11:55:29.122290 sshd_keygen[1422]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 11:55:29.140403 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 11:55:29.149186 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 11:55:29.154116 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 11:55:29.154990 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 11:55:29.157410 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 11:55:29.169356 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 11:55:29.171961 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 11:55:29.174073 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 17 11:55:29.175339 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 11:55:29.615075 systemd-networkd[1373]: eth0: Gained IPv6LL Jan 17 11:55:29.617575 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 11:55:29.619487 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 11:55:29.635156 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 17 11:55:29.637536 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 11:55:29.639625 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 11:55:29.653792 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 11:55:29.654082 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 17 11:55:29.656333 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 11:55:29.658466 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
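The "no network config found in /etc/cni/net.d" error logged above is expected on first boot: the CRI plugin defers pod networking until a CNI config file appears (NetworkPluginConfDir:/etc/cni/net.d and NetworkPluginMaxConfNum:1 in the config dump above mean it loads at most one file from that directory). A minimal sketch of the kind of file that would satisfy it, assuming the standard bridge and host-local plugins exist under /opt/cni/bin; the file name and subnet are illustrative:

    # sketch: write a bridge network config where the CRI plugin looks for one
    cat <<'EOF' | sudo tee /etc/cni/net.d/10-bridge.conflist
    {
      "cniVersion": "0.4.0",
      "name": "bridge-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        }
      ]
    }
    EOF

In practice a network add-on (flannel, Calico, and so on) drops this file; the "Start cni network conf syncer for default" goroutine logged above picks it up without a containerd restart.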
Jan 17 11:55:30.118451 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 11:55:30.120055 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 11:55:30.122162 (kubelet)[1517]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 11:55:30.124973 systemd[1]: Startup finished in 578ms (kernel) + 5.589s (initrd) + 3.333s (userspace) = 9.501s. Jan 17 11:55:30.579333 kubelet[1517]: E0117 11:55:30.579230 1517 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 11:55:30.581742 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 11:55:30.581882 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 11:55:34.235544 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 11:55:34.236640 systemd[1]: Started sshd@0-10.0.0.10:22-10.0.0.1:49620.service - OpenSSH per-connection server daemon (10.0.0.1:49620). Jan 17 11:55:34.288148 sshd[1531]: Accepted publickey for core from 10.0.0.1 port 49620 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:55:34.290111 sshd[1531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:55:34.297421 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 11:55:34.312173 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 11:55:34.314040 systemd-logind[1413]: New session 1 of user core. Jan 17 11:55:34.321150 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 11:55:34.323355 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 11:55:34.330036 (systemd)[1535]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 11:55:34.412362 systemd[1535]: Queued start job for default target default.target. Jan 17 11:55:34.425825 systemd[1535]: Created slice app.slice - User Application Slice. Jan 17 11:55:34.425867 systemd[1535]: Reached target paths.target - Paths. Jan 17 11:55:34.425879 systemd[1535]: Reached target timers.target - Timers. Jan 17 11:55:34.427142 systemd[1535]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 11:55:34.436966 systemd[1535]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 11:55:34.437025 systemd[1535]: Reached target sockets.target - Sockets. Jan 17 11:55:34.437038 systemd[1535]: Reached target basic.target - Basic System. Jan 17 11:55:34.437073 systemd[1535]: Reached target default.target - Main User Target. Jan 17 11:55:34.437099 systemd[1535]: Startup finished in 102ms. Jan 17 11:55:34.437363 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 11:55:34.438740 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 11:55:34.494324 systemd[1]: Started sshd@1-10.0.0.10:22-10.0.0.1:49636.service - OpenSSH per-connection server daemon (10.0.0.1:49636). 
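The kubelet crash above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the normal state of a kubeadm-managed node before initialization: that file is written by kubeadm, not shipped with the OS image, and systemd keeps restarting the unit until it exists. Assuming this node is being initialized with kubeadm (consistent with the /etc/kubernetes/pki paths and bootstrap certificate rotation later in this log), something along these lines generates it; the flags are illustrative:

    # sketch: kubeadm writes /var/lib/kubelet/config.yaml during init (or join)
    sudo kubeadm init --apiserver-advertise-address 10.0.0.10 \
        --pod-network-cidr 10.244.0.0/16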
Jan 17 11:55:34.532081 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 49636 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:55:34.533495 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:55:34.537930 systemd-logind[1413]: New session 2 of user core. Jan 17 11:55:34.549055 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 11:55:34.601182 sshd[1546]: pam_unix(sshd:session): session closed for user core Jan 17 11:55:34.615747 systemd[1]: sshd@1-10.0.0.10:22-10.0.0.1:49636.service: Deactivated successfully. Jan 17 11:55:34.617444 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 11:55:34.618756 systemd-logind[1413]: Session 2 logged out. Waiting for processes to exit. Jan 17 11:55:34.620183 systemd[1]: Started sshd@2-10.0.0.10:22-10.0.0.1:49646.service - OpenSSH per-connection server daemon (10.0.0.1:49646). Jan 17 11:55:34.620969 systemd-logind[1413]: Removed session 2. Jan 17 11:55:34.656982 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 49646 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:55:34.658201 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:55:34.662427 systemd-logind[1413]: New session 3 of user core. Jan 17 11:55:34.679052 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 11:55:34.727194 sshd[1553]: pam_unix(sshd:session): session closed for user core Jan 17 11:55:34.738270 systemd[1]: sshd@2-10.0.0.10:22-10.0.0.1:49646.service: Deactivated successfully. Jan 17 11:55:34.739629 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 11:55:34.740839 systemd-logind[1413]: Session 3 logged out. Waiting for processes to exit. Jan 17 11:55:34.741895 systemd[1]: Started sshd@3-10.0.0.10:22-10.0.0.1:49658.service - OpenSSH per-connection server daemon (10.0.0.1:49658). Jan 17 11:55:34.742559 systemd-logind[1413]: Removed session 3. Jan 17 11:55:34.777601 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 49658 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:55:34.778768 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:55:34.781954 systemd-logind[1413]: New session 4 of user core. Jan 17 11:55:34.795050 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 11:55:34.845593 sshd[1560]: pam_unix(sshd:session): session closed for user core Jan 17 11:55:34.856246 systemd[1]: sshd@3-10.0.0.10:22-10.0.0.1:49658.service: Deactivated successfully. Jan 17 11:55:34.857557 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 11:55:34.858784 systemd-logind[1413]: Session 4 logged out. Waiting for processes to exit. Jan 17 11:55:34.859867 systemd[1]: Started sshd@4-10.0.0.10:22-10.0.0.1:49666.service - OpenSSH per-connection server daemon (10.0.0.1:49666). Jan 17 11:55:34.860572 systemd-logind[1413]: Removed session 4. Jan 17 11:55:34.895860 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 49666 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:55:34.897130 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:55:34.901286 systemd-logind[1413]: New session 5 of user core. Jan 17 11:55:34.911099 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 17 11:55:34.973122 sudo[1570]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 11:55:34.975176 sudo[1570]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 11:55:34.990727 sudo[1570]: pam_unix(sudo:session): session closed for user root Jan 17 11:55:34.992483 sshd[1567]: pam_unix(sshd:session): session closed for user core Jan 17 11:55:35.006507 systemd[1]: sshd@4-10.0.0.10:22-10.0.0.1:49666.service: Deactivated successfully. Jan 17 11:55:35.008013 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 11:55:35.010011 systemd-logind[1413]: Session 5 logged out. Waiting for processes to exit. Jan 17 11:55:35.011256 systemd[1]: Started sshd@5-10.0.0.10:22-10.0.0.1:49670.service - OpenSSH per-connection server daemon (10.0.0.1:49670). Jan 17 11:55:35.012035 systemd-logind[1413]: Removed session 5. Jan 17 11:55:35.048551 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 49670 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:55:35.049906 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:55:35.053948 systemd-logind[1413]: New session 6 of user core. Jan 17 11:55:35.067094 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 11:55:35.118485 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 11:55:35.118769 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 11:55:35.121719 sudo[1579]: pam_unix(sudo:session): session closed for user root Jan 17 11:55:35.126197 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 11:55:35.126463 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 11:55:35.144182 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 11:55:35.145384 auditctl[1582]: No rules Jan 17 11:55:35.145694 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 11:55:35.145869 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 11:55:35.149196 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 11:55:35.170358 augenrules[1600]: No rules Jan 17 11:55:35.171515 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 11:55:35.172769 sudo[1578]: pam_unix(sudo:session): session closed for user root Jan 17 11:55:35.174281 sshd[1575]: pam_unix(sshd:session): session closed for user core Jan 17 11:55:35.186262 systemd[1]: sshd@5-10.0.0.10:22-10.0.0.1:49670.service: Deactivated successfully. Jan 17 11:55:35.187613 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 11:55:35.188813 systemd-logind[1413]: Session 6 logged out. Waiting for processes to exit. Jan 17 11:55:35.190053 systemd[1]: Started sshd@6-10.0.0.10:22-10.0.0.1:49686.service - OpenSSH per-connection server daemon (10.0.0.1:49686). Jan 17 11:55:35.190786 systemd-logind[1413]: Removed session 6. Jan 17 11:55:35.226365 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 49686 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:55:35.227581 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:55:35.231455 systemd-logind[1413]: New session 7 of user core. Jan 17 11:55:35.241108 systemd[1]: Started session-7.scope - Session 7 of User core. 
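Session 6 above is a scripted sudo sequence: delete two audit rules files, then restart audit-rules.service, which re-reads /etc/audit/rules.d and, with nothing left, loads an empty set; hence the "No rules" lines from both auditctl and augenrules. The same steps by hand, transcribed from the COMMAND= fields above:

    # what session 6 executed via sudo
    sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo systemctl restart audit-rules    # re-runs augenrules; reports "No rules"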
Jan 17 11:55:35.292342 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 11:55:35.292633 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 11:55:35.634186 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 11:55:35.634300 (dockerd)[1629]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 11:55:35.893771 dockerd[1629]: time="2025-01-17T11:55:35.893654759Z" level=info msg="Starting up" Jan 17 11:55:36.029115 dockerd[1629]: time="2025-01-17T11:55:36.029066719Z" level=info msg="Loading containers: start." Jan 17 11:55:36.114032 kernel: Initializing XFRM netlink socket Jan 17 11:55:36.175334 systemd-networkd[1373]: docker0: Link UP Jan 17 11:55:36.193107 dockerd[1629]: time="2025-01-17T11:55:36.193053519Z" level=info msg="Loading containers: done." Jan 17 11:55:36.206229 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3083085436-merged.mount: Deactivated successfully. Jan 17 11:55:36.207604 dockerd[1629]: time="2025-01-17T11:55:36.207199479Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 11:55:36.207604 dockerd[1629]: time="2025-01-17T11:55:36.207303759Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 11:55:36.207604 dockerd[1629]: time="2025-01-17T11:55:36.207400599Z" level=info msg="Daemon has completed initialization" Jan 17 11:55:36.233562 dockerd[1629]: time="2025-01-17T11:55:36.233441319Z" level=info msg="API listen on /run/docker.sock" Jan 17 11:55:36.233674 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 11:55:37.042376 containerd[1440]: time="2025-01-17T11:55:37.042337879Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 17 11:55:37.702574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount488482934.mount: Deactivated successfully. 
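The overlay2 warning above ("Not using native diff ... kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled") is informational: dockerd falls back to the slower naive diff driver when computing layer diffs (mainly affecting image builds), while containers run normally on overlay2. The driver and daemon version it settled on can be confirmed with:

    # check the storage driver and server version the daemon is using
    docker info --format '{{.Driver}} {{.ServerVersion}}'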
Jan 17 11:55:38.853309 containerd[1440]: time="2025-01-17T11:55:38.853259559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:55:38.856302 containerd[1440]: time="2025-01-17T11:55:38.856172959Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29864937" Jan 17 11:55:38.857669 containerd[1440]: time="2025-01-17T11:55:38.857638119Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:55:38.860754 containerd[1440]: time="2025-01-17T11:55:38.860695359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:55:38.862052 containerd[1440]: time="2025-01-17T11:55:38.862006479Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 1.81962492s" Jan 17 11:55:38.862052 containerd[1440]: time="2025-01-17T11:55:38.862050719Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\"" Jan 17 11:55:38.881048 containerd[1440]: time="2025-01-17T11:55:38.880983319Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 17 11:55:40.332255 containerd[1440]: time="2025-01-17T11:55:40.331973159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:55:40.333099 containerd[1440]: time="2025-01-17T11:55:40.332855359Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901563" Jan 17 11:55:40.333790 containerd[1440]: time="2025-01-17T11:55:40.333737479Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:55:40.336763 containerd[1440]: time="2025-01-17T11:55:40.336701559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:55:40.337977 containerd[1440]: time="2025-01-17T11:55:40.337948079Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 1.4568968s" Jan 17 11:55:40.338147 containerd[1440]: time="2025-01-17T11:55:40.338050719Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\"" Jan 17 11:55:40.356944 
containerd[1440]: time="2025-01-17T11:55:40.356911319Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 17 11:55:40.608900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 11:55:40.618089 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 11:55:40.710095 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 11:55:40.713569 (kubelet)[1864]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 11:55:40.810809 kubelet[1864]: E0117 11:55:40.810727 1864 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 11:55:40.813791 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 11:55:40.814005 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 11:55:41.360022 containerd[1440]: time="2025-01-17T11:55:41.359962599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:55:41.360754 containerd[1440]: time="2025-01-17T11:55:41.360704079Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164340" Jan 17 11:55:41.361333 containerd[1440]: time="2025-01-17T11:55:41.361295679Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:55:41.364532 containerd[1440]: time="2025-01-17T11:55:41.364496559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:55:41.365667 containerd[1440]: time="2025-01-17T11:55:41.365633159Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.0086826s" Jan 17 11:55:41.365694 containerd[1440]: time="2025-01-17T11:55:41.365669759Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\"" Jan 17 11:55:41.383552 containerd[1440]: time="2025-01-17T11:55:41.383488159Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 17 11:55:42.371049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1398273807.mount: Deactivated successfully. 
Jan 17 11:55:42.563960 containerd[1440]: time="2025-01-17T11:55:42.563907039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:55:42.564523 containerd[1440]: time="2025-01-17T11:55:42.564486319Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662714" Jan 17 11:55:42.565176 containerd[1440]: time="2025-01-17T11:55:42.565149879Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:55:42.567088 containerd[1440]: time="2025-01-17T11:55:42.567038719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:55:42.570972 containerd[1440]: time="2025-01-17T11:55:42.570903759Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.18736256s" Jan 17 11:55:42.570972 containerd[1440]: time="2025-01-17T11:55:42.570962159Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 17 11:55:42.588999 containerd[1440]: time="2025-01-17T11:55:42.588948759Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 11:55:43.317963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3770723416.mount: Deactivated successfully. 
Jan 17 11:55:44.008621 containerd[1440]: time="2025-01-17T11:55:44.008573159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:55:44.009672 containerd[1440]: time="2025-01-17T11:55:44.009420919Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 17 11:55:44.010459 containerd[1440]: time="2025-01-17T11:55:44.010426799Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:55:44.013379 containerd[1440]: time="2025-01-17T11:55:44.013345959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:55:44.014739 containerd[1440]: time="2025-01-17T11:55:44.014606319Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.42552716s" Jan 17 11:55:44.014739 containerd[1440]: time="2025-01-17T11:55:44.014644199Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 17 11:55:44.033150 containerd[1440]: time="2025-01-17T11:55:44.033115439Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 17 11:55:44.469019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1170346903.mount: Deactivated successfully. 
Jan 17 11:55:44.474285 containerd[1440]: time="2025-01-17T11:55:44.473964519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:55:44.475090 containerd[1440]: time="2025-01-17T11:55:44.475049159Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jan 17 11:55:44.475962 containerd[1440]: time="2025-01-17T11:55:44.475902079Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:55:44.477933 containerd[1440]: time="2025-01-17T11:55:44.477881039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:55:44.479002 containerd[1440]: time="2025-01-17T11:55:44.478968039Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 445.81664ms" Jan 17 11:55:44.479002 containerd[1440]: time="2025-01-17T11:55:44.478999999Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 17 11:55:44.496887 containerd[1440]: time="2025-01-17T11:55:44.496861239Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 17 11:55:45.153664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount389437556.mount: Deactivated successfully. Jan 17 11:55:46.807815 containerd[1440]: time="2025-01-17T11:55:46.807752479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:55:46.808583 containerd[1440]: time="2025-01-17T11:55:46.808497319Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Jan 17 11:55:46.809249 containerd[1440]: time="2025-01-17T11:55:46.809213159Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:55:46.812598 containerd[1440]: time="2025-01-17T11:55:46.812566679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:55:46.814004 containerd[1440]: time="2025-01-17T11:55:46.813968719Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.3170774s" Jan 17 11:55:46.814044 containerd[1440]: time="2025-01-17T11:55:46.814005999Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 17 11:55:50.858825 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
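The pull sequence from 11:55:37 to 11:55:46 above covers exactly the control-plane image set for Kubernetes v1.30.9 (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, etcd). That is consistent with a pre-pull along these lines; the actual invocation is not shown in the log, so treat this as a guess:

    # hypothetical pre-pull matching the images fetched above
    kubeadm config images pull --kubernetes-version v1.30.9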
Jan 17 11:55:50.868107 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 11:55:50.990883 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 11:55:50.994874 (kubelet)[2089]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 11:55:51.033007 kubelet[2089]: E0117 11:55:51.032964 2089 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 11:55:51.035337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 11:55:51.035483 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 11:55:51.270035 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 11:55:51.281210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 11:55:51.300599 systemd[1]: Reloading requested from client PID 2104 ('systemctl') (unit session-7.scope)... Jan 17 11:55:51.300617 systemd[1]: Reloading... Jan 17 11:55:51.368947 zram_generator::config[2142]: No configuration found. Jan 17 11:55:51.508224 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 11:55:51.562955 systemd[1]: Reloading finished in 262 ms. Jan 17 11:55:51.606001 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 11:55:51.606068 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 11:55:51.606994 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 11:55:51.609241 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 11:55:51.703113 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 11:55:51.706584 (kubelet)[2188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 11:55:51.745366 kubelet[2188]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 11:55:51.745366 kubelet[2188]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 11:55:51.745366 kubelet[2188]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
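The three deprecation warnings above all point the same way: kubelet flags are migrating into the file passed via --config. Under KubeletConfiguration v1beta1, which kubelet v1.30 accepts, the flag equivalents would look roughly like the sketch below. On a kubeadm node this file is normally generated and owned by kubeadm, so this is illustrative only; the endpoint and plugin dir are taken from values elsewhere in this log:

    # sketch: flag equivalents inside /var/lib/kubelet/config.yaml
    cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock   # --container-runtime-endpoint
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/   # --volume-plugin-dir
    EOF

--pod-infra-container-image has no config-file equivalent; as the warning says, newer kubelets take the sandbox image from the CRI runtime instead.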
Jan 17 11:55:51.745676 kubelet[2188]: I0117 11:55:51.745518 2188 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 11:55:53.019779 kubelet[2188]: I0117 11:55:53.019158 2188 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 17 11:55:53.019779 kubelet[2188]: I0117 11:55:53.019194 2188 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 11:55:53.019779 kubelet[2188]: I0117 11:55:53.019506 2188 server.go:927] "Client rotation is on, will bootstrap in background" Jan 17 11:55:53.065743 kubelet[2188]: E0117 11:55:53.065716 2188 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.10:6443: connect: connection refused Jan 17 11:55:53.065876 kubelet[2188]: I0117 11:55:53.065767 2188 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 11:55:53.075060 kubelet[2188]: I0117 11:55:53.075034 2188 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 11:55:53.076196 kubelet[2188]: I0117 11:55:53.076160 2188 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 11:55:53.076395 kubelet[2188]: I0117 11:55:53.076197 2188 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 11:55:53.076477 kubelet[2188]: I0117 11:55:53.076458 2188 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 11:55:53.076477 kubelet[2188]: I0117 11:55:53.076468 2188 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 11:55:53.076718 kubelet[2188]: I0117 11:55:53.076704 2188 state_mem.go:36] "Initialized new in-memory state store" Jan 17 
11:55:53.077843 kubelet[2188]: I0117 11:55:53.077772 2188 kubelet.go:400] "Attempting to sync node with API server" Jan 17 11:55:53.077843 kubelet[2188]: I0117 11:55:53.077792 2188 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 11:55:53.078852 kubelet[2188]: I0117 11:55:53.077934 2188 kubelet.go:312] "Adding apiserver pod source" Jan 17 11:55:53.078852 kubelet[2188]: I0117 11:55:53.078122 2188 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 11:55:53.078852 kubelet[2188]: W0117 11:55:53.078612 2188 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 17 11:55:53.078852 kubelet[2188]: E0117 11:55:53.078661 2188 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 17 11:55:53.078852 kubelet[2188]: W0117 11:55:53.078705 2188 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 17 11:55:53.078852 kubelet[2188]: E0117 11:55:53.078729 2188 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 17 11:55:53.081051 kubelet[2188]: I0117 11:55:53.080966 2188 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 11:55:53.081375 kubelet[2188]: I0117 11:55:53.081342 2188 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 11:55:53.081588 kubelet[2188]: W0117 11:55:53.081575 2188 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
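"Adding static pod path" above means this kubelet also watches /etc/kubernetes/manifests and runs whatever pod manifests appear there, independent of (and here, before) any reachable API server, which is how the control plane below bootstraps itself while 10.0.0.10:6443 still refuses connections. A toy static pod for illustration; the file and pod are hypothetical:

    # sketch: any manifest dropped here becomes a kubelet-managed static pod
    cat <<'EOF' | sudo tee /etc/kubernetes/manifests/hello.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: hello
      namespace: kube-system
    spec:
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9
    EOF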
Jan 17 11:55:53.084198 kubelet[2188]: I0117 11:55:53.084114 2188 server.go:1264] "Started kubelet" Jan 17 11:55:53.084841 kubelet[2188]: I0117 11:55:53.084242 2188 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 11:55:53.085847 kubelet[2188]: I0117 11:55:53.085720 2188 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 11:55:53.085907 kubelet[2188]: I0117 11:55:53.085882 2188 server.go:455] "Adding debug handlers to kubelet server" Jan 17 11:55:53.087504 kubelet[2188]: I0117 11:55:53.087187 2188 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 11:55:53.087504 kubelet[2188]: I0117 11:55:53.087401 2188 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 11:55:53.089895 kubelet[2188]: E0117 11:55:53.089867 2188 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 11:55:53.090035 kubelet[2188]: I0117 11:55:53.090017 2188 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 11:55:53.090495 kubelet[2188]: I0117 11:55:53.090471 2188 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 17 11:55:53.091310 kubelet[2188]: E0117 11:55:53.091118 2188 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="200ms" Jan 17 11:55:53.091862 kubelet[2188]: I0117 11:55:53.091836 2188 reconciler.go:26] "Reconciler: start to sync state" Jan 17 11:55:53.092233 kubelet[2188]: W0117 11:55:53.092181 2188 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 17 11:55:53.092283 kubelet[2188]: E0117 11:55:53.092244 2188 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 17 11:55:53.093074 kubelet[2188]: I0117 11:55:53.093041 2188 factory.go:221] Registration of the systemd container factory successfully Jan 17 11:55:53.093377 kubelet[2188]: I0117 11:55:53.093136 2188 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 11:55:53.093377 kubelet[2188]: E0117 11:55:53.093157 2188 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.10:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.10:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181b78d8a098a4e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-17 11:55:53.084085479 +0000 UTC m=+1.374685921,LastTimestamp:2025-01-17 11:55:53.084085479 +0000 UTC m=+1.374685921,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 11:55:53.095051 kubelet[2188]: I0117 11:55:53.095026 2188 factory.go:221] Registration of the containerd container factory successfully Jan 17 11:55:53.101105 kubelet[2188]: E0117 11:55:53.101084 2188 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 11:55:53.110207 kubelet[2188]: I0117 11:55:53.110187 2188 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 11:55:53.110207 kubelet[2188]: I0117 11:55:53.110202 2188 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 11:55:53.110322 kubelet[2188]: I0117 11:55:53.110225 2188 state_mem.go:36] "Initialized new in-memory state store" Jan 17 11:55:53.110891 kubelet[2188]: I0117 11:55:53.110853 2188 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 11:55:53.112026 kubelet[2188]: I0117 11:55:53.111962 2188 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 11:55:53.112026 kubelet[2188]: I0117 11:55:53.112024 2188 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 11:55:53.112115 kubelet[2188]: I0117 11:55:53.112042 2188 kubelet.go:2337] "Starting kubelet main sync loop" Jan 17 11:55:53.112115 kubelet[2188]: E0117 11:55:53.112083 2188 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 11:55:53.112720 kubelet[2188]: W0117 11:55:53.112608 2188 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 17 11:55:53.112720 kubelet[2188]: E0117 11:55:53.112660 2188 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 17 11:55:53.191652 kubelet[2188]: I0117 11:55:53.191607 2188 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 11:55:53.191958 kubelet[2188]: E0117 11:55:53.191903 2188 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Jan 17 11:55:53.212418 kubelet[2188]: E0117 11:55:53.212375 2188 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 11:55:53.215692 kubelet[2188]: I0117 11:55:53.215652 2188 policy_none.go:49] "None policy: Start" Jan 17 11:55:53.216956 kubelet[2188]: I0117 11:55:53.216925 2188 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 11:55:53.216956 kubelet[2188]: I0117 11:55:53.216955 2188 state_mem.go:35] "Initializing new in-memory state store" Jan 17 11:55:53.223881 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 11:55:53.237798 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 11:55:53.244752 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
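The three slices created above (kubepods.slice with burstable and besteffort children) are the kubelet's QoS cgroup hierarchy under the systemd cgroup driver (CgroupDriver:"systemd" in the node config dump above); each pod cgroup is placed under one of them according to its QoS class, as the per-pod kubepods-burstable-pod*.slice units below show. The tree can be inspected with:

    # show the pod cgroup hierarchy the kubelet just created
    systemd-cgls /kubepods.slice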
Jan 17 11:55:53.255282 kubelet[2188]: I0117 11:55:53.254861 2188 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 11:55:53.255282 kubelet[2188]: I0117 11:55:53.255071 2188 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 11:55:53.255282 kubelet[2188]: I0117 11:55:53.255169 2188 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 11:55:53.256370 kubelet[2188]: E0117 11:55:53.256347 2188 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 11:55:53.291652 kubelet[2188]: E0117 11:55:53.291558 2188 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="400ms" Jan 17 11:55:53.393617 kubelet[2188]: I0117 11:55:53.393589 2188 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 11:55:53.393943 kubelet[2188]: E0117 11:55:53.393894 2188 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Jan 17 11:55:53.413096 kubelet[2188]: I0117 11:55:53.413050 2188 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 17 11:55:53.414224 kubelet[2188]: I0117 11:55:53.414020 2188 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 17 11:55:53.415150 kubelet[2188]: I0117 11:55:53.414997 2188 topology_manager.go:215] "Topology Admit Handler" podUID="85c4b3de0a85712829077812fe8b5c22" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 17 11:55:53.420976 systemd[1]: Created slice kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice - libcontainer container kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice. Jan 17 11:55:53.444734 systemd[1]: Created slice kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice - libcontainer container kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice. Jan 17 11:55:53.449376 systemd[1]: Created slice kubepods-burstable-pod85c4b3de0a85712829077812fe8b5c22.slice - libcontainer container kubepods-burstable-pod85c4b3de0a85712829077812fe8b5c22.slice. 
Jan 17 11:55:53.493863 kubelet[2188]: I0117 11:55:53.493798 2188 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/85c4b3de0a85712829077812fe8b5c22-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"85c4b3de0a85712829077812fe8b5c22\") " pod="kube-system/kube-apiserver-localhost" Jan 17 11:55:53.493863 kubelet[2188]: I0117 11:55:53.493835 2188 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:55:53.493863 kubelet[2188]: I0117 11:55:53.493855 2188 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 17 11:55:53.493863 kubelet[2188]: I0117 11:55:53.493872 2188 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:55:53.494083 kubelet[2188]: I0117 11:55:53.493887 2188 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:55:53.494083 kubelet[2188]: I0117 11:55:53.493904 2188 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/85c4b3de0a85712829077812fe8b5c22-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"85c4b3de0a85712829077812fe8b5c22\") " pod="kube-system/kube-apiserver-localhost" Jan 17 11:55:53.494083 kubelet[2188]: I0117 11:55:53.493936 2188 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/85c4b3de0a85712829077812fe8b5c22-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"85c4b3de0a85712829077812fe8b5c22\") " pod="kube-system/kube-apiserver-localhost" Jan 17 11:55:53.494083 kubelet[2188]: I0117 11:55:53.493952 2188 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:55:53.494083 kubelet[2188]: I0117 11:55:53.493966 2188 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 17 11:55:53.692206 kubelet[2188]: E0117 11:55:53.692150 2188 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="800ms" Jan 17 11:55:53.742054 kubelet[2188]: E0117 11:55:53.742013 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:55:53.742742 containerd[1440]: time="2025-01-17T11:55:53.742645799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}" Jan 17 11:55:53.748922 kubelet[2188]: E0117 11:55:53.748890 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:55:53.749335 containerd[1440]: time="2025-01-17T11:55:53.749296559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}" Jan 17 11:55:53.751793 kubelet[2188]: E0117 11:55:53.751755 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:55:53.752248 containerd[1440]: time="2025-01-17T11:55:53.752110479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:85c4b3de0a85712829077812fe8b5c22,Namespace:kube-system,Attempt:0,}" Jan 17 11:55:53.795294 kubelet[2188]: I0117 11:55:53.795246 2188 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 11:55:53.796051 kubelet[2188]: E0117 11:55:53.795612 2188 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Jan 17 11:55:53.910820 kubelet[2188]: W0117 11:55:53.910755 2188 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 17 11:55:53.910820 kubelet[2188]: E0117 11:55:53.910820 2188 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 17 11:55:53.930289 kubelet[2188]: W0117 11:55:53.930149 2188 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 17 11:55:53.930289 kubelet[2188]: E0117 11:55:53.930192 2188 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 17 11:55:54.204145 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2829806342.mount: Deactivated successfully. Jan 17 11:55:54.208788 containerd[1440]: time="2025-01-17T11:55:54.208728919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 11:55:54.210371 containerd[1440]: time="2025-01-17T11:55:54.210325759Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 11:55:54.211772 containerd[1440]: time="2025-01-17T11:55:54.211733759Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 17 11:55:54.212281 containerd[1440]: time="2025-01-17T11:55:54.212255879Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 11:55:54.212967 containerd[1440]: time="2025-01-17T11:55:54.212939719Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 11:55:54.214014 containerd[1440]: time="2025-01-17T11:55:54.213983559Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 11:55:54.214357 containerd[1440]: time="2025-01-17T11:55:54.214321559Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 11:55:54.217877 containerd[1440]: time="2025-01-17T11:55:54.217838919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 11:55:54.218878 containerd[1440]: time="2025-01-17T11:55:54.218842199Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 469.33272ms" Jan 17 11:55:54.219602 containerd[1440]: time="2025-01-17T11:55:54.219471439Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 476.74804ms" Jan 17 11:55:54.222045 containerd[1440]: time="2025-01-17T11:55:54.222005519Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 469.83344ms" Jan 17 11:55:54.347690 containerd[1440]: time="2025-01-17T11:55:54.347610679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 11:55:54.347690 containerd[1440]: time="2025-01-17T11:55:54.347669119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 11:55:54.347690 containerd[1440]: time="2025-01-17T11:55:54.347685399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:55:54.347940 containerd[1440]: time="2025-01-17T11:55:54.347760639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:55:54.348639 containerd[1440]: time="2025-01-17T11:55:54.348455279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 11:55:54.348713 containerd[1440]: time="2025-01-17T11:55:54.348677319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 11:55:54.348713 containerd[1440]: time="2025-01-17T11:55:54.348697879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:55:54.348960 containerd[1440]: time="2025-01-17T11:55:54.348779639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:55:54.349114 containerd[1440]: time="2025-01-17T11:55:54.348903999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 11:55:54.349161 containerd[1440]: time="2025-01-17T11:55:54.349101639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 11:55:54.349161 containerd[1440]: time="2025-01-17T11:55:54.349119839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:55:54.349241 containerd[1440]: time="2025-01-17T11:55:54.349195319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:55:54.376081 systemd[1]: Started cri-containerd-847d4853bbd992c804ffcb2670debc8b38a81f41fc92ee2992f2c89f1207a23b.scope - libcontainer container 847d4853bbd992c804ffcb2670debc8b38a81f41fc92ee2992f2c89f1207a23b. Jan 17 11:55:54.377287 systemd[1]: Started cri-containerd-c189b63167f34d910be33ea6ef08b95310ac2d57df694a83222931b7f5f20e46.scope - libcontainer container c189b63167f34d910be33ea6ef08b95310ac2d57df694a83222931b7f5f20e46. Jan 17 11:55:54.380474 systemd[1]: Started cri-containerd-e6771823ecf0ff1bb90cc0e6047ec7b79c435e456764d00203ab9448c6602896.scope - libcontainer container e6771823ecf0ff1bb90cc0e6047ec7b79c435e456764d00203ab9448c6602896. 
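
The "Failed to ensure lease exists, will retry" entries above show the kubelet's lease controller retrying against an apiserver that is not yet serving on 10.0.0.10:6443. As an illustrative sketch only (not part of this log), the same Lease object can be fetched with client-go once the apiserver is up; the kubeconfig path here is an assumption:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path; the kubelet uses its own bootstrap credentials.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Mirrors the GET the kubelet retries:
        // /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost
        lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(
            context.TODO(), "localhost", metav1.GetOptions{})
        if err != nil {
            fmt.Println("lease not available yet:", err) // e.g. connection refused during startup
            return
        }
        if lease.Spec.HolderIdentity != nil {
            fmt.Println("held by:", *lease.Spec.HolderIdentity, "renewed:", lease.Spec.RenewTime)
        }
    }
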
Jan 17 11:55:54.410148 containerd[1440]: time="2025-01-17T11:55:54.410097119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c189b63167f34d910be33ea6ef08b95310ac2d57df694a83222931b7f5f20e46\"" Jan 17 11:55:54.411479 kubelet[2188]: E0117 11:55:54.411392 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:55:54.412984 containerd[1440]: time="2025-01-17T11:55:54.412955359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"847d4853bbd992c804ffcb2670debc8b38a81f41fc92ee2992f2c89f1207a23b\"" Jan 17 11:55:54.413568 kubelet[2188]: E0117 11:55:54.413477 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:55:54.414594 containerd[1440]: time="2025-01-17T11:55:54.414567559Z" level=info msg="CreateContainer within sandbox \"c189b63167f34d910be33ea6ef08b95310ac2d57df694a83222931b7f5f20e46\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 11:55:54.415062 containerd[1440]: time="2025-01-17T11:55:54.415027439Z" level=info msg="CreateContainer within sandbox \"847d4853bbd992c804ffcb2670debc8b38a81f41fc92ee2992f2c89f1207a23b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 11:55:54.420869 containerd[1440]: time="2025-01-17T11:55:54.420774719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:85c4b3de0a85712829077812fe8b5c22,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6771823ecf0ff1bb90cc0e6047ec7b79c435e456764d00203ab9448c6602896\"" Jan 17 11:55:54.421370 kubelet[2188]: E0117 11:55:54.421344 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:55:54.423675 containerd[1440]: time="2025-01-17T11:55:54.423643399Z" level=info msg="CreateContainer within sandbox \"e6771823ecf0ff1bb90cc0e6047ec7b79c435e456764d00203ab9448c6602896\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 11:55:54.437431 containerd[1440]: time="2025-01-17T11:55:54.437388399Z" level=info msg="CreateContainer within sandbox \"c189b63167f34d910be33ea6ef08b95310ac2d57df694a83222931b7f5f20e46\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"df0779d9a9001b98b33aa4af5ec45b464b5337038e7de35e8f4f5b32e490c47e\"" Jan 17 11:55:54.440163 containerd[1440]: time="2025-01-17T11:55:54.440136279Z" level=info msg="StartContainer for \"df0779d9a9001b98b33aa4af5ec45b464b5337038e7de35e8f4f5b32e490c47e\"" Jan 17 11:55:54.440937 containerd[1440]: time="2025-01-17T11:55:54.440846399Z" level=info msg="CreateContainer within sandbox \"e6771823ecf0ff1bb90cc0e6047ec7b79c435e456764d00203ab9448c6602896\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"46935e8e072030f6267a26f723650cfbff1107049a2d6211cb571e668caf43d5\"" Jan 17 11:55:54.441391 containerd[1440]: time="2025-01-17T11:55:54.441284719Z" level=info msg="StartContainer for \"46935e8e072030f6267a26f723650cfbff1107049a2d6211cb571e668caf43d5\"" Jan 17 
11:55:54.442453 containerd[1440]: time="2025-01-17T11:55:54.442407039Z" level=info msg="CreateContainer within sandbox \"847d4853bbd992c804ffcb2670debc8b38a81f41fc92ee2992f2c89f1207a23b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1e4bc03cba20a030cdab9741c5f367e1543189b64bb053c7d8cffb869f423b8b\"" Jan 17 11:55:54.443696 containerd[1440]: time="2025-01-17T11:55:54.442827999Z" level=info msg="StartContainer for \"1e4bc03cba20a030cdab9741c5f367e1543189b64bb053c7d8cffb869f423b8b\"" Jan 17 11:55:54.469081 systemd[1]: Started cri-containerd-1e4bc03cba20a030cdab9741c5f367e1543189b64bb053c7d8cffb869f423b8b.scope - libcontainer container 1e4bc03cba20a030cdab9741c5f367e1543189b64bb053c7d8cffb869f423b8b. Jan 17 11:55:54.470285 systemd[1]: Started cri-containerd-46935e8e072030f6267a26f723650cfbff1107049a2d6211cb571e668caf43d5.scope - libcontainer container 46935e8e072030f6267a26f723650cfbff1107049a2d6211cb571e668caf43d5. Jan 17 11:55:54.471257 systemd[1]: Started cri-containerd-df0779d9a9001b98b33aa4af5ec45b464b5337038e7de35e8f4f5b32e490c47e.scope - libcontainer container df0779d9a9001b98b33aa4af5ec45b464b5337038e7de35e8f4f5b32e490c47e. Jan 17 11:55:54.493706 kubelet[2188]: E0117 11:55:54.493573 2188 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="1.6s" Jan 17 11:55:54.520606 containerd[1440]: time="2025-01-17T11:55:54.515318759Z" level=info msg="StartContainer for \"1e4bc03cba20a030cdab9741c5f367e1543189b64bb053c7d8cffb869f423b8b\" returns successfully" Jan 17 11:55:54.520606 containerd[1440]: time="2025-01-17T11:55:54.515346479Z" level=info msg="StartContainer for \"46935e8e072030f6267a26f723650cfbff1107049a2d6211cb571e668caf43d5\" returns successfully" Jan 17 11:55:54.520606 containerd[1440]: time="2025-01-17T11:55:54.515336719Z" level=info msg="StartContainer for \"df0779d9a9001b98b33aa4af5ec45b464b5337038e7de35e8f4f5b32e490c47e\" returns successfully" Jan 17 11:55:54.549740 kubelet[2188]: W0117 11:55:54.549686 2188 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 17 11:55:54.549830 kubelet[2188]: E0117 11:55:54.549750 2188 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 17 11:55:54.604302 kubelet[2188]: I0117 11:55:54.604273 2188 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 11:55:54.604541 kubelet[2188]: E0117 11:55:54.604517 2188 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Jan 17 11:55:54.638190 kubelet[2188]: W0117 11:55:54.638133 2188 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 17 11:55:54.638266 kubelet[2188]: E0117 11:55:54.638196 2188 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Jan 17 11:55:55.120101 kubelet[2188]: E0117 11:55:55.120067 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:55:55.124157 kubelet[2188]: E0117 11:55:55.124139 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:55:55.124926 kubelet[2188]: E0117 11:55:55.124897 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:55:56.080072 kubelet[2188]: I0117 11:55:56.080027 2188 apiserver.go:52] "Watching apiserver" Jan 17 11:55:56.091615 kubelet[2188]: I0117 11:55:56.091574 2188 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 17 11:55:56.096608 kubelet[2188]: E0117 11:55:56.096586 2188 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 17 11:55:56.126423 kubelet[2188]: E0117 11:55:56.126385 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:55:56.126956 kubelet[2188]: E0117 11:55:56.126939 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:55:56.131598 kubelet[2188]: E0117 11:55:56.131570 2188 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 17 11:55:56.205520 kubelet[2188]: I0117 11:55:56.205451 2188 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 11:55:56.214485 kubelet[2188]: I0117 11:55:56.214370 2188 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 17 11:55:57.670297 systemd[1]: Reloading requested from client PID 2466 ('systemctl') (unit session-7.scope)... Jan 17 11:55:57.670312 systemd[1]: Reloading... Jan 17 11:55:57.737096 zram_generator::config[2505]: No configuration found. Jan 17 11:55:57.822425 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 11:55:57.889173 systemd[1]: Reloading finished in 218 ms. Jan 17 11:55:57.927015 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
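
The recurring dns.go:153 "Nameserver limits exceeded" warnings reflect the kubelet capping resolv.conf at three nameservers (the glibc resolver limit) and dropping the rest; the applied line "1.1.1.1 1.0.0.1 8.8.8.8" is exactly the first three. A minimal sketch of that truncation, assuming only a standard resolv.conf layout:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // glibc resolver limit that the kubelet enforces

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if err := sc.Err(); err != nil {
            panic(err)
        }
        if len(servers) > maxNameservers {
            fmt.Printf("Nameserver limits exceeded: keeping %v, omitting %v\n",
                servers[:maxNameservers], servers[maxNameservers:])
        } else {
            fmt.Println("nameservers:", servers)
        }
    }
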
Jan 17 11:55:57.927384 kubelet[2188]: E0117 11:55:57.927033 2188 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.181b78d8a098a4e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-17 11:55:53.084085479 +0000 UTC m=+1.374685921,LastTimestamp:2025-01-17 11:55:53.084085479 +0000 UTC m=+1.374685921,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 11:55:57.942741 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 11:55:57.943007 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 11:55:57.943053 systemd[1]: kubelet.service: Consumed 1.707s CPU time, 114.5M memory peak, 0B memory swap peak. Jan 17 11:55:57.952461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 11:55:58.039422 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 11:55:58.042846 (kubelet)[2547]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 11:55:58.080216 kubelet[2547]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 11:55:58.080216 kubelet[2547]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 11:55:58.080216 kubelet[2547]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 11:55:58.080603 kubelet[2547]: I0117 11:55:58.080247 2547 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 11:55:58.084259 kubelet[2547]: I0117 11:55:58.084218 2547 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 17 11:55:58.084259 kubelet[2547]: I0117 11:55:58.084244 2547 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 11:55:58.084416 kubelet[2547]: I0117 11:55:58.084401 2547 server.go:927] "Client rotation is on, will bootstrap in background" Jan 17 11:55:58.085706 kubelet[2547]: I0117 11:55:58.085622 2547 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 11:55:58.087246 kubelet[2547]: I0117 11:55:58.086707 2547 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 11:55:58.095708 kubelet[2547]: I0117 11:55:58.095682 2547 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 11:55:58.095908 kubelet[2547]: I0117 11:55:58.095867 2547 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 11:55:58.096086 kubelet[2547]: I0117 11:55:58.095898 2547 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 11:55:58.096086 kubelet[2547]: I0117 11:55:58.096086 2547 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 11:55:58.096212 kubelet[2547]: I0117 11:55:58.096095 2547 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 11:55:58.096212 kubelet[2547]: I0117 11:55:58.096125 2547 state_mem.go:36] "Initialized new in-memory state store" Jan 17 11:55:58.096263 kubelet[2547]: I0117 11:55:58.096230 2547 kubelet.go:400] "Attempting to sync node with API server" Jan 17 11:55:58.096263 kubelet[2547]: I0117 11:55:58.096242 2547 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 11:55:58.096299 kubelet[2547]: I0117 11:55:58.096279 2547 kubelet.go:312] "Adding apiserver pod source" Jan 17 11:55:58.096299 kubelet[2547]: I0117 11:55:58.096294 2547 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 11:55:58.099646 kubelet[2547]: I0117 11:55:58.096976 2547 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 11:55:58.099646 kubelet[2547]: I0117 11:55:58.097143 2547 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 11:55:58.099646 kubelet[2547]: I0117 11:55:58.097482 2547 server.go:1264] "Started kubelet" Jan 17 11:55:58.099646 kubelet[2547]: I0117 11:55:58.098248 2547 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 11:55:58.099646 kubelet[2547]: I0117 11:55:58.098475 2547 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 
11:55:58.099646 kubelet[2547]: I0117 11:55:58.098513 2547 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 11:55:58.099646 kubelet[2547]: I0117 11:55:58.099179 2547 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 11:55:58.099646 kubelet[2547]: I0117 11:55:58.099326 2547 server.go:455] "Adding debug handlers to kubelet server" Jan 17 11:55:58.100278 kubelet[2547]: I0117 11:55:58.100249 2547 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 11:55:58.100339 kubelet[2547]: I0117 11:55:58.100330 2547 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 17 11:55:58.100483 kubelet[2547]: I0117 11:55:58.100458 2547 reconciler.go:26] "Reconciler: start to sync state" Jan 17 11:55:58.102575 kubelet[2547]: I0117 11:55:58.102231 2547 factory.go:221] Registration of the systemd container factory successfully Jan 17 11:55:58.102575 kubelet[2547]: I0117 11:55:58.102343 2547 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 11:55:58.102831 kubelet[2547]: E0117 11:55:58.102809 2547 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 11:55:58.103826 kubelet[2547]: I0117 11:55:58.103805 2547 factory.go:221] Registration of the containerd container factory successfully Jan 17 11:55:58.119568 kubelet[2547]: I0117 11:55:58.119526 2547 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 11:55:58.124596 kubelet[2547]: I0117 11:55:58.124568 2547 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 11:55:58.124596 kubelet[2547]: I0117 11:55:58.124600 2547 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 11:55:58.124703 kubelet[2547]: I0117 11:55:58.124613 2547 kubelet.go:2337] "Starting kubelet main sync loop" Jan 17 11:55:58.124703 kubelet[2547]: E0117 11:55:58.124648 2547 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 11:55:58.149372 kubelet[2547]: I0117 11:55:58.149344 2547 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 11:55:58.149372 kubelet[2547]: I0117 11:55:58.149362 2547 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 11:55:58.149372 kubelet[2547]: I0117 11:55:58.149378 2547 state_mem.go:36] "Initialized new in-memory state store" Jan 17 11:55:58.149564 kubelet[2547]: I0117 11:55:58.149507 2547 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 11:55:58.149564 kubelet[2547]: I0117 11:55:58.149523 2547 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 11:55:58.149564 kubelet[2547]: I0117 11:55:58.149539 2547 policy_none.go:49] "None policy: Start" Jan 17 11:55:58.150731 kubelet[2547]: I0117 11:55:58.150699 2547 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 11:55:58.150731 kubelet[2547]: I0117 11:55:58.150724 2547 state_mem.go:35] "Initializing new in-memory state store" Jan 17 11:55:58.150906 kubelet[2547]: I0117 11:55:58.150853 2547 state_mem.go:75] "Updated machine memory state" Jan 17 11:55:58.154633 kubelet[2547]: I0117 11:55:58.154608 2547 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 11:55:58.155900 
kubelet[2547]: I0117 11:55:58.155179 2547 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 11:55:58.155900 kubelet[2547]: I0117 11:55:58.155814 2547 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 11:55:58.205018 kubelet[2547]: I0117 11:55:58.203905 2547 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 11:55:58.211207 kubelet[2547]: I0117 11:55:58.211038 2547 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 17 11:55:58.211207 kubelet[2547]: I0117 11:55:58.211110 2547 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 17 11:55:58.225072 kubelet[2547]: I0117 11:55:58.225017 2547 topology_manager.go:215] "Topology Admit Handler" podUID="85c4b3de0a85712829077812fe8b5c22" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 17 11:55:58.225151 kubelet[2547]: I0117 11:55:58.225130 2547 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 17 11:55:58.225490 kubelet[2547]: I0117 11:55:58.225180 2547 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 17 11:55:58.401189 kubelet[2547]: I0117 11:55:58.401144 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:55:58.401189 kubelet[2547]: I0117 11:55:58.401192 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:55:58.401299 kubelet[2547]: I0117 11:55:58.401212 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:55:58.401299 kubelet[2547]: I0117 11:55:58.401229 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:55:58.401299 kubelet[2547]: I0117 11:55:58.401265 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/85c4b3de0a85712829077812fe8b5c22-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"85c4b3de0a85712829077812fe8b5c22\") " pod="kube-system/kube-apiserver-localhost" Jan 17 11:55:58.401299 kubelet[2547]: I0117 11:55:58.401280 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/85c4b3de0a85712829077812fe8b5c22-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"85c4b3de0a85712829077812fe8b5c22\") " pod="kube-system/kube-apiserver-localhost" Jan 17 11:55:58.401299 kubelet[2547]: I0117 11:55:58.401296 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 11:55:58.401419 kubelet[2547]: I0117 11:55:58.401312 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 17 11:55:58.401419 kubelet[2547]: I0117 11:55:58.401327 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/85c4b3de0a85712829077812fe8b5c22-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"85c4b3de0a85712829077812fe8b5c22\") " pod="kube-system/kube-apiserver-localhost" Jan 17 11:55:58.541533 kubelet[2547]: E0117 11:55:58.541374 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:55:58.543360 kubelet[2547]: E0117 11:55:58.543331 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:55:58.543788 kubelet[2547]: E0117 11:55:58.543754 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:55:58.668510 sudo[2586]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 17 11:55:58.668786 sudo[2586]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 17 11:55:59.091061 sudo[2586]: pam_unix(sudo:session): session closed for user root Jan 17 11:55:59.098144 kubelet[2547]: I0117 11:55:59.096842 2547 apiserver.go:52] "Watching apiserver" Jan 17 11:55:59.101562 kubelet[2547]: I0117 11:55:59.101417 2547 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 17 11:55:59.137478 kubelet[2547]: E0117 11:55:59.137439 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:55:59.150785 kubelet[2547]: E0117 11:55:59.150704 2547 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 17 11:55:59.150785 kubelet[2547]: E0117 11:55:59.150734 2547 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 17 11:55:59.151165 kubelet[2547]: E0117 11:55:59.151118 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:55:59.151206 kubelet[2547]: E0117 11:55:59.151169 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:55:59.170570 kubelet[2547]: I0117 11:55:59.170514 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.170497919 podStartE2EDuration="1.170497919s" podCreationTimestamp="2025-01-17 11:55:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 11:55:59.164254879 +0000 UTC m=+1.118027681" watchObservedRunningTime="2025-01-17 11:55:59.170497919 +0000 UTC m=+1.124270761" Jan 17 11:55:59.170710 kubelet[2547]: I0117 11:55:59.170604 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.170600359 podStartE2EDuration="1.170600359s" podCreationTimestamp="2025-01-17 11:55:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 11:55:59.170354599 +0000 UTC m=+1.124127441" watchObservedRunningTime="2025-01-17 11:55:59.170600359 +0000 UTC m=+1.124373201" Jan 17 11:55:59.179797 kubelet[2547]: I0117 11:55:59.179749 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.179736919 podStartE2EDuration="1.179736919s" podCreationTimestamp="2025-01-17 11:55:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 11:55:59.179611999 +0000 UTC m=+1.133384841" watchObservedRunningTime="2025-01-17 11:55:59.179736919 +0000 UTC m=+1.133509721" Jan 17 11:56:00.139968 kubelet[2547]: E0117 11:56:00.139733 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:00.139968 kubelet[2547]: E0117 11:56:00.139835 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:00.871798 sudo[1611]: pam_unix(sudo:session): session closed for user root Jan 17 11:56:00.873423 sshd[1608]: pam_unix(sshd:session): session closed for user core Jan 17 11:56:00.877431 systemd[1]: sshd@6-10.0.0.10:22-10.0.0.1:49686.service: Deactivated successfully. Jan 17 11:56:00.879865 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 11:56:00.880094 systemd[1]: session-7.scope: Consumed 7.009s CPU time, 188.4M memory peak, 0B memory swap peak. Jan 17 11:56:00.880635 systemd-logind[1413]: Session 7 logged out. Waiting for processes to exit. Jan 17 11:56:00.881633 systemd-logind[1413]: Removed session 7. 
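
The pod_startup_latency_tracker entries above encode simple arithmetic: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp (11:55:59.170497919 minus 11:55:58 gives the logged 1.170497919s for kube-controller-manager-localhost). A small sketch reproducing that subtraction, with the Go default timestamp layout as the only assumption:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching the timestamps as printed in the journal entries.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2025-01-17 11:55:58 +0000 UTC")
        if err != nil {
            panic(err)
        }
        observed, err := time.Parse(layout, "2025-01-17 11:55:59.170497919 +0000 UTC")
        if err != nil {
            panic(err)
        }
        fmt.Println("podStartSLOduration:", observed.Sub(created)) // 1.170497919s
    }
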
Jan 17 11:56:01.139989 kubelet[2547]: E0117 11:56:01.139868 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:01.340670 kubelet[2547]: E0117 11:56:01.340598 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:06.821820 kubelet[2547]: E0117 11:56:06.821791 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:07.150758 kubelet[2547]: E0117 11:56:07.150733 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:07.541369 kubelet[2547]: E0117 11:56:07.541146 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:08.151407 kubelet[2547]: E0117 11:56:08.151366 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:09.153683 kubelet[2547]: E0117 11:56:09.153230 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:11.345136 kubelet[2547]: E0117 11:56:11.345047 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:13.633693 update_engine[1418]: I20250117 11:56:13.633630 1418 update_attempter.cc:509] Updating boot flags... Jan 17 11:56:13.660074 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2636) Jan 17 11:56:13.692443 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2639) Jan 17 11:56:13.719978 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2639) Jan 17 11:56:14.056096 kubelet[2547]: I0117 11:56:14.055967 2547 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 11:56:14.057032 containerd[1440]: time="2025-01-17T11:56:14.056993000Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 11:56:14.057367 kubelet[2547]: I0117 11:56:14.057223 2547 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 11:56:14.154866 kubelet[2547]: I0117 11:56:14.154800 2547 topology_manager.go:215] "Topology Admit Handler" podUID="1eb160a8-a886-43fd-b4e3-16cf9ebc2486" podNamespace="kube-system" podName="kube-proxy-jxgc7" Jan 17 11:56:14.162523 kubelet[2547]: I0117 11:56:14.162484 2547 topology_manager.go:215] "Topology Admit Handler" podUID="13588dc8-6163-402e-85fd-bedbe38684ff" podNamespace="kube-system" podName="cilium-lwcjq" Jan 17 11:56:14.170388 systemd[1]: Created slice kubepods-besteffort-pod1eb160a8_a886_43fd_b4e3_16cf9ebc2486.slice - libcontainer container kubepods-besteffort-pod1eb160a8_a886_43fd_b4e3_16cf9ebc2486.slice. 
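
The slice name in the last entry is derived mechanically from the pod UID: the systemd cgroup driver prefixes kubepods-<qosClass>-pod and replaces the UID's dashes with underscores, since "-" is the hierarchy separator in systemd slice names. A sketch of the mapping, assuming only what the log itself shows (it also reproduces the burstable slice created just below):

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName mirrors the naming visible in the journal: dashes in the pod
    // UID become underscores inside the systemd slice name.
    func sliceName(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice",
            qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        fmt.Println(sliceName("besteffort", "1eb160a8-a886-43fd-b4e3-16cf9ebc2486"))
        // kubepods-besteffort-pod1eb160a8_a886_43fd_b4e3_16cf9ebc2486.slice
        fmt.Println(sliceName("burstable", "13588dc8-6163-402e-85fd-bedbe38684ff"))
        // kubepods-burstable-pod13588dc8_6163_402e_85fd_bedbe38684ff.slice
    }
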
Jan 17 11:56:14.186675 systemd[1]: Created slice kubepods-burstable-pod13588dc8_6163_402e_85fd_bedbe38684ff.slice - libcontainer container kubepods-burstable-pod13588dc8_6163_402e_85fd_bedbe38684ff.slice. Jan 17 11:56:14.214670 kubelet[2547]: I0117 11:56:14.214636 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1eb160a8-a886-43fd-b4e3-16cf9ebc2486-kube-proxy\") pod \"kube-proxy-jxgc7\" (UID: \"1eb160a8-a886-43fd-b4e3-16cf9ebc2486\") " pod="kube-system/kube-proxy-jxgc7" Jan 17 11:56:14.214894 kubelet[2547]: I0117 11:56:14.214875 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-cilium-cgroup\") pod \"cilium-lwcjq\" (UID: \"13588dc8-6163-402e-85fd-bedbe38684ff\") " pod="kube-system/cilium-lwcjq" Jan 17 11:56:14.215046 kubelet[2547]: I0117 11:56:14.215022 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/13588dc8-6163-402e-85fd-bedbe38684ff-clustermesh-secrets\") pod \"cilium-lwcjq\" (UID: \"13588dc8-6163-402e-85fd-bedbe38684ff\") " pod="kube-system/cilium-lwcjq" Jan 17 11:56:14.215162 kubelet[2547]: I0117 11:56:14.215144 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfs26\" (UniqueName: \"kubernetes.io/projected/13588dc8-6163-402e-85fd-bedbe38684ff-kube-api-access-xfs26\") pod \"cilium-lwcjq\" (UID: \"13588dc8-6163-402e-85fd-bedbe38684ff\") " pod="kube-system/cilium-lwcjq" Jan 17 11:56:14.215258 kubelet[2547]: I0117 11:56:14.215243 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-cilium-run\") pod \"cilium-lwcjq\" (UID: \"13588dc8-6163-402e-85fd-bedbe38684ff\") " pod="kube-system/cilium-lwcjq" Jan 17 11:56:14.215349 kubelet[2547]: I0117 11:56:14.215334 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1eb160a8-a886-43fd-b4e3-16cf9ebc2486-xtables-lock\") pod \"kube-proxy-jxgc7\" (UID: \"1eb160a8-a886-43fd-b4e3-16cf9ebc2486\") " pod="kube-system/kube-proxy-jxgc7" Jan 17 11:56:14.215441 kubelet[2547]: I0117 11:56:14.215413 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbcmc\" (UniqueName: \"kubernetes.io/projected/1eb160a8-a886-43fd-b4e3-16cf9ebc2486-kube-api-access-xbcmc\") pod \"kube-proxy-jxgc7\" (UID: \"1eb160a8-a886-43fd-b4e3-16cf9ebc2486\") " pod="kube-system/kube-proxy-jxgc7" Jan 17 11:56:14.215531 kubelet[2547]: I0117 11:56:14.215515 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-bpf-maps\") pod \"cilium-lwcjq\" (UID: \"13588dc8-6163-402e-85fd-bedbe38684ff\") " pod="kube-system/cilium-lwcjq" Jan 17 11:56:14.215597 kubelet[2547]: I0117 11:56:14.215584 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-lib-modules\") pod \"cilium-lwcjq\" (UID: 
\"13588dc8-6163-402e-85fd-bedbe38684ff\") " pod="kube-system/cilium-lwcjq" Jan 17 11:56:14.215652 kubelet[2547]: I0117 11:56:14.215641 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-cni-path\") pod \"cilium-lwcjq\" (UID: \"13588dc8-6163-402e-85fd-bedbe38684ff\") " pod="kube-system/cilium-lwcjq" Jan 17 11:56:14.215706 kubelet[2547]: I0117 11:56:14.215696 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-etc-cni-netd\") pod \"cilium-lwcjq\" (UID: \"13588dc8-6163-402e-85fd-bedbe38684ff\") " pod="kube-system/cilium-lwcjq" Jan 17 11:56:14.215781 kubelet[2547]: I0117 11:56:14.215769 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1eb160a8-a886-43fd-b4e3-16cf9ebc2486-lib-modules\") pod \"kube-proxy-jxgc7\" (UID: \"1eb160a8-a886-43fd-b4e3-16cf9ebc2486\") " pod="kube-system/kube-proxy-jxgc7" Jan 17 11:56:14.215993 kubelet[2547]: I0117 11:56:14.215835 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-hostproc\") pod \"cilium-lwcjq\" (UID: \"13588dc8-6163-402e-85fd-bedbe38684ff\") " pod="kube-system/cilium-lwcjq" Jan 17 11:56:14.215993 kubelet[2547]: I0117 11:56:14.215858 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/13588dc8-6163-402e-85fd-bedbe38684ff-hubble-tls\") pod \"cilium-lwcjq\" (UID: \"13588dc8-6163-402e-85fd-bedbe38684ff\") " pod="kube-system/cilium-lwcjq" Jan 17 11:56:14.215993 kubelet[2547]: I0117 11:56:14.215875 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/13588dc8-6163-402e-85fd-bedbe38684ff-cilium-config-path\") pod \"cilium-lwcjq\" (UID: \"13588dc8-6163-402e-85fd-bedbe38684ff\") " pod="kube-system/cilium-lwcjq" Jan 17 11:56:14.215993 kubelet[2547]: I0117 11:56:14.215910 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-xtables-lock\") pod \"cilium-lwcjq\" (UID: \"13588dc8-6163-402e-85fd-bedbe38684ff\") " pod="kube-system/cilium-lwcjq" Jan 17 11:56:14.215993 kubelet[2547]: I0117 11:56:14.215947 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-host-proc-sys-net\") pod \"cilium-lwcjq\" (UID: \"13588dc8-6163-402e-85fd-bedbe38684ff\") " pod="kube-system/cilium-lwcjq" Jan 17 11:56:14.215993 kubelet[2547]: I0117 11:56:14.215967 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-host-proc-sys-kernel\") pod \"cilium-lwcjq\" (UID: \"13588dc8-6163-402e-85fd-bedbe38684ff\") " pod="kube-system/cilium-lwcjq" Jan 17 11:56:14.348611 kubelet[2547]: I0117 11:56:14.348494 2547 topology_manager.go:215] "Topology Admit Handler" 
podUID="60b1dc8b-14f4-44bd-bbcb-4a44e213a7d3" podNamespace="kube-system" podName="cilium-operator-599987898-bhnzp" Jan 17 11:56:14.361097 systemd[1]: Created slice kubepods-besteffort-pod60b1dc8b_14f4_44bd_bbcb_4a44e213a7d3.slice - libcontainer container kubepods-besteffort-pod60b1dc8b_14f4_44bd_bbcb_4a44e213a7d3.slice. Jan 17 11:56:14.417843 kubelet[2547]: I0117 11:56:14.417794 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/60b1dc8b-14f4-44bd-bbcb-4a44e213a7d3-cilium-config-path\") pod \"cilium-operator-599987898-bhnzp\" (UID: \"60b1dc8b-14f4-44bd-bbcb-4a44e213a7d3\") " pod="kube-system/cilium-operator-599987898-bhnzp" Jan 17 11:56:14.417843 kubelet[2547]: I0117 11:56:14.417841 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggk85\" (UniqueName: \"kubernetes.io/projected/60b1dc8b-14f4-44bd-bbcb-4a44e213a7d3-kube-api-access-ggk85\") pod \"cilium-operator-599987898-bhnzp\" (UID: \"60b1dc8b-14f4-44bd-bbcb-4a44e213a7d3\") " pod="kube-system/cilium-operator-599987898-bhnzp" Jan 17 11:56:14.481311 kubelet[2547]: E0117 11:56:14.481219 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:14.487381 containerd[1440]: time="2025-01-17T11:56:14.487340478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jxgc7,Uid:1eb160a8-a886-43fd-b4e3-16cf9ebc2486,Namespace:kube-system,Attempt:0,}" Jan 17 11:56:14.488627 kubelet[2547]: E0117 11:56:14.488593 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:14.489065 containerd[1440]: time="2025-01-17T11:56:14.489031654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lwcjq,Uid:13588dc8-6163-402e-85fd-bedbe38684ff,Namespace:kube-system,Attempt:0,}" Jan 17 11:56:14.513705 containerd[1440]: time="2025-01-17T11:56:14.513511357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 11:56:14.513705 containerd[1440]: time="2025-01-17T11:56:14.513560357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 11:56:14.513705 containerd[1440]: time="2025-01-17T11:56:14.513570557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:56:14.513705 containerd[1440]: time="2025-01-17T11:56:14.513663558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:56:14.515566 containerd[1440]: time="2025-01-17T11:56:14.515383854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 11:56:14.515566 containerd[1440]: time="2025-01-17T11:56:14.515438774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 11:56:14.517490 containerd[1440]: time="2025-01-17T11:56:14.517425112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:56:14.517675 containerd[1440]: time="2025-01-17T11:56:14.517647994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:56:14.546132 systemd[1]: Started cri-containerd-74b9a63b5f314ef009ce5078dc259575bf5af223671f78b8fdae148cfd1948b0.scope - libcontainer container 74b9a63b5f314ef009ce5078dc259575bf5af223671f78b8fdae148cfd1948b0. Jan 17 11:56:14.547462 systemd[1]: Started cri-containerd-86f6c8f6380414ae98412f2cd708abb9bd19e1e1a51ca2672dd16fd14e1605c6.scope - libcontainer container 86f6c8f6380414ae98412f2cd708abb9bd19e1e1a51ca2672dd16fd14e1605c6. Jan 17 11:56:14.567145 containerd[1440]: time="2025-01-17T11:56:14.567023964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lwcjq,Uid:13588dc8-6163-402e-85fd-bedbe38684ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"86f6c8f6380414ae98412f2cd708abb9bd19e1e1a51ca2672dd16fd14e1605c6\"" Jan 17 11:56:14.570343 containerd[1440]: time="2025-01-17T11:56:14.570311834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jxgc7,Uid:1eb160a8-a886-43fd-b4e3-16cf9ebc2486,Namespace:kube-system,Attempt:0,} returns sandbox id \"74b9a63b5f314ef009ce5078dc259575bf5af223671f78b8fdae148cfd1948b0\"" Jan 17 11:56:14.570938 kubelet[2547]: E0117 11:56:14.570890 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:14.571093 kubelet[2547]: E0117 11:56:14.571010 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:14.572389 containerd[1440]: time="2025-01-17T11:56:14.572357012Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 11:56:14.574066 containerd[1440]: time="2025-01-17T11:56:14.574028228Z" level=info msg="CreateContainer within sandbox \"74b9a63b5f314ef009ce5078dc259575bf5af223671f78b8fdae148cfd1948b0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 11:56:14.595488 containerd[1440]: time="2025-01-17T11:56:14.595404862Z" level=info msg="CreateContainer within sandbox \"74b9a63b5f314ef009ce5078dc259575bf5af223671f78b8fdae148cfd1948b0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4d84b9314d5f0d6477306668bb05b7801867b7dc22766490dadce2acd3477695\"" Jan 17 11:56:14.597983 containerd[1440]: time="2025-01-17T11:56:14.597942645Z" level=info msg="StartContainer for \"4d84b9314d5f0d6477306668bb05b7801867b7dc22766490dadce2acd3477695\"" Jan 17 11:56:14.633090 systemd[1]: Started cri-containerd-4d84b9314d5f0d6477306668bb05b7801867b7dc22766490dadce2acd3477695.scope - libcontainer container 4d84b9314d5f0d6477306668bb05b7801867b7dc22766490dadce2acd3477695. 
Jan 17 11:56:14.657358 containerd[1440]: time="2025-01-17T11:56:14.657308146Z" level=info msg="StartContainer for \"4d84b9314d5f0d6477306668bb05b7801867b7dc22766490dadce2acd3477695\" returns successfully" Jan 17 11:56:14.664704 kubelet[2547]: E0117 11:56:14.664646 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:14.666890 containerd[1440]: time="2025-01-17T11:56:14.665232738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-bhnzp,Uid:60b1dc8b-14f4-44bd-bbcb-4a44e213a7d3,Namespace:kube-system,Attempt:0,}" Jan 17 11:56:14.686075 containerd[1440]: time="2025-01-17T11:56:14.685945047Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 11:56:14.686075 containerd[1440]: time="2025-01-17T11:56:14.685991967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 11:56:14.686075 containerd[1440]: time="2025-01-17T11:56:14.686002647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:56:14.686075 containerd[1440]: time="2025-01-17T11:56:14.686068808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:56:14.705112 systemd[1]: Started cri-containerd-d601803231b00e73af1b2784798274b68a8766bf646346187f63e2942b5a137a.scope - libcontainer container d601803231b00e73af1b2784798274b68a8766bf646346187f63e2942b5a137a. Jan 17 11:56:14.734658 containerd[1440]: time="2025-01-17T11:56:14.734612410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-bhnzp,Uid:60b1dc8b-14f4-44bd-bbcb-4a44e213a7d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"d601803231b00e73af1b2784798274b68a8766bf646346187f63e2942b5a137a\"" Jan 17 11:56:14.735513 kubelet[2547]: E0117 11:56:14.735444 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:15.166101 kubelet[2547]: E0117 11:56:15.166070 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:15.174431 kubelet[2547]: I0117 11:56:15.174094 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jxgc7" podStartSLOduration=1.174078393 podStartE2EDuration="1.174078393s" podCreationTimestamp="2025-01-17 11:56:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 11:56:15.173377587 +0000 UTC m=+17.127150429" watchObservedRunningTime="2025-01-17 11:56:15.174078393 +0000 UTC m=+17.127851235" Jan 17 11:56:21.240224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3069022771.mount: Deactivated successfully. 
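
The mount unit name in the last entry is the systemd escape of /var/lib/containerd/tmpmounts/containerd-mount3069022771: path separators become "-", and literal dashes inside a component are escaped as \x2d. A simplified sketch of that encoding (real systemd-escape also handles other special characters; this covers only the case appearing here):

    package main

    import (
        "fmt"
        "strings"
    )

    // mountUnit applies the subset of systemd path escaping seen in this
    // journal: escape literal '-' as \x2d within each path component, then
    // join the components with '-' and append the unit suffix.
    func mountUnit(path string) string {
        parts := strings.Split(strings.Trim(path, "/"), "/")
        for i, p := range parts {
            parts[i] = strings.ReplaceAll(p, "-", `\x2d`)
        }
        return strings.Join(parts, "-") + ".mount"
    }

    func main() {
        fmt.Println(mountUnit("/var/lib/containerd/tmpmounts/containerd-mount3069022771"))
        // var-lib-containerd-tmpmounts-containerd\x2dmount3069022771.mount
    }
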
Jan 17 11:56:24.043550 containerd[1440]: time="2025-01-17T11:56:24.043502372Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:56:24.044488 containerd[1440]: time="2025-01-17T11:56:24.044271376Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651570" Jan 17 11:56:24.045438 containerd[1440]: time="2025-01-17T11:56:24.045158580Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:56:24.046747 containerd[1440]: time="2025-01-17T11:56:24.046718587Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.474321334s" Jan 17 11:56:24.046816 containerd[1440]: time="2025-01-17T11:56:24.046753388Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 17 11:56:24.052240 containerd[1440]: time="2025-01-17T11:56:24.052209654Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 11:56:24.055016 containerd[1440]: time="2025-01-17T11:56:24.054969427Z" level=info msg="CreateContainer within sandbox \"86f6c8f6380414ae98412f2cd708abb9bd19e1e1a51ca2672dd16fd14e1605c6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 11:56:24.068355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3780691649.mount: Deactivated successfully. Jan 17 11:56:24.069471 containerd[1440]: time="2025-01-17T11:56:24.069409896Z" level=info msg="CreateContainer within sandbox \"86f6c8f6380414ae98412f2cd708abb9bd19e1e1a51ca2672dd16fd14e1605c6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3ddae8854e3be939902ee52ed1b5e5f62cc43cd70eab8e7423f550e5ba6e3b5f\"" Jan 17 11:56:24.069999 containerd[1440]: time="2025-01-17T11:56:24.069965498Z" level=info msg="StartContainer for \"3ddae8854e3be939902ee52ed1b5e5f62cc43cd70eab8e7423f550e5ba6e3b5f\"" Jan 17 11:56:24.097072 systemd[1]: Started cri-containerd-3ddae8854e3be939902ee52ed1b5e5f62cc43cd70eab8e7423f550e5ba6e3b5f.scope - libcontainer container 3ddae8854e3be939902ee52ed1b5e5f62cc43cd70eab8e7423f550e5ba6e3b5f. Jan 17 11:56:24.117291 containerd[1440]: time="2025-01-17T11:56:24.117000963Z" level=info msg="StartContainer for \"3ddae8854e3be939902ee52ed1b5e5f62cc43cd70eab8e7423f550e5ba6e3b5f\" returns successfully" Jan 17 11:56:24.172012 systemd[1]: cri-containerd-3ddae8854e3be939902ee52ed1b5e5f62cc43cd70eab8e7423f550e5ba6e3b5f.scope: Deactivated successfully. 
Jan 17 11:56:24.215372 kubelet[2547]: E0117 11:56:24.215332 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:24.300219 containerd[1440]: time="2025-01-17T11:56:24.296357699Z" level=info msg="shim disconnected" id=3ddae8854e3be939902ee52ed1b5e5f62cc43cd70eab8e7423f550e5ba6e3b5f namespace=k8s.io Jan 17 11:56:24.300219 containerd[1440]: time="2025-01-17T11:56:24.300006277Z" level=warning msg="cleaning up after shim disconnected" id=3ddae8854e3be939902ee52ed1b5e5f62cc43cd70eab8e7423f550e5ba6e3b5f namespace=k8s.io Jan 17 11:56:24.300219 containerd[1440]: time="2025-01-17T11:56:24.300020157Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 11:56:25.065477 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ddae8854e3be939902ee52ed1b5e5f62cc43cd70eab8e7423f550e5ba6e3b5f-rootfs.mount: Deactivated successfully. Jan 17 11:56:25.218492 kubelet[2547]: E0117 11:56:25.218456 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:25.221634 containerd[1440]: time="2025-01-17T11:56:25.221576132Z" level=info msg="CreateContainer within sandbox \"86f6c8f6380414ae98412f2cd708abb9bd19e1e1a51ca2672dd16fd14e1605c6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 11:56:25.242995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1684501681.mount: Deactivated successfully. Jan 17 11:56:25.246678 containerd[1440]: time="2025-01-17T11:56:25.246635564Z" level=info msg="CreateContainer within sandbox \"86f6c8f6380414ae98412f2cd708abb9bd19e1e1a51ca2672dd16fd14e1605c6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f636b41f7fd784175e2b3e256ea452c32cca00f5a80ce6d02e0275677b6d0b57\"" Jan 17 11:56:25.247080 containerd[1440]: time="2025-01-17T11:56:25.247044566Z" level=info msg="StartContainer for \"f636b41f7fd784175e2b3e256ea452c32cca00f5a80ce6d02e0275677b6d0b57\"" Jan 17 11:56:25.270092 systemd[1]: Started cri-containerd-f636b41f7fd784175e2b3e256ea452c32cca00f5a80ce6d02e0275677b6d0b57.scope - libcontainer container f636b41f7fd784175e2b3e256ea452c32cca00f5a80ce6d02e0275677b6d0b57. Jan 17 11:56:25.292825 containerd[1440]: time="2025-01-17T11:56:25.292783291Z" level=info msg="StartContainer for \"f636b41f7fd784175e2b3e256ea452c32cca00f5a80ce6d02e0275677b6d0b57\" returns successfully" Jan 17 11:56:25.340765 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 11:56:25.341002 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 11:56:25.341064 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 17 11:56:25.348224 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 11:56:25.348405 systemd[1]: cri-containerd-f636b41f7fd784175e2b3e256ea452c32cca00f5a80ce6d02e0275677b6d0b57.scope: Deactivated successfully. 
Jan 17 11:56:25.372212 containerd[1440]: time="2025-01-17T11:56:25.372149206Z" level=info msg="shim disconnected" id=f636b41f7fd784175e2b3e256ea452c32cca00f5a80ce6d02e0275677b6d0b57 namespace=k8s.io Jan 17 11:56:25.372212 containerd[1440]: time="2025-01-17T11:56:25.372208246Z" level=warning msg="cleaning up after shim disconnected" id=f636b41f7fd784175e2b3e256ea452c32cca00f5a80ce6d02e0275677b6d0b57 namespace=k8s.io Jan 17 11:56:25.372212 containerd[1440]: time="2025-01-17T11:56:25.372218126Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 11:56:25.388587 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 11:56:26.066145 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f636b41f7fd784175e2b3e256ea452c32cca00f5a80ce6d02e0275677b6d0b57-rootfs.mount: Deactivated successfully. Jan 17 11:56:26.221685 kubelet[2547]: E0117 11:56:26.221635 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:26.225770 containerd[1440]: time="2025-01-17T11:56:26.225619404Z" level=info msg="CreateContainer within sandbox \"86f6c8f6380414ae98412f2cd708abb9bd19e1e1a51ca2672dd16fd14e1605c6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 11:56:26.245138 containerd[1440]: time="2025-01-17T11:56:26.245012366Z" level=info msg="CreateContainer within sandbox \"86f6c8f6380414ae98412f2cd708abb9bd19e1e1a51ca2672dd16fd14e1605c6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5c582c8f8c181d59598372f95ee18a63e9e3b5e4b4bcad78a606d283351c232a\"" Jan 17 11:56:26.245772 containerd[1440]: time="2025-01-17T11:56:26.245670008Z" level=info msg="StartContainer for \"5c582c8f8c181d59598372f95ee18a63e9e3b5e4b4bcad78a606d283351c232a\"" Jan 17 11:56:26.289079 systemd[1]: Started cri-containerd-5c582c8f8c181d59598372f95ee18a63e9e3b5e4b4bcad78a606d283351c232a.scope - libcontainer container 5c582c8f8c181d59598372f95ee18a63e9e3b5e4b4bcad78a606d283351c232a. Jan 17 11:56:26.315001 containerd[1440]: time="2025-01-17T11:56:26.313694734Z" level=info msg="StartContainer for \"5c582c8f8c181d59598372f95ee18a63e9e3b5e4b4bcad78a606d283351c232a\" returns successfully" Jan 17 11:56:26.322996 systemd[1]: cri-containerd-5c582c8f8c181d59598372f95ee18a63e9e3b5e4b4bcad78a606d283351c232a.scope: Deactivated successfully. 
Jan 17 11:56:26.411127 containerd[1440]: time="2025-01-17T11:56:26.411057462Z" level=info msg="shim disconnected" id=5c582c8f8c181d59598372f95ee18a63e9e3b5e4b4bcad78a606d283351c232a namespace=k8s.io Jan 17 11:56:26.411127 containerd[1440]: time="2025-01-17T11:56:26.411118183Z" level=warning msg="cleaning up after shim disconnected" id=5c582c8f8c181d59598372f95ee18a63e9e3b5e4b4bcad78a606d283351c232a namespace=k8s.io Jan 17 11:56:26.411127 containerd[1440]: time="2025-01-17T11:56:26.411127583Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 11:56:26.438493 containerd[1440]: time="2025-01-17T11:56:26.438404457Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:56:26.439550 containerd[1440]: time="2025-01-17T11:56:26.439516942Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138302" Jan 17 11:56:26.440417 containerd[1440]: time="2025-01-17T11:56:26.440370825Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 11:56:26.441784 containerd[1440]: time="2025-01-17T11:56:26.441746391Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.389498177s" Jan 17 11:56:26.441841 containerd[1440]: time="2025-01-17T11:56:26.441794631Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 17 11:56:26.445072 containerd[1440]: time="2025-01-17T11:56:26.445041165Z" level=info msg="CreateContainer within sandbox \"d601803231b00e73af1b2784798274b68a8766bf646346187f63e2942b5a137a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 17 11:56:26.453974 containerd[1440]: time="2025-01-17T11:56:26.453928842Z" level=info msg="CreateContainer within sandbox \"d601803231b00e73af1b2784798274b68a8766bf646346187f63e2942b5a137a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"eed85e29b6dc563bda75871d9a858c758d3f01d4b8a73fe0a7a8bc54d6578dda\"" Jan 17 11:56:26.457202 containerd[1440]: time="2025-01-17T11:56:26.454866286Z" level=info msg="StartContainer for \"eed85e29b6dc563bda75871d9a858c758d3f01d4b8a73fe0a7a8bc54d6578dda\"" Jan 17 11:56:26.478090 systemd[1]: Started cri-containerd-eed85e29b6dc563bda75871d9a858c758d3f01d4b8a73fe0a7a8bc54d6578dda.scope - libcontainer container eed85e29b6dc563bda75871d9a858c758d3f01d4b8a73fe0a7a8bc54d6578dda. 
Jan 17 11:56:26.504017 containerd[1440]: time="2025-01-17T11:56:26.503907852Z" level=info msg="StartContainer for \"eed85e29b6dc563bda75871d9a858c758d3f01d4b8a73fe0a7a8bc54d6578dda\" returns successfully" Jan 17 11:56:27.067249 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c582c8f8c181d59598372f95ee18a63e9e3b5e4b4bcad78a606d283351c232a-rootfs.mount: Deactivated successfully. Jan 17 11:56:27.228744 kubelet[2547]: E0117 11:56:27.228705 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:27.252491 kubelet[2547]: E0117 11:56:27.252444 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:27.257727 containerd[1440]: time="2025-01-17T11:56:27.257644828Z" level=info msg="CreateContainer within sandbox \"86f6c8f6380414ae98412f2cd708abb9bd19e1e1a51ca2672dd16fd14e1605c6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 11:56:27.316324 containerd[1440]: time="2025-01-17T11:56:27.316271419Z" level=info msg="CreateContainer within sandbox \"86f6c8f6380414ae98412f2cd708abb9bd19e1e1a51ca2672dd16fd14e1605c6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7ca1ae9212204508b167fc070320481c0e2aec5c0df4b7955123a2dbfdfc07da\"" Jan 17 11:56:27.316868 containerd[1440]: time="2025-01-17T11:56:27.316841421Z" level=info msg="StartContainer for \"7ca1ae9212204508b167fc070320481c0e2aec5c0df4b7955123a2dbfdfc07da\"" Jan 17 11:56:27.324875 kubelet[2547]: I0117 11:56:27.323735 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-bhnzp" podStartSLOduration=1.618141804 podStartE2EDuration="13.323718088s" podCreationTimestamp="2025-01-17 11:56:14 +0000 UTC" firstStartedPulling="2025-01-17 11:56:14.73685959 +0000 UTC m=+16.690632432" lastFinishedPulling="2025-01-17 11:56:26.442435874 +0000 UTC m=+28.396208716" observedRunningTime="2025-01-17 11:56:27.256780825 +0000 UTC m=+29.210553747" watchObservedRunningTime="2025-01-17 11:56:27.323718088 +0000 UTC m=+29.277490890" Jan 17 11:56:27.359090 systemd[1]: Started cri-containerd-7ca1ae9212204508b167fc070320481c0e2aec5c0df4b7955123a2dbfdfc07da.scope - libcontainer container 7ca1ae9212204508b167fc070320481c0e2aec5c0df4b7955123a2dbfdfc07da. Jan 17 11:56:27.416701 systemd[1]: cri-containerd-7ca1ae9212204508b167fc070320481c0e2aec5c0df4b7955123a2dbfdfc07da.scope: Deactivated successfully. 
Jan 17 11:56:27.418133 containerd[1440]: time="2025-01-17T11:56:27.417285216Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13588dc8_6163_402e_85fd_bedbe38684ff.slice/cri-containerd-7ca1ae9212204508b167fc070320481c0e2aec5c0df4b7955123a2dbfdfc07da.scope/memory.events\": no such file or directory" Jan 17 11:56:27.420251 containerd[1440]: time="2025-01-17T11:56:27.420186828Z" level=info msg="StartContainer for \"7ca1ae9212204508b167fc070320481c0e2aec5c0df4b7955123a2dbfdfc07da\" returns successfully" Jan 17 11:56:27.444559 containerd[1440]: time="2025-01-17T11:56:27.444497684Z" level=info msg="shim disconnected" id=7ca1ae9212204508b167fc070320481c0e2aec5c0df4b7955123a2dbfdfc07da namespace=k8s.io Jan 17 11:56:27.444559 containerd[1440]: time="2025-01-17T11:56:27.444550964Z" level=warning msg="cleaning up after shim disconnected" id=7ca1ae9212204508b167fc070320481c0e2aec5c0df4b7955123a2dbfdfc07da namespace=k8s.io Jan 17 11:56:27.444559 containerd[1440]: time="2025-01-17T11:56:27.444561324Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 11:56:27.692334 systemd[1]: Started sshd@7-10.0.0.10:22-10.0.0.1:34322.service - OpenSSH per-connection server daemon (10.0.0.1:34322). Jan 17 11:56:27.738545 sshd[3249]: Accepted publickey for core from 10.0.0.1 port 34322 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:56:27.739884 sshd[3249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:56:27.743702 systemd-logind[1413]: New session 8 of user core. Jan 17 11:56:27.755089 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 11:56:27.902932 sshd[3249]: pam_unix(sshd:session): session closed for user core Jan 17 11:56:27.905768 systemd[1]: sshd@7-10.0.0.10:22-10.0.0.1:34322.service: Deactivated successfully. Jan 17 11:56:27.908587 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 11:56:27.909621 systemd-logind[1413]: Session 8 logged out. Waiting for processes to exit. Jan 17 11:56:27.911669 systemd-logind[1413]: Removed session 8. Jan 17 11:56:28.066615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ca1ae9212204508b167fc070320481c0e2aec5c0df4b7955123a2dbfdfc07da-rootfs.mount: Deactivated successfully. 
Jan 17 11:56:28.260148 kubelet[2547]: E0117 11:56:28.259542 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:28.260148 kubelet[2547]: E0117 11:56:28.259577 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:28.261970 containerd[1440]: time="2025-01-17T11:56:28.261932796Z" level=info msg="CreateContainer within sandbox \"86f6c8f6380414ae98412f2cd708abb9bd19e1e1a51ca2672dd16fd14e1605c6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 11:56:28.284198 containerd[1440]: time="2025-01-17T11:56:28.284135438Z" level=info msg="CreateContainer within sandbox \"86f6c8f6380414ae98412f2cd708abb9bd19e1e1a51ca2672dd16fd14e1605c6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ede7c8cfb6851f0e9d8503559d56d64ab076215e2b958ae27e7ee25df5e86409\"" Jan 17 11:56:28.285708 containerd[1440]: time="2025-01-17T11:56:28.284833720Z" level=info msg="StartContainer for \"ede7c8cfb6851f0e9d8503559d56d64ab076215e2b958ae27e7ee25df5e86409\"" Jan 17 11:56:28.316980 systemd[1]: Started cri-containerd-ede7c8cfb6851f0e9d8503559d56d64ab076215e2b958ae27e7ee25df5e86409.scope - libcontainer container ede7c8cfb6851f0e9d8503559d56d64ab076215e2b958ae27e7ee25df5e86409. Jan 17 11:56:28.348962 containerd[1440]: time="2025-01-17T11:56:28.348892797Z" level=info msg="StartContainer for \"ede7c8cfb6851f0e9d8503559d56d64ab076215e2b958ae27e7ee25df5e86409\" returns successfully" Jan 17 11:56:28.458205 kubelet[2547]: I0117 11:56:28.458177 2547 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 11:56:28.481131 kubelet[2547]: I0117 11:56:28.481077 2547 topology_manager.go:215] "Topology Admit Handler" podUID="49ee58de-e7f7-4832-8607-f97032988e37" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lgvh2" Jan 17 11:56:28.489581 kubelet[2547]: I0117 11:56:28.489279 2547 topology_manager.go:215] "Topology Admit Handler" podUID="ffe073c2-7b1d-4ddf-8d76-4600587e1596" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vmscc" Jan 17 11:56:28.495122 systemd[1]: Created slice kubepods-burstable-pod49ee58de_e7f7_4832_8607_f97032988e37.slice - libcontainer container kubepods-burstable-pod49ee58de_e7f7_4832_8607_f97032988e37.slice. Jan 17 11:56:28.503385 systemd[1]: Created slice kubepods-burstable-podffe073c2_7b1d_4ddf_8d76_4600587e1596.slice - libcontainer container kubepods-burstable-podffe073c2_7b1d_4ddf_8d76_4600587e1596.slice. 
Jan 17 11:56:28.617362 kubelet[2547]: I0117 11:56:28.617284 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffe073c2-7b1d-4ddf-8d76-4600587e1596-config-volume\") pod \"coredns-7db6d8ff4d-vmscc\" (UID: \"ffe073c2-7b1d-4ddf-8d76-4600587e1596\") " pod="kube-system/coredns-7db6d8ff4d-vmscc" Jan 17 11:56:28.617362 kubelet[2547]: I0117 11:56:28.617322 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fffh5\" (UniqueName: \"kubernetes.io/projected/ffe073c2-7b1d-4ddf-8d76-4600587e1596-kube-api-access-fffh5\") pod \"coredns-7db6d8ff4d-vmscc\" (UID: \"ffe073c2-7b1d-4ddf-8d76-4600587e1596\") " pod="kube-system/coredns-7db6d8ff4d-vmscc" Jan 17 11:56:28.617362 kubelet[2547]: I0117 11:56:28.617345 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv7nq\" (UniqueName: \"kubernetes.io/projected/49ee58de-e7f7-4832-8607-f97032988e37-kube-api-access-wv7nq\") pod \"coredns-7db6d8ff4d-lgvh2\" (UID: \"49ee58de-e7f7-4832-8607-f97032988e37\") " pod="kube-system/coredns-7db6d8ff4d-lgvh2" Jan 17 11:56:28.617362 kubelet[2547]: I0117 11:56:28.617362 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49ee58de-e7f7-4832-8607-f97032988e37-config-volume\") pod \"coredns-7db6d8ff4d-lgvh2\" (UID: \"49ee58de-e7f7-4832-8607-f97032988e37\") " pod="kube-system/coredns-7db6d8ff4d-lgvh2" Jan 17 11:56:28.800802 kubelet[2547]: E0117 11:56:28.800682 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:28.801667 containerd[1440]: time="2025-01-17T11:56:28.801622827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lgvh2,Uid:49ee58de-e7f7-4832-8607-f97032988e37,Namespace:kube-system,Attempt:0,}" Jan 17 11:56:28.807905 kubelet[2547]: E0117 11:56:28.807857 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:28.808399 containerd[1440]: time="2025-01-17T11:56:28.808361772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vmscc,Uid:ffe073c2-7b1d-4ddf-8d76-4600587e1596,Namespace:kube-system,Attempt:0,}" Jan 17 11:56:29.268526 kubelet[2547]: E0117 11:56:29.268307 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:29.285670 kubelet[2547]: I0117 11:56:29.285611 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lwcjq" podStartSLOduration=5.805643263 podStartE2EDuration="15.285594346s" podCreationTimestamp="2025-01-17 11:56:14 +0000 UTC" firstStartedPulling="2025-01-17 11:56:14.571959209 +0000 UTC m=+16.525732051" lastFinishedPulling="2025-01-17 11:56:24.051910292 +0000 UTC m=+26.005683134" observedRunningTime="2025-01-17 11:56:29.284187222 +0000 UTC m=+31.237960064" watchObservedRunningTime="2025-01-17 11:56:29.285594346 +0000 UTC m=+31.239367148"
Jan 17 11:56:30.265725 kubelet[2547]: E0117 11:56:30.265687 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:30.538489 systemd-networkd[1373]: cilium_host: Link UP Jan 17 11:56:30.538651 systemd-networkd[1373]: cilium_net: Link UP Jan 17 11:56:30.538807 systemd-networkd[1373]: cilium_net: Gained carrier Jan 17 11:56:30.538943 systemd-networkd[1373]: cilium_host: Gained carrier Jan 17 11:56:30.539566 systemd-networkd[1373]: cilium_net: Gained IPv6LL Jan 17 11:56:30.539966 systemd-networkd[1373]: cilium_host: Gained IPv6LL Jan 17 11:56:30.630982 systemd-networkd[1373]: cilium_vxlan: Link UP Jan 17 11:56:30.630989 systemd-networkd[1373]: cilium_vxlan: Gained carrier Jan 17 11:56:30.934958 kernel: NET: Registered PF_ALG protocol family Jan 17 11:56:31.267679 kubelet[2547]: E0117 11:56:31.267572 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:31.529866 systemd-networkd[1373]: lxc_health: Link UP Jan 17 11:56:31.534772 systemd-networkd[1373]: lxc_health: Gained carrier Jan 17 11:56:31.942022 systemd-networkd[1373]: lxcd19d85a1ee1c: Link UP Jan 17 11:56:31.952011 kernel: eth0: renamed from tmp0201a Jan 17 11:56:31.957958 systemd-networkd[1373]: lxcd19d85a1ee1c: Gained carrier Jan 17 11:56:31.959111 systemd-networkd[1373]: lxcecc645638998: Link UP Jan 17 11:56:31.970384 kernel: eth0: renamed from tmpfdd2b Jan 17 11:56:31.977337 systemd-networkd[1373]: lxcecc645638998: Gained carrier Jan 17 11:56:32.269183 kubelet[2547]: E0117 11:56:32.269075 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:32.271032 systemd-networkd[1373]: cilium_vxlan: Gained IPv6LL Jan 17 11:56:32.848044 systemd-networkd[1373]: lxc_health: Gained IPv6LL Jan 17 11:56:32.916621 systemd[1]: Started sshd@8-10.0.0.10:22-10.0.0.1:45710.service - OpenSSH per-connection server daemon (10.0.0.1:45710). Jan 17 11:56:32.962960 sshd[3789]: Accepted publickey for core from 10.0.0.1 port 45710 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:56:32.964305 sshd[3789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:56:32.968818 systemd-logind[1413]: New session 9 of user core. Jan 17 11:56:32.977126 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 11:56:33.109908 sshd[3789]: pam_unix(sshd:session): session closed for user core Jan 17 11:56:33.113331 systemd[1]: sshd@8-10.0.0.10:22-10.0.0.1:45710.service: Deactivated successfully. Jan 17 11:56:33.114901 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 11:56:33.116453 systemd-logind[1413]: Session 9 logged out. Waiting for processes to exit. Jan 17 11:56:33.117254 systemd-logind[1413]: Removed session 9.
Jan 17 11:56:33.271338 kubelet[2547]: E0117 11:56:33.271299 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:33.295386 systemd-networkd[1373]: lxcd19d85a1ee1c: Gained IPv6LL Jan 17 11:56:33.423287 systemd-networkd[1373]: lxcecc645638998: Gained IPv6LL Jan 17 11:56:34.272843 kubelet[2547]: E0117 11:56:34.272661 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:35.476581 containerd[1440]: time="2025-01-17T11:56:35.476471964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 11:56:35.476581 containerd[1440]: time="2025-01-17T11:56:35.476537244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 11:56:35.476581 containerd[1440]: time="2025-01-17T11:56:35.476555444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:56:35.477012 containerd[1440]: time="2025-01-17T11:56:35.476680284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:56:35.478213 containerd[1440]: time="2025-01-17T11:56:35.478038968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 11:56:35.478213 containerd[1440]: time="2025-01-17T11:56:35.478086128Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 11:56:35.478213 containerd[1440]: time="2025-01-17T11:56:35.478097488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:56:35.478213 containerd[1440]: time="2025-01-17T11:56:35.478171648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:56:35.506125 systemd[1]: Started cri-containerd-0201aa43b2352f1195dae01b5a2230052b997cff8ccfc0fae83c616b554ae13a.scope - libcontainer container 0201aa43b2352f1195dae01b5a2230052b997cff8ccfc0fae83c616b554ae13a. Jan 17 11:56:35.507463 systemd[1]: Started cri-containerd-fdd2bd64f02d698c0b32a874765f71bcdd1c74cffc8cdc61e2959b1627c092ac.scope - libcontainer container fdd2bd64f02d698c0b32a874765f71bcdd1c74cffc8cdc61e2959b1627c092ac. 
Jan 17 11:56:35.518487 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 11:56:35.522556 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 11:56:35.537056 containerd[1440]: time="2025-01-17T11:56:35.537009906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vmscc,Uid:ffe073c2-7b1d-4ddf-8d76-4600587e1596,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdd2bd64f02d698c0b32a874765f71bcdd1c74cffc8cdc61e2959b1627c092ac\"" Jan 17 11:56:35.537713 kubelet[2547]: E0117 11:56:35.537687 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:35.540215 containerd[1440]: time="2025-01-17T11:56:35.540034713Z" level=info msg="CreateContainer within sandbox \"fdd2bd64f02d698c0b32a874765f71bcdd1c74cffc8cdc61e2959b1627c092ac\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 11:56:35.545442 containerd[1440]: time="2025-01-17T11:56:35.545413246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lgvh2,Uid:49ee58de-e7f7-4832-8607-f97032988e37,Namespace:kube-system,Attempt:0,} returns sandbox id \"0201aa43b2352f1195dae01b5a2230052b997cff8ccfc0fae83c616b554ae13a\"" Jan 17 11:56:35.546163 kubelet[2547]: E0117 11:56:35.546143 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:35.549228 containerd[1440]: time="2025-01-17T11:56:35.549196335Z" level=info msg="CreateContainer within sandbox \"0201aa43b2352f1195dae01b5a2230052b997cff8ccfc0fae83c616b554ae13a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 11:56:35.561496 containerd[1440]: time="2025-01-17T11:56:35.560417801Z" level=info msg="CreateContainer within sandbox \"0201aa43b2352f1195dae01b5a2230052b997cff8ccfc0fae83c616b554ae13a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3af415cc80a8042bb3fd3c2a8d337aa42a299ca5b9264e89d9dc5656fc18424e\"" Jan 17 11:56:35.562581 containerd[1440]: time="2025-01-17T11:56:35.562549606Z" level=info msg="StartContainer for \"3af415cc80a8042bb3fd3c2a8d337aa42a299ca5b9264e89d9dc5656fc18424e\"" Jan 17 11:56:35.565383 containerd[1440]: time="2025-01-17T11:56:35.565278213Z" level=info msg="CreateContainer within sandbox \"fdd2bd64f02d698c0b32a874765f71bcdd1c74cffc8cdc61e2959b1627c092ac\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"00bf72936e1f8fe1f69ed46d3c11af0a3c7d8ad5aebd9753155bbf3eb14eb07c\"" Jan 17 11:56:35.566696 containerd[1440]: time="2025-01-17T11:56:35.565897054Z" level=info msg="StartContainer for \"00bf72936e1f8fe1f69ed46d3c11af0a3c7d8ad5aebd9753155bbf3eb14eb07c\"" Jan 17 11:56:35.592093 systemd[1]: Started cri-containerd-3af415cc80a8042bb3fd3c2a8d337aa42a299ca5b9264e89d9dc5656fc18424e.scope - libcontainer container 3af415cc80a8042bb3fd3c2a8d337aa42a299ca5b9264e89d9dc5656fc18424e. Jan 17 11:56:35.594883 systemd[1]: Started cri-containerd-00bf72936e1f8fe1f69ed46d3c11af0a3c7d8ad5aebd9753155bbf3eb14eb07c.scope - libcontainer container 00bf72936e1f8fe1f69ed46d3c11af0a3c7d8ad5aebd9753155bbf3eb14eb07c. 
Jan 17 11:56:35.635872 containerd[1440]: time="2025-01-17T11:56:35.635828738Z" level=info msg="StartContainer for \"3af415cc80a8042bb3fd3c2a8d337aa42a299ca5b9264e89d9dc5656fc18424e\" returns successfully" Jan 17 11:56:35.636001 containerd[1440]: time="2025-01-17T11:56:35.635926738Z" level=info msg="StartContainer for \"00bf72936e1f8fe1f69ed46d3c11af0a3c7d8ad5aebd9753155bbf3eb14eb07c\" returns successfully" Jan 17 11:56:36.280192 kubelet[2547]: E0117 11:56:36.280012 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:36.282319 kubelet[2547]: E0117 11:56:36.281896 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:36.299730 kubelet[2547]: I0117 11:56:36.298497 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-vmscc" podStartSLOduration=22.29848409 podStartE2EDuration="22.29848409s" podCreationTimestamp="2025-01-17 11:56:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 11:56:36.297790289 +0000 UTC m=+38.251563131" watchObservedRunningTime="2025-01-17 11:56:36.29848409 +0000 UTC m=+38.252256892" Jan 17 11:56:36.310979 kubelet[2547]: I0117 11:56:36.310764 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-lgvh2" podStartSLOduration=22.310747037 podStartE2EDuration="22.310747037s" podCreationTimestamp="2025-01-17 11:56:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 11:56:36.310532517 +0000 UTC m=+38.264305319" watchObservedRunningTime="2025-01-17 11:56:36.310747037 +0000 UTC m=+38.264519839" Jan 17 11:56:37.283292 kubelet[2547]: E0117 11:56:37.283176 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:37.283292 kubelet[2547]: E0117 11:56:37.283238 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:38.123186 systemd[1]: Started sshd@9-10.0.0.10:22-10.0.0.1:45726.service - OpenSSH per-connection server daemon (10.0.0.1:45726). Jan 17 11:56:38.168717 sshd[3983]: Accepted publickey for core from 10.0.0.1 port 45726 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:56:38.170179 sshd[3983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:56:38.174910 systemd-logind[1413]: New session 10 of user core. Jan 17 11:56:38.183048 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 17 11:56:38.284222 kubelet[2547]: E0117 11:56:38.284192 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:38.284563 kubelet[2547]: E0117 11:56:38.284311 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:56:38.294085 sshd[3983]: pam_unix(sshd:session): session closed for user core Jan 17 11:56:38.306414 systemd[1]: sshd@9-10.0.0.10:22-10.0.0.1:45726.service: Deactivated successfully. Jan 17 11:56:38.309463 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 11:56:38.310988 systemd-logind[1413]: Session 10 logged out. Waiting for processes to exit. Jan 17 11:56:38.321153 systemd[1]: Started sshd@10-10.0.0.10:22-10.0.0.1:45740.service - OpenSSH per-connection server daemon (10.0.0.1:45740). Jan 17 11:56:38.322284 systemd-logind[1413]: Removed session 10. Jan 17 11:56:38.355952 sshd[4000]: Accepted publickey for core from 10.0.0.1 port 45740 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:56:38.357450 sshd[4000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:56:38.361299 systemd-logind[1413]: New session 11 of user core. Jan 17 11:56:38.372146 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 11:56:38.516539 sshd[4000]: pam_unix(sshd:session): session closed for user core Jan 17 11:56:38.533059 systemd[1]: sshd@10-10.0.0.10:22-10.0.0.1:45740.service: Deactivated successfully. Jan 17 11:56:38.537066 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 11:56:38.539067 systemd-logind[1413]: Session 11 logged out. Waiting for processes to exit. Jan 17 11:56:38.548365 systemd[1]: Started sshd@11-10.0.0.10:22-10.0.0.1:45746.service - OpenSSH per-connection server daemon (10.0.0.1:45746). Jan 17 11:56:38.549330 systemd-logind[1413]: Removed session 11. Jan 17 11:56:38.586800 sshd[4012]: Accepted publickey for core from 10.0.0.1 port 45746 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:56:38.588137 sshd[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:56:38.591641 systemd-logind[1413]: New session 12 of user core. Jan 17 11:56:38.602063 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 11:56:38.713064 sshd[4012]: pam_unix(sshd:session): session closed for user core Jan 17 11:56:38.717011 systemd[1]: sshd@11-10.0.0.10:22-10.0.0.1:45746.service: Deactivated successfully. Jan 17 11:56:38.720469 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 11:56:38.722834 systemd-logind[1413]: Session 12 logged out. Waiting for processes to exit. Jan 17 11:56:38.724171 systemd-logind[1413]: Removed session 12. Jan 17 11:56:43.726746 systemd[1]: Started sshd@12-10.0.0.10:22-10.0.0.1:54162.service - OpenSSH per-connection server daemon (10.0.0.1:54162). Jan 17 11:56:43.770868 sshd[4027]: Accepted publickey for core from 10.0.0.1 port 54162 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:56:43.772133 sshd[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:56:43.775839 systemd-logind[1413]: New session 13 of user core. Jan 17 11:56:43.788060 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 17 11:56:43.902252 sshd[4027]: pam_unix(sshd:session): session closed for user core Jan 17 11:56:43.905709 systemd[1]: sshd@12-10.0.0.10:22-10.0.0.1:54162.service: Deactivated successfully. Jan 17 11:56:43.908004 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 11:56:43.909580 systemd-logind[1413]: Session 13 logged out. Waiting for processes to exit. Jan 17 11:56:43.910390 systemd-logind[1413]: Removed session 13. Jan 17 11:56:48.914347 systemd[1]: Started sshd@13-10.0.0.10:22-10.0.0.1:54174.service - OpenSSH per-connection server daemon (10.0.0.1:54174). Jan 17 11:56:48.950685 sshd[4044]: Accepted publickey for core from 10.0.0.1 port 54174 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:56:48.951811 sshd[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:56:48.955379 systemd-logind[1413]: New session 14 of user core. Jan 17 11:56:48.965056 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 11:56:49.073137 sshd[4044]: pam_unix(sshd:session): session closed for user core Jan 17 11:56:49.085388 systemd[1]: sshd@13-10.0.0.10:22-10.0.0.1:54174.service: Deactivated successfully. Jan 17 11:56:49.088301 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 11:56:49.089826 systemd-logind[1413]: Session 14 logged out. Waiting for processes to exit. Jan 17 11:56:49.101345 systemd[1]: Started sshd@14-10.0.0.10:22-10.0.0.1:54184.service - OpenSSH per-connection server daemon (10.0.0.1:54184). Jan 17 11:56:49.102671 systemd-logind[1413]: Removed session 14. Jan 17 11:56:49.134658 sshd[4058]: Accepted publickey for core from 10.0.0.1 port 54184 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:56:49.136021 sshd[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:56:49.139635 systemd-logind[1413]: New session 15 of user core. Jan 17 11:56:49.148067 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 11:56:49.379332 sshd[4058]: pam_unix(sshd:session): session closed for user core Jan 17 11:56:49.388408 systemd[1]: sshd@14-10.0.0.10:22-10.0.0.1:54184.service: Deactivated successfully. Jan 17 11:56:49.390137 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 11:56:49.392350 systemd-logind[1413]: Session 15 logged out. Waiting for processes to exit. Jan 17 11:56:49.405502 systemd[1]: Started sshd@15-10.0.0.10:22-10.0.0.1:54192.service - OpenSSH per-connection server daemon (10.0.0.1:54192). Jan 17 11:56:49.406709 systemd-logind[1413]: Removed session 15. Jan 17 11:56:49.447986 sshd[4070]: Accepted publickey for core from 10.0.0.1 port 54192 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:56:49.449424 sshd[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:56:49.453593 systemd-logind[1413]: New session 16 of user core. Jan 17 11:56:49.470130 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 11:56:50.797550 sshd[4070]: pam_unix(sshd:session): session closed for user core Jan 17 11:56:50.810254 systemd[1]: sshd@15-10.0.0.10:22-10.0.0.1:54192.service: Deactivated successfully. Jan 17 11:56:50.812678 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 11:56:50.813787 systemd-logind[1413]: Session 16 logged out. Waiting for processes to exit. Jan 17 11:56:50.828452 systemd[1]: Started sshd@16-10.0.0.10:22-10.0.0.1:54204.service - OpenSSH per-connection server daemon (10.0.0.1:54204). 
Jan 17 11:56:50.829807 systemd-logind[1413]: Removed session 16. Jan 17 11:56:50.861104 sshd[4091]: Accepted publickey for core from 10.0.0.1 port 54204 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:56:50.862208 sshd[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:56:50.866520 systemd-logind[1413]: New session 17 of user core. Jan 17 11:56:50.879066 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 11:56:51.102123 sshd[4091]: pam_unix(sshd:session): session closed for user core Jan 17 11:56:51.112354 systemd[1]: sshd@16-10.0.0.10:22-10.0.0.1:54204.service: Deactivated successfully. Jan 17 11:56:51.117745 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 11:56:51.121179 systemd-logind[1413]: Session 17 logged out. Waiting for processes to exit. Jan 17 11:56:51.133613 systemd[1]: Started sshd@17-10.0.0.10:22-10.0.0.1:54214.service - OpenSSH per-connection server daemon (10.0.0.1:54214). Jan 17 11:56:51.135722 systemd-logind[1413]: Removed session 17. Jan 17 11:56:51.166174 sshd[4103]: Accepted publickey for core from 10.0.0.1 port 54214 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:56:51.167468 sshd[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:56:51.171530 systemd-logind[1413]: New session 18 of user core. Jan 17 11:56:51.179117 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 11:56:51.285980 sshd[4103]: pam_unix(sshd:session): session closed for user core Jan 17 11:56:51.289465 systemd[1]: sshd@17-10.0.0.10:22-10.0.0.1:54214.service: Deactivated successfully. Jan 17 11:56:51.291714 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 11:56:51.292737 systemd-logind[1413]: Session 18 logged out. Waiting for processes to exit. Jan 17 11:56:51.293624 systemd-logind[1413]: Removed session 18. Jan 17 11:56:56.296702 systemd[1]: Started sshd@18-10.0.0.10:22-10.0.0.1:35542.service - OpenSSH per-connection server daemon (10.0.0.1:35542). Jan 17 11:56:56.334430 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 35542 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:56:56.335714 sshd[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:56:56.339216 systemd-logind[1413]: New session 19 of user core. Jan 17 11:56:56.356138 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 11:56:56.458317 sshd[4121]: pam_unix(sshd:session): session closed for user core Jan 17 11:56:56.461841 systemd[1]: sshd@18-10.0.0.10:22-10.0.0.1:35542.service: Deactivated successfully. Jan 17 11:56:56.464417 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 11:56:56.465499 systemd-logind[1413]: Session 19 logged out. Waiting for processes to exit. Jan 17 11:56:56.466385 systemd-logind[1413]: Removed session 19. Jan 17 11:57:01.468707 systemd[1]: Started sshd@19-10.0.0.10:22-10.0.0.1:35550.service - OpenSSH per-connection server daemon (10.0.0.1:35550). Jan 17 11:57:01.505088 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 35550 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:57:01.506381 sshd[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:57:01.509941 systemd-logind[1413]: New session 20 of user core. Jan 17 11:57:01.516049 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 17 11:57:01.618695 sshd[4138]: pam_unix(sshd:session): session closed for user core Jan 17 11:57:01.622356 systemd[1]: sshd@19-10.0.0.10:22-10.0.0.1:35550.service: Deactivated successfully. Jan 17 11:57:01.624424 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 11:57:01.625272 systemd-logind[1413]: Session 20 logged out. Waiting for processes to exit. Jan 17 11:57:01.626092 systemd-logind[1413]: Removed session 20. Jan 17 11:57:06.631176 systemd[1]: Started sshd@20-10.0.0.10:22-10.0.0.1:34396.service - OpenSSH per-connection server daemon (10.0.0.1:34396). Jan 17 11:57:06.669722 sshd[4152]: Accepted publickey for core from 10.0.0.1 port 34396 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:57:06.672279 sshd[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:57:06.677857 systemd-logind[1413]: New session 21 of user core. Jan 17 11:57:06.685115 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 11:57:06.795585 sshd[4152]: pam_unix(sshd:session): session closed for user core Jan 17 11:57:06.804402 systemd[1]: sshd@20-10.0.0.10:22-10.0.0.1:34396.service: Deactivated successfully. Jan 17 11:57:06.806073 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 11:57:06.807544 systemd-logind[1413]: Session 21 logged out. Waiting for processes to exit. Jan 17 11:57:06.813163 systemd[1]: Started sshd@21-10.0.0.10:22-10.0.0.1:34408.service - OpenSSH per-connection server daemon (10.0.0.1:34408). Jan 17 11:57:06.814080 systemd-logind[1413]: Removed session 21. Jan 17 11:57:06.848369 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 34408 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:57:06.850135 sshd[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:57:06.854544 systemd-logind[1413]: New session 22 of user core. Jan 17 11:57:06.868293 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 11:57:09.205365 containerd[1440]: time="2025-01-17T11:57:09.205292743Z" level=info msg="StopContainer for \"eed85e29b6dc563bda75871d9a858c758d3f01d4b8a73fe0a7a8bc54d6578dda\" with timeout 30 (s)" Jan 17 11:57:09.207251 containerd[1440]: time="2025-01-17T11:57:09.206733897Z" level=info msg="Stop container \"eed85e29b6dc563bda75871d9a858c758d3f01d4b8a73fe0a7a8bc54d6578dda\" with signal terminated" Jan 17 11:57:09.219711 systemd[1]: cri-containerd-eed85e29b6dc563bda75871d9a858c758d3f01d4b8a73fe0a7a8bc54d6578dda.scope: Deactivated successfully. Jan 17 11:57:09.242557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eed85e29b6dc563bda75871d9a858c758d3f01d4b8a73fe0a7a8bc54d6578dda-rootfs.mount: Deactivated successfully. 
Jan 17 11:57:09.252687 containerd[1440]: time="2025-01-17T11:57:09.252645413Z" level=info msg="StopContainer for \"ede7c8cfb6851f0e9d8503559d56d64ab076215e2b958ae27e7ee25df5e86409\" with timeout 2 (s)" Jan 17 11:57:09.253459 containerd[1440]: time="2025-01-17T11:57:09.253204106Z" level=info msg="shim disconnected" id=eed85e29b6dc563bda75871d9a858c758d3f01d4b8a73fe0a7a8bc54d6578dda namespace=k8s.io Jan 17 11:57:09.253459 containerd[1440]: time="2025-01-17T11:57:09.253251987Z" level=warning msg="cleaning up after shim disconnected" id=eed85e29b6dc563bda75871d9a858c758d3f01d4b8a73fe0a7a8bc54d6578dda namespace=k8s.io Jan 17 11:57:09.253459 containerd[1440]: time="2025-01-17T11:57:09.253262307Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 11:57:09.253853 containerd[1440]: time="2025-01-17T11:57:09.253675757Z" level=info msg="Stop container \"ede7c8cfb6851f0e9d8503559d56d64ab076215e2b958ae27e7ee25df5e86409\" with signal terminated" Jan 17 11:57:09.258174 containerd[1440]: time="2025-01-17T11:57:09.258126141Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 11:57:09.259731 systemd-networkd[1373]: lxc_health: Link DOWN Jan 17 11:57:09.259738 systemd-networkd[1373]: lxc_health: Lost carrier Jan 17 11:57:09.288670 systemd[1]: cri-containerd-ede7c8cfb6851f0e9d8503559d56d64ab076215e2b958ae27e7ee25df5e86409.scope: Deactivated successfully. Jan 17 11:57:09.289156 systemd[1]: cri-containerd-ede7c8cfb6851f0e9d8503559d56d64ab076215e2b958ae27e7ee25df5e86409.scope: Consumed 6.387s CPU time. Jan 17 11:57:09.292812 containerd[1440]: time="2025-01-17T11:57:09.292743712Z" level=warning msg="cleanup warnings time=\"2025-01-17T11:57:09Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 11:57:09.297191 containerd[1440]: time="2025-01-17T11:57:09.297151696Z" level=info msg="StopContainer for \"eed85e29b6dc563bda75871d9a858c758d3f01d4b8a73fe0a7a8bc54d6578dda\" returns successfully" Jan 17 11:57:09.298040 containerd[1440]: time="2025-01-17T11:57:09.297954755Z" level=info msg="StopPodSandbox for \"d601803231b00e73af1b2784798274b68a8766bf646346187f63e2942b5a137a\"" Jan 17 11:57:09.298040 containerd[1440]: time="2025-01-17T11:57:09.297991995Z" level=info msg="Container to stop \"eed85e29b6dc563bda75871d9a858c758d3f01d4b8a73fe0a7a8bc54d6578dda\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 11:57:09.299612 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d601803231b00e73af1b2784798274b68a8766bf646346187f63e2942b5a137a-shm.mount: Deactivated successfully. Jan 17 11:57:09.313352 systemd[1]: cri-containerd-d601803231b00e73af1b2784798274b68a8766bf646346187f63e2942b5a137a.scope: Deactivated successfully. Jan 17 11:57:09.332103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ede7c8cfb6851f0e9d8503559d56d64ab076215e2b958ae27e7ee25df5e86409-rootfs.mount: Deactivated successfully. 
Jan 17 11:57:09.337359 containerd[1440]: time="2025-01-17T11:57:09.337299917Z" level=info msg="shim disconnected" id=ede7c8cfb6851f0e9d8503559d56d64ab076215e2b958ae27e7ee25df5e86409 namespace=k8s.io Jan 17 11:57:09.337359 containerd[1440]: time="2025-01-17T11:57:09.337357918Z" level=warning msg="cleaning up after shim disconnected" id=ede7c8cfb6851f0e9d8503559d56d64ab076215e2b958ae27e7ee25df5e86409 namespace=k8s.io Jan 17 11:57:09.337527 containerd[1440]: time="2025-01-17T11:57:09.337367278Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 11:57:09.337858 containerd[1440]: time="2025-01-17T11:57:09.337619364Z" level=info msg="shim disconnected" id=d601803231b00e73af1b2784798274b68a8766bf646346187f63e2942b5a137a namespace=k8s.io Jan 17 11:57:09.337858 containerd[1440]: time="2025-01-17T11:57:09.337660765Z" level=warning msg="cleaning up after shim disconnected" id=d601803231b00e73af1b2784798274b68a8766bf646346187f63e2942b5a137a namespace=k8s.io Jan 17 11:57:09.337858 containerd[1440]: time="2025-01-17T11:57:09.337668805Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 11:57:09.350224 containerd[1440]: time="2025-01-17T11:57:09.350179139Z" level=warning msg="cleanup warnings time=\"2025-01-17T11:57:09Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 11:57:09.352879 containerd[1440]: time="2025-01-17T11:57:09.352844041Z" level=info msg="TearDown network for sandbox \"d601803231b00e73af1b2784798274b68a8766bf646346187f63e2942b5a137a\" successfully" Jan 17 11:57:09.353013 containerd[1440]: time="2025-01-17T11:57:09.352994325Z" level=info msg="StopPodSandbox for \"d601803231b00e73af1b2784798274b68a8766bf646346187f63e2942b5a137a\" returns successfully" Jan 17 11:57:09.358197 containerd[1440]: time="2025-01-17T11:57:09.358168046Z" level=info msg="StopContainer for \"ede7c8cfb6851f0e9d8503559d56d64ab076215e2b958ae27e7ee25df5e86409\" returns successfully" Jan 17 11:57:09.359028 containerd[1440]: time="2025-01-17T11:57:09.358877343Z" level=info msg="StopPodSandbox for \"86f6c8f6380414ae98412f2cd708abb9bd19e1e1a51ca2672dd16fd14e1605c6\"" Jan 17 11:57:09.359028 containerd[1440]: time="2025-01-17T11:57:09.358943464Z" level=info msg="Container to stop \"3ddae8854e3be939902ee52ed1b5e5f62cc43cd70eab8e7423f550e5ba6e3b5f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 11:57:09.359028 containerd[1440]: time="2025-01-17T11:57:09.358957264Z" level=info msg="Container to stop \"f636b41f7fd784175e2b3e256ea452c32cca00f5a80ce6d02e0275677b6d0b57\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 11:57:09.359028 containerd[1440]: time="2025-01-17T11:57:09.358966825Z" level=info msg="Container to stop \"ede7c8cfb6851f0e9d8503559d56d64ab076215e2b958ae27e7ee25df5e86409\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 11:57:09.359028 containerd[1440]: time="2025-01-17T11:57:09.358977065Z" level=info msg="Container to stop \"5c582c8f8c181d59598372f95ee18a63e9e3b5e4b4bcad78a606d283351c232a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 11:57:09.359028 containerd[1440]: time="2025-01-17T11:57:09.358986305Z" level=info msg="Container to stop \"7ca1ae9212204508b167fc070320481c0e2aec5c0df4b7955123a2dbfdfc07da\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 11:57:09.369327 systemd[1]: cri-containerd-86f6c8f6380414ae98412f2cd708abb9bd19e1e1a51ca2672dd16fd14e1605c6.scope: Deactivated successfully. Jan 17 11:57:09.392084 containerd[1440]: time="2025-01-17T11:57:09.392028880Z" level=info msg="shim disconnected" id=86f6c8f6380414ae98412f2cd708abb9bd19e1e1a51ca2672dd16fd14e1605c6 namespace=k8s.io Jan 17 11:57:09.392084 containerd[1440]: time="2025-01-17T11:57:09.392079721Z" level=warning msg="cleaning up after shim disconnected" id=86f6c8f6380414ae98412f2cd708abb9bd19e1e1a51ca2672dd16fd14e1605c6 namespace=k8s.io Jan 17 11:57:09.392084 containerd[1440]: time="2025-01-17T11:57:09.392088601Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 11:57:09.403315 containerd[1440]: time="2025-01-17T11:57:09.403188341Z" level=info msg="TearDown network for sandbox \"86f6c8f6380414ae98412f2cd708abb9bd19e1e1a51ca2672dd16fd14e1605c6\" successfully" Jan 17 11:57:09.403315 containerd[1440]: time="2025-01-17T11:57:09.403227582Z" level=info msg="StopPodSandbox for \"86f6c8f6380414ae98412f2cd708abb9bd19e1e1a51ca2672dd16fd14e1605c6\" returns successfully" Jan 17 11:57:09.473497 kubelet[2547]: I0117 11:57:09.473281 2547 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggk85\" (UniqueName: \"kubernetes.io/projected/60b1dc8b-14f4-44bd-bbcb-4a44e213a7d3-kube-api-access-ggk85\") pod \"60b1dc8b-14f4-44bd-bbcb-4a44e213a7d3\" (UID: \"60b1dc8b-14f4-44bd-bbcb-4a44e213a7d3\") " Jan 17 11:57:09.473497 kubelet[2547]: I0117 11:57:09.473325 2547 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/60b1dc8b-14f4-44bd-bbcb-4a44e213a7d3-cilium-config-path\") pod \"60b1dc8b-14f4-44bd-bbcb-4a44e213a7d3\" (UID: \"60b1dc8b-14f4-44bd-bbcb-4a44e213a7d3\") " Jan 17 11:57:09.480682 kubelet[2547]: I0117 11:57:09.480626 2547 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60b1dc8b-14f4-44bd-bbcb-4a44e213a7d3-kube-api-access-ggk85" (OuterVolumeSpecName: "kube-api-access-ggk85") pod "60b1dc8b-14f4-44bd-bbcb-4a44e213a7d3" (UID: "60b1dc8b-14f4-44bd-bbcb-4a44e213a7d3"). InnerVolumeSpecName "kube-api-access-ggk85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 17 11:57:09.480819 kubelet[2547]: I0117 11:57:09.480767 2547 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60b1dc8b-14f4-44bd-bbcb-4a44e213a7d3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "60b1dc8b-14f4-44bd-bbcb-4a44e213a7d3" (UID: "60b1dc8b-14f4-44bd-bbcb-4a44e213a7d3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 11:57:09.573978 kubelet[2547]: I0117 11:57:09.573820 2547 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-etc-cni-netd\") pod \"13588dc8-6163-402e-85fd-bedbe38684ff\" (UID: \"13588dc8-6163-402e-85fd-bedbe38684ff\") " Jan 17 11:57:09.573978 kubelet[2547]: I0117 11:57:09.573861 2547 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/13588dc8-6163-402e-85fd-bedbe38684ff-clustermesh-secrets\") pod \"13588dc8-6163-402e-85fd-bedbe38684ff\" (UID: \"13588dc8-6163-402e-85fd-bedbe38684ff\") " Jan 17 11:57:09.573978 kubelet[2547]: I0117 11:57:09.573881 2547 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-cni-path\") pod \"13588dc8-6163-402e-85fd-bedbe38684ff\" (UID: \"13588dc8-6163-402e-85fd-bedbe38684ff\") " Jan 17 11:57:09.573978 kubelet[2547]: I0117 11:57:09.573897 2547 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-hostproc\") pod \"13588dc8-6163-402e-85fd-bedbe38684ff\" (UID: \"13588dc8-6163-402e-85fd-bedbe38684ff\") " Jan 17 11:57:09.573978 kubelet[2547]: I0117 11:57:09.573911 2547 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-host-proc-sys-kernel\") pod \"13588dc8-6163-402e-85fd-bedbe38684ff\" (UID: \"13588dc8-6163-402e-85fd-bedbe38684ff\") " Jan 17 11:57:09.573978 kubelet[2547]: I0117 11:57:09.573948 2547 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-cilium-run\") pod \"13588dc8-6163-402e-85fd-bedbe38684ff\" (UID: \"13588dc8-6163-402e-85fd-bedbe38684ff\") " Jan 17 11:57:09.574197 kubelet[2547]: I0117 11:57:09.573962 2547 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-lib-modules\") pod \"13588dc8-6163-402e-85fd-bedbe38684ff\" (UID: \"13588dc8-6163-402e-85fd-bedbe38684ff\") " Jan 17 11:57:09.574197 kubelet[2547]: I0117 11:57:09.573983 2547 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-bpf-maps\") pod \"13588dc8-6163-402e-85fd-bedbe38684ff\" (UID: \"13588dc8-6163-402e-85fd-bedbe38684ff\") " Jan 17 11:57:09.574197 kubelet[2547]: I0117 11:57:09.574001 2547 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/13588dc8-6163-402e-85fd-bedbe38684ff-hubble-tls\") pod \"13588dc8-6163-402e-85fd-bedbe38684ff\" (UID: \"13588dc8-6163-402e-85fd-bedbe38684ff\") " Jan 17 11:57:09.574197 kubelet[2547]: I0117 11:57:09.574014 2547 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-xtables-lock\") pod \"13588dc8-6163-402e-85fd-bedbe38684ff\" (UID: \"13588dc8-6163-402e-85fd-bedbe38684ff\") "
Jan 17 11:57:09.574197 kubelet[2547]: I0117 11:57:09.574016 2547 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-cni-path" (OuterVolumeSpecName: "cni-path") pod "13588dc8-6163-402e-85fd-bedbe38684ff" (UID: "13588dc8-6163-402e-85fd-bedbe38684ff"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 11:57:09.574197 kubelet[2547]: I0117 11:57:09.574052 2547 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "13588dc8-6163-402e-85fd-bedbe38684ff" (UID: "13588dc8-6163-402e-85fd-bedbe38684ff"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 11:57:09.574329 kubelet[2547]: I0117 11:57:09.574029 2547 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-cilium-cgroup\") pod \"13588dc8-6163-402e-85fd-bedbe38684ff\" (UID: \"13588dc8-6163-402e-85fd-bedbe38684ff\") " Jan 17 11:57:09.574329 kubelet[2547]: I0117 11:57:09.574074 2547 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-hostproc" (OuterVolumeSpecName: "hostproc") pod "13588dc8-6163-402e-85fd-bedbe38684ff" (UID: "13588dc8-6163-402e-85fd-bedbe38684ff"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 11:57:09.574329 kubelet[2547]: I0117 11:57:09.574090 2547 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfs26\" (UniqueName: \"kubernetes.io/projected/13588dc8-6163-402e-85fd-bedbe38684ff-kube-api-access-xfs26\") pod \"13588dc8-6163-402e-85fd-bedbe38684ff\" (UID: \"13588dc8-6163-402e-85fd-bedbe38684ff\") " Jan 17 11:57:09.574329 kubelet[2547]: I0117 11:57:09.574114 2547 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/13588dc8-6163-402e-85fd-bedbe38684ff-cilium-config-path\") pod \"13588dc8-6163-402e-85fd-bedbe38684ff\" (UID: \"13588dc8-6163-402e-85fd-bedbe38684ff\") " Jan 17 11:57:09.574329 kubelet[2547]: I0117 11:57:09.574131 2547 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-host-proc-sys-net\") pod \"13588dc8-6163-402e-85fd-bedbe38684ff\" (UID: \"13588dc8-6163-402e-85fd-bedbe38684ff\") " Jan 17 11:57:09.574329 kubelet[2547]: I0117 11:57:09.574167 2547 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 17 11:57:09.574456 kubelet[2547]: I0117 11:57:09.574178 2547 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 17 11:57:09.574456 kubelet[2547]: I0117 11:57:09.574187 2547 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 17 11:57:09.574456 kubelet[2547]: I0117 11:57:09.574195 2547
reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ggk85\" (UniqueName: \"kubernetes.io/projected/60b1dc8b-14f4-44bd-bbcb-4a44e213a7d3-kube-api-access-ggk85\") on node \"localhost\" DevicePath \"\"" Jan 17 11:57:09.574456 kubelet[2547]: I0117 11:57:09.574204 2547 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/60b1dc8b-14f4-44bd-bbcb-4a44e213a7d3-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 17 11:57:09.574456 kubelet[2547]: I0117 11:57:09.574089 2547 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "13588dc8-6163-402e-85fd-bedbe38684ff" (UID: "13588dc8-6163-402e-85fd-bedbe38684ff"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 11:57:09.574456 kubelet[2547]: I0117 11:57:09.574102 2547 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "13588dc8-6163-402e-85fd-bedbe38684ff" (UID: "13588dc8-6163-402e-85fd-bedbe38684ff"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 11:57:09.574580 kubelet[2547]: I0117 11:57:09.574110 2547 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "13588dc8-6163-402e-85fd-bedbe38684ff" (UID: "13588dc8-6163-402e-85fd-bedbe38684ff"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 11:57:09.574580 kubelet[2547]: I0117 11:57:09.574122 2547 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "13588dc8-6163-402e-85fd-bedbe38684ff" (UID: "13588dc8-6163-402e-85fd-bedbe38684ff"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 11:57:09.574580 kubelet[2547]: I0117 11:57:09.574222 2547 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "13588dc8-6163-402e-85fd-bedbe38684ff" (UID: "13588dc8-6163-402e-85fd-bedbe38684ff"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 11:57:09.574580 kubelet[2547]: I0117 11:57:09.574235 2547 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "13588dc8-6163-402e-85fd-bedbe38684ff" (UID: "13588dc8-6163-402e-85fd-bedbe38684ff"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 11:57:09.574580 kubelet[2547]: I0117 11:57:09.574379 2547 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "13588dc8-6163-402e-85fd-bedbe38684ff" (UID: "13588dc8-6163-402e-85fd-bedbe38684ff"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 11:57:09.576288 kubelet[2547]: I0117 11:57:09.576241 2547 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13588dc8-6163-402e-85fd-bedbe38684ff-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "13588dc8-6163-402e-85fd-bedbe38684ff" (UID: "13588dc8-6163-402e-85fd-bedbe38684ff"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 11:57:09.576385 kubelet[2547]: I0117 11:57:09.576343 2547 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13588dc8-6163-402e-85fd-bedbe38684ff-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "13588dc8-6163-402e-85fd-bedbe38684ff" (UID: "13588dc8-6163-402e-85fd-bedbe38684ff"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 17 11:57:09.577059 kubelet[2547]: I0117 11:57:09.577018 2547 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13588dc8-6163-402e-85fd-bedbe38684ff-kube-api-access-xfs26" (OuterVolumeSpecName: "kube-api-access-xfs26") pod "13588dc8-6163-402e-85fd-bedbe38684ff" (UID: "13588dc8-6163-402e-85fd-bedbe38684ff"). InnerVolumeSpecName "kube-api-access-xfs26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 11:57:09.578027 kubelet[2547]: I0117 11:57:09.577993 2547 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13588dc8-6163-402e-85fd-bedbe38684ff-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "13588dc8-6163-402e-85fd-bedbe38684ff" (UID: "13588dc8-6163-402e-85fd-bedbe38684ff"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 11:57:09.675021 kubelet[2547]: I0117 11:57:09.674971 2547 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-xfs26\" (UniqueName: \"kubernetes.io/projected/13588dc8-6163-402e-85fd-bedbe38684ff-kube-api-access-xfs26\") on node \"localhost\" DevicePath \"\"" Jan 17 11:57:09.675021 kubelet[2547]: I0117 11:57:09.675014 2547 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/13588dc8-6163-402e-85fd-bedbe38684ff-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 17 11:57:09.675140 kubelet[2547]: I0117 11:57:09.675030 2547 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 17 11:57:09.675140 kubelet[2547]: I0117 11:57:09.675048 2547 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 17 11:57:09.675140 kubelet[2547]: I0117 11:57:09.675063 2547 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/13588dc8-6163-402e-85fd-bedbe38684ff-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 17 11:57:09.675140 kubelet[2547]: I0117 11:57:09.675075 2547 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 17 11:57:09.675140 kubelet[2547]: I0117 11:57:09.675083 2547 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/13588dc8-6163-402e-85fd-bedbe38684ff-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 17 11:57:09.675140 kubelet[2547]: I0117 11:57:09.675090 2547 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 17 11:57:09.675140 kubelet[2547]: I0117 11:57:09.675097 2547 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 17 11:57:09.675140 kubelet[2547]: I0117 11:57:09.675104 2547 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 17 11:57:09.675307 kubelet[2547]: I0117 11:57:09.675112 2547 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/13588dc8-6163-402e-85fd-bedbe38684ff-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 17 11:57:10.132958 systemd[1]: Removed slice kubepods-burstable-pod13588dc8_6163_402e_85fd_bedbe38684ff.slice - libcontainer container kubepods-burstable-pod13588dc8_6163_402e_85fd_bedbe38684ff.slice. Jan 17 11:57:10.133239 systemd[1]: kubepods-burstable-pod13588dc8_6163_402e_85fd_bedbe38684ff.slice: Consumed 6.547s CPU time. 
Jan 17 11:57:10.134359 systemd[1]: Removed slice kubepods-besteffort-pod60b1dc8b_14f4_44bd_bbcb_4a44e213a7d3.slice - libcontainer container kubepods-besteffort-pod60b1dc8b_14f4_44bd_bbcb_4a44e213a7d3.slice. Jan 17 11:57:10.227908 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d601803231b00e73af1b2784798274b68a8766bf646346187f63e2942b5a137a-rootfs.mount: Deactivated successfully. Jan 17 11:57:10.228021 systemd[1]: var-lib-kubelet-pods-60b1dc8b\x2d14f4\x2d44bd\x2dbbcb\x2d4a44e213a7d3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dggk85.mount: Deactivated successfully. Jan 17 11:57:10.228082 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86f6c8f6380414ae98412f2cd708abb9bd19e1e1a51ca2672dd16fd14e1605c6-rootfs.mount: Deactivated successfully. Jan 17 11:57:10.228133 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-86f6c8f6380414ae98412f2cd708abb9bd19e1e1a51ca2672dd16fd14e1605c6-shm.mount: Deactivated successfully. Jan 17 11:57:10.228190 systemd[1]: var-lib-kubelet-pods-13588dc8\x2d6163\x2d402e\x2d85fd\x2dbedbe38684ff-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxfs26.mount: Deactivated successfully. Jan 17 11:57:10.228243 systemd[1]: var-lib-kubelet-pods-13588dc8\x2d6163\x2d402e\x2d85fd\x2dbedbe38684ff-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 17 11:57:10.228294 systemd[1]: var-lib-kubelet-pods-13588dc8\x2d6163\x2d402e\x2d85fd\x2dbedbe38684ff-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 17 11:57:10.349041 kubelet[2547]: I0117 11:57:10.348665 2547 scope.go:117] "RemoveContainer" containerID="eed85e29b6dc563bda75871d9a858c758d3f01d4b8a73fe0a7a8bc54d6578dda" Jan 17 11:57:10.351197 containerd[1440]: time="2025-01-17T11:57:10.351157056Z" level=info msg="RemoveContainer for \"eed85e29b6dc563bda75871d9a858c758d3f01d4b8a73fe0a7a8bc54d6578dda\"" Jan 17 11:57:10.357507 containerd[1440]: time="2025-01-17T11:57:10.357473880Z" level=info msg="RemoveContainer for \"eed85e29b6dc563bda75871d9a858c758d3f01d4b8a73fe0a7a8bc54d6578dda\" returns successfully" Jan 17 11:57:10.357727 kubelet[2547]: I0117 11:57:10.357702 2547 scope.go:117] "RemoveContainer" containerID="ede7c8cfb6851f0e9d8503559d56d64ab076215e2b958ae27e7ee25df5e86409" Jan 17 11:57:10.359517 containerd[1440]: time="2025-01-17T11:57:10.359489806Z" level=info msg="RemoveContainer for \"ede7c8cfb6851f0e9d8503559d56d64ab076215e2b958ae27e7ee25df5e86409\"" Jan 17 11:57:10.362296 containerd[1440]: time="2025-01-17T11:57:10.362261349Z" level=info msg="RemoveContainer for \"ede7c8cfb6851f0e9d8503559d56d64ab076215e2b958ae27e7ee25df5e86409\" returns successfully" Jan 17 11:57:10.362955 kubelet[2547]: I0117 11:57:10.362429 2547 scope.go:117] "RemoveContainer" containerID="7ca1ae9212204508b167fc070320481c0e2aec5c0df4b7955123a2dbfdfc07da" Jan 17 11:57:10.364189 containerd[1440]: time="2025-01-17T11:57:10.364162232Z" level=info msg="RemoveContainer for \"7ca1ae9212204508b167fc070320481c0e2aec5c0df4b7955123a2dbfdfc07da\"" Jan 17 11:57:10.373209 containerd[1440]: time="2025-01-17T11:57:10.372865351Z" level=info msg="RemoveContainer for \"7ca1ae9212204508b167fc070320481c0e2aec5c0df4b7955123a2dbfdfc07da\" returns successfully" Jan 17 11:57:10.373454 kubelet[2547]: I0117 11:57:10.373429 2547 scope.go:117] "RemoveContainer" containerID="5c582c8f8c181d59598372f95ee18a63e9e3b5e4b4bcad78a606d283351c232a" Jan 17 11:57:10.374358 containerd[1440]: time="2025-01-17T11:57:10.374332944Z" level=info msg="RemoveContainer 
for \"5c582c8f8c181d59598372f95ee18a63e9e3b5e4b4bcad78a606d283351c232a\"" Jan 17 11:57:10.376463 containerd[1440]: time="2025-01-17T11:57:10.376418592Z" level=info msg="RemoveContainer for \"5c582c8f8c181d59598372f95ee18a63e9e3b5e4b4bcad78a606d283351c232a\" returns successfully" Jan 17 11:57:10.376701 kubelet[2547]: I0117 11:57:10.376665 2547 scope.go:117] "RemoveContainer" containerID="f636b41f7fd784175e2b3e256ea452c32cca00f5a80ce6d02e0275677b6d0b57" Jan 17 11:57:10.377634 containerd[1440]: time="2025-01-17T11:57:10.377585938Z" level=info msg="RemoveContainer for \"f636b41f7fd784175e2b3e256ea452c32cca00f5a80ce6d02e0275677b6d0b57\"" Jan 17 11:57:10.379610 containerd[1440]: time="2025-01-17T11:57:10.379581184Z" level=info msg="RemoveContainer for \"f636b41f7fd784175e2b3e256ea452c32cca00f5a80ce6d02e0275677b6d0b57\" returns successfully" Jan 17 11:57:10.379827 kubelet[2547]: I0117 11:57:10.379750 2547 scope.go:117] "RemoveContainer" containerID="3ddae8854e3be939902ee52ed1b5e5f62cc43cd70eab8e7423f550e5ba6e3b5f" Jan 17 11:57:10.380755 containerd[1440]: time="2025-01-17T11:57:10.380716690Z" level=info msg="RemoveContainer for \"3ddae8854e3be939902ee52ed1b5e5f62cc43cd70eab8e7423f550e5ba6e3b5f\"" Jan 17 11:57:10.382856 containerd[1440]: time="2025-01-17T11:57:10.382820778Z" level=info msg="RemoveContainer for \"3ddae8854e3be939902ee52ed1b5e5f62cc43cd70eab8e7423f550e5ba6e3b5f\" returns successfully" Jan 17 11:57:10.383292 kubelet[2547]: I0117 11:57:10.383124 2547 scope.go:117] "RemoveContainer" containerID="ede7c8cfb6851f0e9d8503559d56d64ab076215e2b958ae27e7ee25df5e86409" Jan 17 11:57:10.383648 containerd[1440]: time="2025-01-17T11:57:10.383611076Z" level=error msg="ContainerStatus for \"ede7c8cfb6851f0e9d8503559d56d64ab076215e2b958ae27e7ee25df5e86409\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ede7c8cfb6851f0e9d8503559d56d64ab076215e2b958ae27e7ee25df5e86409\": not found" Jan 17 11:57:10.383850 kubelet[2547]: E0117 11:57:10.383822 2547 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ede7c8cfb6851f0e9d8503559d56d64ab076215e2b958ae27e7ee25df5e86409\": not found" containerID="ede7c8cfb6851f0e9d8503559d56d64ab076215e2b958ae27e7ee25df5e86409" Jan 17 11:57:10.383954 kubelet[2547]: I0117 11:57:10.383859 2547 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ede7c8cfb6851f0e9d8503559d56d64ab076215e2b958ae27e7ee25df5e86409"} err="failed to get container status \"ede7c8cfb6851f0e9d8503559d56d64ab076215e2b958ae27e7ee25df5e86409\": rpc error: code = NotFound desc = an error occurred when try to find container \"ede7c8cfb6851f0e9d8503559d56d64ab076215e2b958ae27e7ee25df5e86409\": not found" Jan 17 11:57:10.384028 kubelet[2547]: I0117 11:57:10.383956 2547 scope.go:117] "RemoveContainer" containerID="7ca1ae9212204508b167fc070320481c0e2aec5c0df4b7955123a2dbfdfc07da" Jan 17 11:57:10.384423 containerd[1440]: time="2025-01-17T11:57:10.384184289Z" level=error msg="ContainerStatus for \"7ca1ae9212204508b167fc070320481c0e2aec5c0df4b7955123a2dbfdfc07da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7ca1ae9212204508b167fc070320481c0e2aec5c0df4b7955123a2dbfdfc07da\": not found" Jan 17 11:57:10.384501 kubelet[2547]: E0117 11:57:10.384319 2547 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred 
when try to find container \"7ca1ae9212204508b167fc070320481c0e2aec5c0df4b7955123a2dbfdfc07da\": not found" containerID="7ca1ae9212204508b167fc070320481c0e2aec5c0df4b7955123a2dbfdfc07da" Jan 17 11:57:10.384501 kubelet[2547]: I0117 11:57:10.384345 2547 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7ca1ae9212204508b167fc070320481c0e2aec5c0df4b7955123a2dbfdfc07da"} err="failed to get container status \"7ca1ae9212204508b167fc070320481c0e2aec5c0df4b7955123a2dbfdfc07da\": rpc error: code = NotFound desc = an error occurred when try to find container \"7ca1ae9212204508b167fc070320481c0e2aec5c0df4b7955123a2dbfdfc07da\": not found" Jan 17 11:57:10.384501 kubelet[2547]: I0117 11:57:10.384361 2547 scope.go:117] "RemoveContainer" containerID="5c582c8f8c181d59598372f95ee18a63e9e3b5e4b4bcad78a606d283351c232a" Jan 17 11:57:10.384564 containerd[1440]: time="2025-01-17T11:57:10.384529977Z" level=error msg="ContainerStatus for \"5c582c8f8c181d59598372f95ee18a63e9e3b5e4b4bcad78a606d283351c232a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5c582c8f8c181d59598372f95ee18a63e9e3b5e4b4bcad78a606d283351c232a\": not found" Jan 17 11:57:10.384781 kubelet[2547]: E0117 11:57:10.384679 2547 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5c582c8f8c181d59598372f95ee18a63e9e3b5e4b4bcad78a606d283351c232a\": not found" containerID="5c582c8f8c181d59598372f95ee18a63e9e3b5e4b4bcad78a606d283351c232a" Jan 17 11:57:10.384781 kubelet[2547]: I0117 11:57:10.384732 2547 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5c582c8f8c181d59598372f95ee18a63e9e3b5e4b4bcad78a606d283351c232a"} err="failed to get container status \"5c582c8f8c181d59598372f95ee18a63e9e3b5e4b4bcad78a606d283351c232a\": rpc error: code = NotFound desc = an error occurred when try to find container \"5c582c8f8c181d59598372f95ee18a63e9e3b5e4b4bcad78a606d283351c232a\": not found" Jan 17 11:57:10.384781 kubelet[2547]: I0117 11:57:10.384747 2547 scope.go:117] "RemoveContainer" containerID="f636b41f7fd784175e2b3e256ea452c32cca00f5a80ce6d02e0275677b6d0b57" Jan 17 11:57:10.385097 containerd[1440]: time="2025-01-17T11:57:10.385063469Z" level=error msg="ContainerStatus for \"f636b41f7fd784175e2b3e256ea452c32cca00f5a80ce6d02e0275677b6d0b57\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f636b41f7fd784175e2b3e256ea452c32cca00f5a80ce6d02e0275677b6d0b57\": not found" Jan 17 11:57:10.385229 kubelet[2547]: E0117 11:57:10.385194 2547 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f636b41f7fd784175e2b3e256ea452c32cca00f5a80ce6d02e0275677b6d0b57\": not found" containerID="f636b41f7fd784175e2b3e256ea452c32cca00f5a80ce6d02e0275677b6d0b57" Jan 17 11:57:10.385229 kubelet[2547]: I0117 11:57:10.385221 2547 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f636b41f7fd784175e2b3e256ea452c32cca00f5a80ce6d02e0275677b6d0b57"} err="failed to get container status \"f636b41f7fd784175e2b3e256ea452c32cca00f5a80ce6d02e0275677b6d0b57\": rpc error: code = NotFound desc = an error occurred when try to find container \"f636b41f7fd784175e2b3e256ea452c32cca00f5a80ce6d02e0275677b6d0b57\": not found" Jan 17 11:57:10.385662 kubelet[2547]: 
I0117 11:57:10.385237 2547 scope.go:117] "RemoveContainer" containerID="3ddae8854e3be939902ee52ed1b5e5f62cc43cd70eab8e7423f550e5ba6e3b5f" Jan 17 11:57:10.385662 kubelet[2547]: E0117 11:57:10.385535 2547 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3ddae8854e3be939902ee52ed1b5e5f62cc43cd70eab8e7423f550e5ba6e3b5f\": not found" containerID="3ddae8854e3be939902ee52ed1b5e5f62cc43cd70eab8e7423f550e5ba6e3b5f" Jan 17 11:57:10.385662 kubelet[2547]: I0117 11:57:10.385558 2547 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3ddae8854e3be939902ee52ed1b5e5f62cc43cd70eab8e7423f550e5ba6e3b5f"} err="failed to get container status \"3ddae8854e3be939902ee52ed1b5e5f62cc43cd70eab8e7423f550e5ba6e3b5f\": rpc error: code = NotFound desc = an error occurred when try to find container \"3ddae8854e3be939902ee52ed1b5e5f62cc43cd70eab8e7423f550e5ba6e3b5f\": not found" Jan 17 11:57:10.385731 containerd[1440]: time="2025-01-17T11:57:10.385395716Z" level=error msg="ContainerStatus for \"3ddae8854e3be939902ee52ed1b5e5f62cc43cd70eab8e7423f550e5ba6e3b5f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3ddae8854e3be939902ee52ed1b5e5f62cc43cd70eab8e7423f550e5ba6e3b5f\": not found" Jan 17 11:57:11.157060 sshd[4166]: pam_unix(sshd:session): session closed for user core Jan 17 11:57:11.169521 systemd[1]: sshd@21-10.0.0.10:22-10.0.0.1:34408.service: Deactivated successfully. Jan 17 11:57:11.172282 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 11:57:11.172433 systemd[1]: session-22.scope: Consumed 1.653s CPU time. Jan 17 11:57:11.173618 systemd-logind[1413]: Session 22 logged out. Waiting for processes to exit. Jan 17 11:57:11.175033 systemd[1]: Started sshd@22-10.0.0.10:22-10.0.0.1:34418.service - OpenSSH per-connection server daemon (10.0.0.1:34418). Jan 17 11:57:11.175692 systemd-logind[1413]: Removed session 22. Jan 17 11:57:11.211859 sshd[4330]: Accepted publickey for core from 10.0.0.1 port 34418 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:57:11.213093 sshd[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:57:11.216530 systemd-logind[1413]: New session 23 of user core. Jan 17 11:57:11.223105 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 11:57:11.908256 sshd[4330]: pam_unix(sshd:session): session closed for user core Jan 17 11:57:11.917121 systemd[1]: sshd@22-10.0.0.10:22-10.0.0.1:34418.service: Deactivated successfully. Jan 17 11:57:11.919458 systemd[1]: session-23.scope: Deactivated successfully. 
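
The mount units deactivated at 11:57:10 above (var-lib-kubelet-pods-60b1dc8b\x2d14f4\x2d44bd\x2dbbcb\x2d4a44e213a7d3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dggk85.mount and friends) show systemd's path-to-unit-name escaping: '/' maps to '-', while bytes such as '-' and '~' are hex-escaped to \x2d and \x7e. A small Go approximation follows; it is simplified (it omits the leading-dot rule, for example; systemd-escape(1) has the authoritative behaviour).

// Sketch of systemd's path escaping, which explains the \x2d and \x7e
// runs in the .mount unit names above: '/' becomes '-', and bytes outside
// [a-zA-Z0-9:_.] are written as \xXX in lowercase hex. Simplified.
package main

import "fmt"

func systemdEscapePath(p string) string {
	// Strip leading slashes, as systemd does when naming mount units.
	for len(p) > 0 && p[0] == '/' {
		p = p[1:]
	}
	out := make([]byte, 0, len(p))
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			out = append(out, '-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			out = append(out, c)
		default:
			// '-' and '~' land here, producing \x2d and \x7e.
			out = append(out, fmt.Sprintf(`\x%02x`, c)...)
		}
	}
	return string(out)
}

func main() {
	p := "/var/lib/kubelet/pods/60b1dc8b-14f4-44bd-bbcb-4a44e213a7d3/volumes/kubernetes.io~projected/kube-api-access-ggk85"
	// Prints the same unit name that systemd deactivates in the log above.
	fmt.Println(systemdEscapePath(p) + ".mount")
}
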
Jan 17 11:57:11.923168 kubelet[2547]: I0117 11:57:11.923112 2547 topology_manager.go:215] "Topology Admit Handler" podUID="b41f6c8c-63d6-4415-94f7-ec4746b50527" podNamespace="kube-system" podName="cilium-5q92k" Jan 17 11:57:11.923423 kubelet[2547]: E0117 11:57:11.923241 2547 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="13588dc8-6163-402e-85fd-bedbe38684ff" containerName="mount-bpf-fs" Jan 17 11:57:11.923423 kubelet[2547]: E0117 11:57:11.923251 2547 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="13588dc8-6163-402e-85fd-bedbe38684ff" containerName="clean-cilium-state" Jan 17 11:57:11.923423 kubelet[2547]: E0117 11:57:11.923257 2547 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="13588dc8-6163-402e-85fd-bedbe38684ff" containerName="cilium-agent" Jan 17 11:57:11.923423 kubelet[2547]: E0117 11:57:11.923264 2547 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="13588dc8-6163-402e-85fd-bedbe38684ff" containerName="apply-sysctl-overwrites" Jan 17 11:57:11.923423 kubelet[2547]: E0117 11:57:11.923270 2547 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="13588dc8-6163-402e-85fd-bedbe38684ff" containerName="mount-cgroup" Jan 17 11:57:11.923423 kubelet[2547]: E0117 11:57:11.923276 2547 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="60b1dc8b-14f4-44bd-bbcb-4a44e213a7d3" containerName="cilium-operator" Jan 17 11:57:11.923423 kubelet[2547]: I0117 11:57:11.923296 2547 memory_manager.go:354] "RemoveStaleState removing state" podUID="13588dc8-6163-402e-85fd-bedbe38684ff" containerName="cilium-agent" Jan 17 11:57:11.923423 kubelet[2547]: I0117 11:57:11.923303 2547 memory_manager.go:354] "RemoveStaleState removing state" podUID="60b1dc8b-14f4-44bd-bbcb-4a44e213a7d3" containerName="cilium-operator" Jan 17 11:57:11.926074 systemd-logind[1413]: Session 23 logged out. Waiting for processes to exit. Jan 17 11:57:11.936189 systemd[1]: Started sshd@23-10.0.0.10:22-10.0.0.1:34422.service - OpenSSH per-connection server daemon (10.0.0.1:34422). Jan 17 11:57:11.939325 systemd-logind[1413]: Removed session 23. Jan 17 11:57:11.944420 systemd[1]: Created slice kubepods-burstable-podb41f6c8c_63d6_4415_94f7_ec4746b50527.slice - libcontainer container kubepods-burstable-podb41f6c8c_63d6_4415_94f7_ec4746b50527.slice. Jan 17 11:57:11.977050 sshd[4343]: Accepted publickey for core from 10.0.0.1 port 34422 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:57:11.977567 sshd[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:57:11.981492 systemd-logind[1413]: New session 24 of user core. Jan 17 11:57:11.985070 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 11:57:12.039146 sshd[4343]: pam_unix(sshd:session): session closed for user core Jan 17 11:57:12.048335 systemd[1]: sshd@23-10.0.0.10:22-10.0.0.1:34422.service: Deactivated successfully. Jan 17 11:57:12.049868 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 11:57:12.051635 systemd-logind[1413]: Session 24 logged out. Waiting for processes to exit. Jan 17 11:57:12.052833 systemd[1]: Started sshd@24-10.0.0.10:22-10.0.0.1:34432.service - OpenSSH per-connection server daemon (10.0.0.1:34432). Jan 17 11:57:12.053702 systemd-logind[1413]: Removed session 24. 
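
Note the cgroup slice names in the surrounding entries: the removed pods lived under kubepods-burstable-pod13588dc8_6163_402e_85fd_bedbe38684ff.slice and kubepods-besteffort-pod60b1dc8b_14f4_44bd_bbcb_4a44e213a7d3.slice, and the new pod is created under kubepods-burstable-podb41f6c8c_63d6_4415_94f7_ec4746b50527.slice. The convention, as inferred from these lines, is QoS class plus "pod" plus the pod UID with its dashes rewritten to underscores (so the UID survives systemd's unit-name rules). A hypothetical Go helper:

// Sketch of the slice-name convention visible in the systemd entries
// above. The helper name is invented; only the output format is taken
// from the log.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qos, uid string) string {
	return "kubepods-" + qos + "-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
}

func main() {
	fmt.Println(podSliceName("burstable", "b41f6c8c-63d6-4415-94f7-ec4746b50527"))
	// kubepods-burstable-podb41f6c8c_63d6_4415_94f7_ec4746b50527.slice
}
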
Jan 17 11:57:12.089283 sshd[4351]: Accepted publickey for core from 10.0.0.1 port 34432 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 11:57:12.089672 kubelet[2547]: I0117 11:57:12.089638 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b41f6c8c-63d6-4415-94f7-ec4746b50527-cilium-ipsec-secrets\") pod \"cilium-5q92k\" (UID: \"b41f6c8c-63d6-4415-94f7-ec4746b50527\") " pod="kube-system/cilium-5q92k" Jan 17 11:57:12.089725 kubelet[2547]: I0117 11:57:12.089680 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b41f6c8c-63d6-4415-94f7-ec4746b50527-cilium-config-path\") pod \"cilium-5q92k\" (UID: \"b41f6c8c-63d6-4415-94f7-ec4746b50527\") " pod="kube-system/cilium-5q92k" Jan 17 11:57:12.089725 kubelet[2547]: I0117 11:57:12.089706 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvn4v\" (UniqueName: \"kubernetes.io/projected/b41f6c8c-63d6-4415-94f7-ec4746b50527-kube-api-access-rvn4v\") pod \"cilium-5q92k\" (UID: \"b41f6c8c-63d6-4415-94f7-ec4746b50527\") " pod="kube-system/cilium-5q92k" Jan 17 11:57:12.089780 kubelet[2547]: I0117 11:57:12.089723 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b41f6c8c-63d6-4415-94f7-ec4746b50527-hostproc\") pod \"cilium-5q92k\" (UID: \"b41f6c8c-63d6-4415-94f7-ec4746b50527\") " pod="kube-system/cilium-5q92k" Jan 17 11:57:12.089780 kubelet[2547]: I0117 11:57:12.089740 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b41f6c8c-63d6-4415-94f7-ec4746b50527-clustermesh-secrets\") pod \"cilium-5q92k\" (UID: \"b41f6c8c-63d6-4415-94f7-ec4746b50527\") " pod="kube-system/cilium-5q92k" Jan 17 11:57:12.089780 kubelet[2547]: I0117 11:57:12.089755 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b41f6c8c-63d6-4415-94f7-ec4746b50527-host-proc-sys-net\") pod \"cilium-5q92k\" (UID: \"b41f6c8c-63d6-4415-94f7-ec4746b50527\") " pod="kube-system/cilium-5q92k" Jan 17 11:57:12.089780 kubelet[2547]: I0117 11:57:12.089771 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b41f6c8c-63d6-4415-94f7-ec4746b50527-xtables-lock\") pod \"cilium-5q92k\" (UID: \"b41f6c8c-63d6-4415-94f7-ec4746b50527\") " pod="kube-system/cilium-5q92k" Jan 17 11:57:12.089887 kubelet[2547]: I0117 11:57:12.089786 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b41f6c8c-63d6-4415-94f7-ec4746b50527-hubble-tls\") pod \"cilium-5q92k\" (UID: \"b41f6c8c-63d6-4415-94f7-ec4746b50527\") " pod="kube-system/cilium-5q92k" Jan 17 11:57:12.089887 kubelet[2547]: I0117 11:57:12.089802 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b41f6c8c-63d6-4415-94f7-ec4746b50527-cilium-run\") pod \"cilium-5q92k\" (UID: \"b41f6c8c-63d6-4415-94f7-ec4746b50527\") " pod="kube-system/cilium-5q92k" Jan 17 
11:57:12.089887 kubelet[2547]: I0117 11:57:12.089823 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b41f6c8c-63d6-4415-94f7-ec4746b50527-cni-path\") pod \"cilium-5q92k\" (UID: \"b41f6c8c-63d6-4415-94f7-ec4746b50527\") " pod="kube-system/cilium-5q92k" Jan 17 11:57:12.089887 kubelet[2547]: I0117 11:57:12.089841 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b41f6c8c-63d6-4415-94f7-ec4746b50527-etc-cni-netd\") pod \"cilium-5q92k\" (UID: \"b41f6c8c-63d6-4415-94f7-ec4746b50527\") " pod="kube-system/cilium-5q92k" Jan 17 11:57:12.089887 kubelet[2547]: I0117 11:57:12.089856 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b41f6c8c-63d6-4415-94f7-ec4746b50527-lib-modules\") pod \"cilium-5q92k\" (UID: \"b41f6c8c-63d6-4415-94f7-ec4746b50527\") " pod="kube-system/cilium-5q92k" Jan 17 11:57:12.089887 kubelet[2547]: I0117 11:57:12.089873 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b41f6c8c-63d6-4415-94f7-ec4746b50527-host-proc-sys-kernel\") pod \"cilium-5q92k\" (UID: \"b41f6c8c-63d6-4415-94f7-ec4746b50527\") " pod="kube-system/cilium-5q92k" Jan 17 11:57:12.090089 kubelet[2547]: I0117 11:57:12.089888 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b41f6c8c-63d6-4415-94f7-ec4746b50527-bpf-maps\") pod \"cilium-5q92k\" (UID: \"b41f6c8c-63d6-4415-94f7-ec4746b50527\") " pod="kube-system/cilium-5q92k" Jan 17 11:57:12.090089 kubelet[2547]: I0117 11:57:12.089905 2547 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b41f6c8c-63d6-4415-94f7-ec4746b50527-cilium-cgroup\") pod \"cilium-5q92k\" (UID: \"b41f6c8c-63d6-4415-94f7-ec4746b50527\") " pod="kube-system/cilium-5q92k" Jan 17 11:57:12.090993 sshd[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 11:57:12.094926 systemd-logind[1413]: New session 25 of user core. Jan 17 11:57:12.104054 systemd[1]: Started session-25.scope - Session 25 of User core. 
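
For reference, the VerifyControllerAttachedVolume burst above enumerates fifteen volumes for pod cilium-5q92k. Restated compactly as Go data, with the plugin kinds read directly from the UniqueName fields; nothing here is new information:

// Volume inventory of cilium-5q92k, copied from the attach lines above.
package main

import "fmt"

func main() {
	volumes := map[string]string{
		"cilium-ipsec-secrets":  "secret",
		"clustermesh-secrets":   "secret",
		"cilium-config-path":    "configmap",
		"kube-api-access-rvn4v": "projected",
		"hubble-tls":            "projected",
		// The remaining ten are plain host paths:
		"hostproc": "host-path", "host-proc-sys-net": "host-path",
		"host-proc-sys-kernel": "host-path", "xtables-lock": "host-path",
		"cilium-run": "host-path", "cni-path": "host-path",
		"etc-cni-netd": "host-path", "lib-modules": "host-path",
		"bpf-maps": "host-path", "cilium-cgroup": "host-path",
	}
	fmt.Println(len(volumes), "volumes") // 15 volumes
}
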
Jan 17 11:57:12.127472 kubelet[2547]: I0117 11:57:12.127428 2547 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13588dc8-6163-402e-85fd-bedbe38684ff" path="/var/lib/kubelet/pods/13588dc8-6163-402e-85fd-bedbe38684ff/volumes" Jan 17 11:57:12.128018 kubelet[2547]: I0117 11:57:12.127995 2547 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60b1dc8b-14f4-44bd-bbcb-4a44e213a7d3" path="/var/lib/kubelet/pods/60b1dc8b-14f4-44bd-bbcb-4a44e213a7d3/volumes" Jan 17 11:57:12.251230 kubelet[2547]: E0117 11:57:12.251087 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:57:12.251951 containerd[1440]: time="2025-01-17T11:57:12.251572693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5q92k,Uid:b41f6c8c-63d6-4415-94f7-ec4746b50527,Namespace:kube-system,Attempt:0,}" Jan 17 11:57:12.269237 containerd[1440]: time="2025-01-17T11:57:12.268554219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 11:57:12.269237 containerd[1440]: time="2025-01-17T11:57:12.268987829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 11:57:12.269237 containerd[1440]: time="2025-01-17T11:57:12.269011989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:57:12.269237 containerd[1440]: time="2025-01-17T11:57:12.269104031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 11:57:12.285094 systemd[1]: Started cri-containerd-9539147d5b6d119a6d01d178fda0305ae656247ef98223e3e4641c3a72594f12.scope - libcontainer container 9539147d5b6d119a6d01d178fda0305ae656247ef98223e3e4641c3a72594f12. 
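
The containerd entries throughout this journal are logfmt-style key=value records (time, level, msg, and so on). A deliberately small Go extractor for post-processing such lines follows; it is a sketch that assumes values are either bare tokens or double-quoted, retains the surrounding quotes, and does not unescape embedded \" sequences.

// Minimal, regex-based sketch for pulling fields out of the containerd
// logfmt entries above. Not a full logfmt parser.
package main

import (
	"fmt"
	"regexp"
)

// key=value where value is a double-quoted string (escapes allowed)
// or a run of non-space characters.
var field = regexp.MustCompile(`(\w+)=("([^"\\]|\\.)*"|\S+)`)

func parse(line string) map[string]string {
	out := map[string]string{}
	for _, m := range field.FindAllStringSubmatch(line, -1) {
		out[m[1]] = m[2] // value kept verbatim, quotes included
	}
	return out
}

func main() {
	// Abridged sample in the same shape as the entries above.
	l := `time="2025-01-17T11:57:12.303532494Z" level=info msg="RunPodSandbox returns sandbox id \"9539147d\""`
	kv := parse(l)
	fmt.Println(kv["level"], kv["msg"])
}
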
Jan 17 11:57:12.303568 containerd[1440]: time="2025-01-17T11:57:12.303532494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5q92k,Uid:b41f6c8c-63d6-4415-94f7-ec4746b50527,Namespace:kube-system,Attempt:0,} returns sandbox id \"9539147d5b6d119a6d01d178fda0305ae656247ef98223e3e4641c3a72594f12\"" Jan 17 11:57:12.304612 kubelet[2547]: E0117 11:57:12.304378 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:57:12.307020 containerd[1440]: time="2025-01-17T11:57:12.306992368Z" level=info msg="CreateContainer within sandbox \"9539147d5b6d119a6d01d178fda0305ae656247ef98223e3e4641c3a72594f12\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 11:57:12.319824 containerd[1440]: time="2025-01-17T11:57:12.319775964Z" level=info msg="CreateContainer within sandbox \"9539147d5b6d119a6d01d178fda0305ae656247ef98223e3e4641c3a72594f12\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f768a3fbb5f041e55a5bb8b1f79498bad81703ee00ba36f5a28ce62d444b0875\"" Jan 17 11:57:12.320482 containerd[1440]: time="2025-01-17T11:57:12.320308696Z" level=info msg="StartContainer for \"f768a3fbb5f041e55a5bb8b1f79498bad81703ee00ba36f5a28ce62d444b0875\"" Jan 17 11:57:12.347115 systemd[1]: Started cri-containerd-f768a3fbb5f041e55a5bb8b1f79498bad81703ee00ba36f5a28ce62d444b0875.scope - libcontainer container f768a3fbb5f041e55a5bb8b1f79498bad81703ee00ba36f5a28ce62d444b0875. Jan 17 11:57:12.376667 containerd[1440]: time="2025-01-17T11:57:12.376602150Z" level=info msg="StartContainer for \"f768a3fbb5f041e55a5bb8b1f79498bad81703ee00ba36f5a28ce62d444b0875\" returns successfully" Jan 17 11:57:12.391160 systemd[1]: cri-containerd-f768a3fbb5f041e55a5bb8b1f79498bad81703ee00ba36f5a28ce62d444b0875.scope: Deactivated successfully. 
Jan 17 11:57:12.424140 containerd[1440]: time="2025-01-17T11:57:12.423948612Z" level=info msg="shim disconnected" id=f768a3fbb5f041e55a5bb8b1f79498bad81703ee00ba36f5a28ce62d444b0875 namespace=k8s.io Jan 17 11:57:12.424140 containerd[1440]: time="2025-01-17T11:57:12.423997173Z" level=warning msg="cleaning up after shim disconnected" id=f768a3fbb5f041e55a5bb8b1f79498bad81703ee00ba36f5a28ce62d444b0875 namespace=k8s.io Jan 17 11:57:12.424140 containerd[1440]: time="2025-01-17T11:57:12.424007693Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 11:57:13.125900 kubelet[2547]: E0117 11:57:13.125867 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:57:13.169324 kubelet[2547]: E0117 11:57:13.169255 2547 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 11:57:13.365326 kubelet[2547]: E0117 11:57:13.365249 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:57:13.368551 containerd[1440]: time="2025-01-17T11:57:13.368518536Z" level=info msg="CreateContainer within sandbox \"9539147d5b6d119a6d01d178fda0305ae656247ef98223e3e4641c3a72594f12\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 11:57:13.399711 containerd[1440]: time="2025-01-17T11:57:13.399595628Z" level=info msg="CreateContainer within sandbox \"9539147d5b6d119a6d01d178fda0305ae656247ef98223e3e4641c3a72594f12\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"272629a21f12fd13faba6fde437b05e3a36ab17ea69c0e9e3ab66643a85f17df\"" Jan 17 11:57:13.401001 containerd[1440]: time="2025-01-17T11:57:13.400262722Z" level=info msg="StartContainer for \"272629a21f12fd13faba6fde437b05e3a36ab17ea69c0e9e3ab66643a85f17df\"" Jan 17 11:57:13.436061 systemd[1]: Started cri-containerd-272629a21f12fd13faba6fde437b05e3a36ab17ea69c0e9e3ab66643a85f17df.scope - libcontainer container 272629a21f12fd13faba6fde437b05e3a36ab17ea69c0e9e3ab66643a85f17df. Jan 17 11:57:13.467759 containerd[1440]: time="2025-01-17T11:57:13.467686697Z" level=info msg="StartContainer for \"272629a21f12fd13faba6fde437b05e3a36ab17ea69c0e9e3ab66643a85f17df\" returns successfully" Jan 17 11:57:13.478283 systemd[1]: cri-containerd-272629a21f12fd13faba6fde437b05e3a36ab17ea69c0e9e3ab66643a85f17df.scope: Deactivated successfully. 
Jan 17 11:57:13.504470 containerd[1440]: time="2025-01-17T11:57:13.504275025Z" level=info msg="shim disconnected" id=272629a21f12fd13faba6fde437b05e3a36ab17ea69c0e9e3ab66643a85f17df namespace=k8s.io Jan 17 11:57:13.504470 containerd[1440]: time="2025-01-17T11:57:13.504326866Z" level=warning msg="cleaning up after shim disconnected" id=272629a21f12fd13faba6fde437b05e3a36ab17ea69c0e9e3ab66643a85f17df namespace=k8s.io Jan 17 11:57:13.504470 containerd[1440]: time="2025-01-17T11:57:13.504334986Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 11:57:14.369108 kubelet[2547]: E0117 11:57:14.368935 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:57:14.375670 containerd[1440]: time="2025-01-17T11:57:14.375393219Z" level=info msg="CreateContainer within sandbox \"9539147d5b6d119a6d01d178fda0305ae656247ef98223e3e4641c3a72594f12\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 11:57:14.400281 containerd[1440]: time="2025-01-17T11:57:14.400220966Z" level=info msg="CreateContainer within sandbox \"9539147d5b6d119a6d01d178fda0305ae656247ef98223e3e4641c3a72594f12\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f001f1b0008809a4173151ec3bc7b53f65a36d11aa77882d63bdb5f68fedd2cf\"" Jan 17 11:57:14.401754 containerd[1440]: time="2025-01-17T11:57:14.401722677Z" level=info msg="StartContainer for \"f001f1b0008809a4173151ec3bc7b53f65a36d11aa77882d63bdb5f68fedd2cf\"" Jan 17 11:57:14.437070 systemd[1]: Started cri-containerd-f001f1b0008809a4173151ec3bc7b53f65a36d11aa77882d63bdb5f68fedd2cf.scope - libcontainer container f001f1b0008809a4173151ec3bc7b53f65a36d11aa77882d63bdb5f68fedd2cf. Jan 17 11:57:14.459501 containerd[1440]: time="2025-01-17T11:57:14.459445576Z" level=info msg="StartContainer for \"f001f1b0008809a4173151ec3bc7b53f65a36d11aa77882d63bdb5f68fedd2cf\" returns successfully" Jan 17 11:57:14.459953 systemd[1]: cri-containerd-f001f1b0008809a4173151ec3bc7b53f65a36d11aa77882d63bdb5f68fedd2cf.scope: Deactivated successfully. Jan 17 11:57:14.482334 containerd[1440]: time="2025-01-17T11:57:14.482245322Z" level=info msg="shim disconnected" id=f001f1b0008809a4173151ec3bc7b53f65a36d11aa77882d63bdb5f68fedd2cf namespace=k8s.io Jan 17 11:57:14.482334 containerd[1440]: time="2025-01-17T11:57:14.482311323Z" level=warning msg="cleaning up after shim disconnected" id=f001f1b0008809a4173151ec3bc7b53f65a36d11aa77882d63bdb5f68fedd2cf namespace=k8s.io Jan 17 11:57:14.482334 containerd[1440]: time="2025-01-17T11:57:14.482319803Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 11:57:15.196455 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f001f1b0008809a4173151ec3bc7b53f65a36d11aa77882d63bdb5f68fedd2cf-rootfs.mount: Deactivated successfully. 
Jan 17 11:57:15.372208 kubelet[2547]: E0117 11:57:15.372179 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:57:15.374407 containerd[1440]: time="2025-01-17T11:57:15.374199136Z" level=info msg="CreateContainer within sandbox \"9539147d5b6d119a6d01d178fda0305ae656247ef98223e3e4641c3a72594f12\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 11:57:15.386295 containerd[1440]: time="2025-01-17T11:57:15.386229695Z" level=info msg="CreateContainer within sandbox \"9539147d5b6d119a6d01d178fda0305ae656247ef98223e3e4641c3a72594f12\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2de6c83bfbe38baddb66eb1f0498bca8de07c5365cd8fe5db607a97b1ebe5cab\"" Jan 17 11:57:15.387600 containerd[1440]: time="2025-01-17T11:57:15.387107793Z" level=info msg="StartContainer for \"2de6c83bfbe38baddb66eb1f0498bca8de07c5365cd8fe5db607a97b1ebe5cab\"" Jan 17 11:57:15.407083 systemd[1]: run-containerd-runc-k8s.io-2de6c83bfbe38baddb66eb1f0498bca8de07c5365cd8fe5db607a97b1ebe5cab-runc.vNJcxH.mount: Deactivated successfully. Jan 17 11:57:15.419081 systemd[1]: Started cri-containerd-2de6c83bfbe38baddb66eb1f0498bca8de07c5365cd8fe5db607a97b1ebe5cab.scope - libcontainer container 2de6c83bfbe38baddb66eb1f0498bca8de07c5365cd8fe5db607a97b1ebe5cab. Jan 17 11:57:15.439133 systemd[1]: cri-containerd-2de6c83bfbe38baddb66eb1f0498bca8de07c5365cd8fe5db607a97b1ebe5cab.scope: Deactivated successfully. Jan 17 11:57:15.442237 containerd[1440]: time="2025-01-17T11:57:15.442198128Z" level=info msg="StartContainer for \"2de6c83bfbe38baddb66eb1f0498bca8de07c5365cd8fe5db607a97b1ebe5cab\" returns successfully" Jan 17 11:57:15.461138 containerd[1440]: time="2025-01-17T11:57:15.460886779Z" level=info msg="shim disconnected" id=2de6c83bfbe38baddb66eb1f0498bca8de07c5365cd8fe5db607a97b1ebe5cab namespace=k8s.io Jan 17 11:57:15.461138 containerd[1440]: time="2025-01-17T11:57:15.460952861Z" level=warning msg="cleaning up after shim disconnected" id=2de6c83bfbe38baddb66eb1f0498bca8de07c5365cd8fe5db607a97b1ebe5cab namespace=k8s.io Jan 17 11:57:15.461138 containerd[1440]: time="2025-01-17T11:57:15.460961741Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 11:57:16.196527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2de6c83bfbe38baddb66eb1f0498bca8de07c5365cd8fe5db607a97b1ebe5cab-rootfs.mount: Deactivated successfully. Jan 17 11:57:16.376223 kubelet[2547]: E0117 11:57:16.376175 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:57:16.378955 containerd[1440]: time="2025-01-17T11:57:16.378906348Z" level=info msg="CreateContainer within sandbox \"9539147d5b6d119a6d01d178fda0305ae656247ef98223e3e4641c3a72594f12\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 11:57:16.394823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2885562405.mount: Deactivated successfully. 
Jan 17 11:57:16.401477 containerd[1440]: time="2025-01-17T11:57:16.401429663Z" level=info msg="CreateContainer within sandbox \"9539147d5b6d119a6d01d178fda0305ae656247ef98223e3e4641c3a72594f12\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2cc4aa45ed679bf6ea2e177dc4c12397c07350e9df69fe807fa38a74ddf2c49c\"" Jan 17 11:57:16.402155 containerd[1440]: time="2025-01-17T11:57:16.402115477Z" level=info msg="StartContainer for \"2cc4aa45ed679bf6ea2e177dc4c12397c07350e9df69fe807fa38a74ddf2c49c\"" Jan 17 11:57:16.433116 systemd[1]: Started cri-containerd-2cc4aa45ed679bf6ea2e177dc4c12397c07350e9df69fe807fa38a74ddf2c49c.scope - libcontainer container 2cc4aa45ed679bf6ea2e177dc4c12397c07350e9df69fe807fa38a74ddf2c49c. Jan 17 11:57:16.455680 containerd[1440]: time="2025-01-17T11:57:16.455571471Z" level=info msg="StartContainer for \"2cc4aa45ed679bf6ea2e177dc4c12397c07350e9df69fe807fa38a74ddf2c49c\" returns successfully" Jan 17 11:57:16.722952 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 17 11:57:17.381459 kubelet[2547]: E0117 11:57:17.381106 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:57:17.394525 kubelet[2547]: I0117 11:57:17.394467 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5q92k" podStartSLOduration=6.394452955 podStartE2EDuration="6.394452955s" podCreationTimestamp="2025-01-17 11:57:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 11:57:17.394387113 +0000 UTC m=+79.348159955" watchObservedRunningTime="2025-01-17 11:57:17.394452955 +0000 UTC m=+79.348225797" Jan 17 11:57:18.128144 kubelet[2547]: E0117 11:57:18.127713 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:57:18.382423 kubelet[2547]: E0117 11:57:18.382312 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:57:18.475044 systemd[1]: run-containerd-runc-k8s.io-2cc4aa45ed679bf6ea2e177dc4c12397c07350e9df69fe807fa38a74ddf2c49c-runc.z7bL7l.mount: Deactivated successfully. 
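
Between 11:57:12 and 11:57:16 the same pattern repeats four times: CreateContainer, StartContainer returns, the cri-containerd-<id>.scope deactivates, and a shim-disconnected cleanup follows. Those are cilium-5q92k's init containers running to completion one at a time before the long-lived cilium-agent starts. A sketch of that run-to-completion ordering; the step names come from the CreateContainer lines above, while the runner itself is invented:

// Run-to-completion ordering of the init containers logged above.
package main

import "fmt"

type step struct {
	name string
	run  func() error
}

func main() {
	noop := func() error { return nil } // stand-in for the real entrypoints
	chain := []step{
		{"mount-cgroup", noop},
		{"apply-sysctl-overwrites", noop},
		{"mount-bpf-fs", noop},
		{"clean-cilium-state", noop},
	}
	for _, s := range chain {
		if err := s.run(); err != nil {
			fmt.Printf("init container %q failed: %v\n", s.name, err)
			return
		}
		fmt.Printf("init container %q ran to completion\n", s.name)
	}
	fmt.Println(`long-lived container "cilium-agent" starts last`)
}
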
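The kubelet timestamp "2025-01-17 11:57:17.394452955 +0000 UTC m=+79.348159955" above is Go's time.Time string form: the m=+ suffix is the monotonic-clock reading in seconds, which in practice counts from process start (so this kubelet had been up roughly 79 s). Separately, podStartE2EDuration=6.394452955s is exactly watchObservedRunningTime (11:57:17.394452955) minus podCreationTimestamp (11:57:11). A short Go demonstration of the m=+ form:

// Demonstrates the "m=+<seconds>" suffix seen in the kubelet timestamps
// above: time.Now() carries a wall-clock reading plus a monotonic
// reading, and Time.String() prints the latter as m=±<seconds>.
package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	fmt.Println(start) // e.g. "2025-01-17 11:57:17.394452955 +0000 UTC m=+0.000012345"
	time.Sleep(50 * time.Millisecond)
	// Durations between two monotonic-bearing Times use the monotonic
	// readings, so they are immune to wall-clock steps (NTP and the like).
	fmt.Println(time.Since(start)) // ~50ms
}
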
Jan 17 11:57:19.384222 kubelet[2547]: E0117 11:57:19.384187 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:57:19.568707 systemd-networkd[1373]: lxc_health: Link UP Jan 17 11:57:19.572694 systemd-networkd[1373]: lxc_health: Gained carrier Jan 17 11:57:20.385662 kubelet[2547]: E0117 11:57:20.385601 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:57:20.783079 systemd-networkd[1373]: lxc_health: Gained IPv6LL Jan 17 11:57:21.389282 kubelet[2547]: E0117 11:57:21.389251 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:57:22.390591 kubelet[2547]: E0117 11:57:22.390546 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:57:24.126498 kubelet[2547]: E0117 11:57:24.126030 2547 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 11:57:24.871368 sshd[4351]: pam_unix(sshd:session): session closed for user core Jan 17 11:57:24.875542 systemd[1]: sshd@24-10.0.0.10:22-10.0.0.1:34432.service: Deactivated successfully. Jan 17 11:57:24.877268 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 11:57:24.878029 systemd-logind[1413]: Session 25 logged out. Waiting for processes to exit. Jan 17 11:57:24.879222 systemd-logind[1413]: Removed session 25.