Feb 13 19:09:37.938947 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 19:09:37.938969 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 17:46:24 -00 2025
Feb 13 19:09:37.938978 kernel: KASLR enabled
Feb 13 19:09:37.938984 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:09:37.938990 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Feb 13 19:09:37.938995 kernel: random: crng init done
Feb 13 19:09:37.939002 kernel: secureboot: Secure boot disabled
Feb 13 19:09:37.939008 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:09:37.939014 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 19:09:37.939021 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 19:09:37.939027 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:09:37.939033 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:09:37.939039 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:09:37.939046 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:09:37.939053 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:09:37.939060 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:09:37.939066 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:09:37.939073 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:09:37.939079 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:09:37.939085 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 19:09:37.939091 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:09:37.939097 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:09:37.939103 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Feb 13 19:09:37.939110 kernel: Zone ranges:
Feb 13 19:09:37.939115 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:09:37.939123 kernel: DMA32 empty
Feb 13 19:09:37.939138 kernel: Normal empty
Feb 13 19:09:37.939144 kernel: Movable zone start for each node
Feb 13 19:09:37.939184 kernel: Early memory node ranges
Feb 13 19:09:37.939191 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 13 19:09:37.939197 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 19:09:37.939203 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 19:09:37.939209 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 19:09:37.939215 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 19:09:37.939221 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 19:09:37.939227 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 19:09:37.939233 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:09:37.939242 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 19:09:37.939248 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:09:37.939255 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 19:09:37.939263 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:09:37.939270 kernel: psci: Trusted OS migration not required
Feb 13 19:09:37.939277 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:09:37.939284 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 19:09:37.939291 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:09:37.939305 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:09:37.939312 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 19:09:37.939318 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:09:37.939325 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:09:37.939332 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 19:09:37.939338 kernel: CPU features: detected: Spectre-v4
Feb 13 19:09:37.939345 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:09:37.939351 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 19:09:37.939360 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 19:09:37.939366 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 19:09:37.939373 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 19:09:37.939379 kernel: alternatives: applying boot alternatives
Feb 13 19:09:37.939390 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a
Feb 13 19:09:37.939397 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:09:37.939404 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:09:37.939410 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:09:37.939417 kernel: Fallback order for Node 0: 0
Feb 13 19:09:37.939423 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 19:09:37.939429 kernel: Policy zone: DMA
Feb 13 19:09:37.939437 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:09:37.939444 kernel: software IO TLB: area num 4.
Feb 13 19:09:37.939450 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 19:09:37.939457 kernel: Memory: 2386324K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39680K init, 897K bss, 185964K reserved, 0K cma-reserved)
Feb 13 19:09:37.939464 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:09:37.939470 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:09:37.939477 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:09:37.939484 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:09:37.939491 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:09:37.939497 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:09:37.939504 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
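The kernel command line recorded at 19:09:37.939390 wires up the rest of this boot: root=LABEL=ROOT selects the root filesystem, mount.usr/verity.usr/verity.usrhash describe the dm-verity-protected /usr partition, and flatcar.first_boot=detected later triggers Ignition. As a minimal illustration (not part of the log; the helper name is invented for the sketch), such a space-separated key=value line can be parsed the way userspace tools read /proc/cmdline:

    # Sketch: parse a kernel command line into key/value pairs.
    # Flag-style words without "=" map to True.
    cmdline = (
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
        "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw "
        "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 "
        "flatcar.first_boot=detected acpi=force "
        "verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a"
    )

    def parse_cmdline(line: str) -> dict:
        params = {}
        for token in line.split():
            key, sep, value = token.partition("=")
            # "verity.usr=PARTUUID=..." keeps "PARTUUID=..." intact as the value
            params[key] = value if sep else True
        return params

    params = parse_cmdline(cmdline)
    print(params["root"])            # LABEL=ROOT
    print(params["verity.usrhash"])  # hash dm-verity checks /usr against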
Feb 13 19:09:37.939511 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:09:37.939518 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:09:37.939525 kernel: GICv3: 256 SPIs implemented
Feb 13 19:09:37.939531 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:09:37.939537 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:09:37.939544 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 19:09:37.939550 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 19:09:37.939557 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 19:09:37.939563 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:09:37.939570 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:09:37.939576 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 19:09:37.939583 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 19:09:37.939590 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:09:37.939597 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:09:37.939604 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 19:09:37.939613 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 19:09:37.939622 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 19:09:37.939630 kernel: arm-pv: using stolen time PV
Feb 13 19:09:37.939638 kernel: Console: colour dummy device 80x25
Feb 13 19:09:37.939644 kernel: ACPI: Core revision 20230628
Feb 13 19:09:37.939651 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 19:09:37.939658 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:09:37.939666 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:09:37.939672 kernel: landlock: Up and running.
Feb 13 19:09:37.939679 kernel: SELinux: Initializing.
Feb 13 19:09:37.939689 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:09:37.939696 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:09:37.939703 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:09:37.939710 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:09:37.939717 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:09:37.939724 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:09:37.939730 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 19:09:37.939739 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 19:09:37.939745 kernel: Remapping and enabling EFI services.
Feb 13 19:09:37.939752 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:09:37.939759 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:09:37.939765 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 19:09:37.939772 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 19:09:37.939779 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:09:37.939786 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 19:09:37.939792 kernel: Detected PIPT I-cache on CPU2
Feb 13 19:09:37.939800 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 19:09:37.939807 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 19:09:37.939818 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:09:37.939827 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 19:09:37.939833 kernel: Detected PIPT I-cache on CPU3
Feb 13 19:09:37.939841 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 19:09:37.939848 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 19:09:37.939854 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:09:37.939862 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 19:09:37.939870 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:09:37.939877 kernel: SMP: Total of 4 processors activated.
Feb 13 19:09:37.939884 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:09:37.939891 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 19:09:37.939898 kernel: CPU features: detected: Common not Private translations
Feb 13 19:09:37.939905 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:09:37.939912 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 19:09:37.939919 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 19:09:37.939927 kernel: CPU features: detected: LSE atomic instructions
Feb 13 19:09:37.939934 kernel: CPU features: detected: Privileged Access Never
Feb 13 19:09:37.939941 kernel: CPU features: detected: RAS Extension Support
Feb 13 19:09:37.939948 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 19:09:37.939956 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:09:37.939963 kernel: alternatives: applying system-wide alternatives
Feb 13 19:09:37.939970 kernel: devtmpfs: initialized
Feb 13 19:09:37.939977 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:09:37.939984 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:09:37.939993 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:09:37.940000 kernel: SMBIOS 3.0.0 present.
Feb 13 19:09:37.940007 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 19:09:37.940014 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:09:37.940022 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:09:37.940029 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:09:37.940036 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:09:37.940043 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:09:37.940051 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Feb 13 19:09:37.940059 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:09:37.940066 kernel: cpuidle: using governor menu
Feb 13 19:09:37.940073 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:09:37.940081 kernel: ASID allocator initialised with 32768 entries
Feb 13 19:09:37.940088 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:09:37.940095 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:09:37.940102 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 19:09:37.940109 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 19:09:37.940116 kernel: Modules: 508960 pages in range for PLT usage
Feb 13 19:09:37.940125 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:09:37.940132 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:09:37.940139 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:09:37.940171 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:09:37.940180 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:09:37.940187 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:09:37.940195 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:09:37.940202 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:09:37.940209 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:09:37.940218 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:09:37.940225 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:09:37.940233 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:09:37.940240 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:09:37.940247 kernel: ACPI: Interpreter enabled
Feb 13 19:09:37.940254 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:09:37.940262 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:09:37.940269 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 19:09:37.940276 kernel: printk: console [ttyAMA0] enabled
Feb 13 19:09:37.940285 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:09:37.940429 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:09:37.940504 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:09:37.940570 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:09:37.940635 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 19:09:37.940699 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 19:09:37.940709 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 19:09:37.940719 kernel: PCI host bridge to bus 0000:00
Feb 13 19:09:37.940812 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 19:09:37.940879 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:09:37.940939 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 19:09:37.940998 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:09:37.941080 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 19:09:37.941184 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:09:37.941259 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 19:09:37.941347 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 19:09:37.941420 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:09:37.941491 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:09:37.941560 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 19:09:37.941638 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 19:09:37.941701 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 19:09:37.941785 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:09:37.941865 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 19:09:37.941876 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:09:37.941889 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:09:37.941896 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:09:37.941903 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:09:37.941914 kernel: iommu: Default domain type: Translated
Feb 13 19:09:37.941922 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:09:37.941931 kernel: efivars: Registered efivars operations
Feb 13 19:09:37.941938 kernel: vgaarb: loaded
Feb 13 19:09:37.941946 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:09:37.941961 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:09:37.941968 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:09:37.941976 kernel: pnp: PnP ACPI init
Feb 13 19:09:37.942056 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 19:09:37.942067 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:09:37.942076 kernel: NET: Registered PF_INET protocol family
Feb 13 19:09:37.942083 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:09:37.942090 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:09:37.942098 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:09:37.942105 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:09:37.942112 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:09:37.942119 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:09:37.942126 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:09:37.942134 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:09:37.942142 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:09:37.942163 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:09:37.942170 kernel: kvm [1]: HYP mode not available
Feb 13 19:09:37.942178 kernel: Initialise system trusted keyrings
Feb 13 19:09:37.942185 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:09:37.942192 kernel: Key type asymmetric registered
Feb 13 19:09:37.942199 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:09:37.942206 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:09:37.942213 kernel: io scheduler mq-deadline registered
Feb 13 19:09:37.942222 kernel: io scheduler kyber registered
Feb 13 19:09:37.942229 kernel: io scheduler bfq registered
Feb 13 19:09:37.942236 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:09:37.942243 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:09:37.942251 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 19:09:37.942331 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 19:09:37.942341 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:09:37.942348 kernel: thunder_xcv, ver 1.0
Feb 13 19:09:37.942355 kernel: thunder_bgx, ver 1.0
Feb 13 19:09:37.942364 kernel: nicpf, ver 1.0
Feb 13 19:09:37.942371 kernel: nicvf, ver 1.0
Feb 13 19:09:37.942448 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:09:37.942510 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:09:37 UTC (1739473777)
Feb 13 19:09:37.942519 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:09:37.942527 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 19:09:37.942534 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:09:37.942541 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:09:37.942550 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:09:37.942558 kernel: Segment Routing with IPv6
Feb 13 19:09:37.942565 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:09:37.942572 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:09:37.942579 kernel: Key type dns_resolver registered
Feb 13 19:09:37.942586 kernel: registered taskstats version 1
Feb 13 19:09:37.942593 kernel: Loading compiled-in X.509 certificates
Feb 13 19:09:37.942600 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 916055ad16f0ba578cce640a9ac58627fd43c936'
Feb 13 19:09:37.942607 kernel: Key type .fscrypt registered
Feb 13 19:09:37.942615 kernel: Key type fscrypt-provisioning registered
Feb 13 19:09:37.942623 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:09:37.942630 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:09:37.942641 kernel: ima: No architecture policies found
Feb 13 19:09:37.942650 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:09:37.942657 kernel: clk: Disabling unused clocks
Feb 13 19:09:37.942667 kernel: Freeing unused kernel memory: 39680K
Feb 13 19:09:37.942674 kernel: Run /init as init process
Feb 13 19:09:37.942681 kernel: with arguments:
Feb 13 19:09:37.942690 kernel: /init
Feb 13 19:09:37.942697 kernel: with environment:
Feb 13 19:09:37.942703 kernel: HOME=/
Feb 13 19:09:37.942710 kernel: TERM=linux
Feb 13 19:09:37.942717 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:09:37.942726 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:09:37.942735 systemd[1]: Detected virtualization kvm.
Feb 13 19:09:37.942743 systemd[1]: Detected architecture arm64.
Feb 13 19:09:37.942752 systemd[1]: Running in initrd.
Feb 13 19:09:37.942759 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:09:37.942767 systemd[1]: Hostname set to .
Feb 13 19:09:37.942774 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:09:37.942782 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:09:37.942790 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:09:37.942797 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:09:37.942805 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:09:37.942815 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:09:37.942822 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:09:37.942830 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:09:37.942839 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:09:37.942848 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:09:37.942856 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:09:37.942864 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:09:37.942873 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:09:37.942881 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:09:37.942889 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:09:37.942897 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:09:37.942905 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:09:37.942913 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:09:37.942921 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:09:37.942929 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:09:37.942938 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:09:37.942946 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:09:37.942954 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:09:37.942962 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:09:37.942970 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:09:37.942977 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:09:37.942988 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:09:37.942996 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:09:37.943007 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:09:37.943016 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:09:37.943027 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:09:37.943037 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:09:37.943046 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:09:37.943054 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:09:37.943065 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:09:37.943092 systemd-journald[239]: Collecting audit messages is disabled.
Feb 13 19:09:37.943111 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:09:37.943121 systemd-journald[239]: Journal started
Feb 13 19:09:37.943140 systemd-journald[239]: Runtime Journal (/run/log/journal/1eafcee237f24169b52d14418b3b6219) is 5.9M, max 47.3M, 41.4M free.
Feb 13 19:09:37.941931 systemd-modules-load[240]: Inserted module 'overlay'
Feb 13 19:09:37.946658 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:09:37.947430 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:09:37.950659 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:09:37.953289 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:09:37.955287 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:09:37.962206 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:09:37.963758 systemd-modules-load[240]: Inserted module 'br_netfilter'
Feb 13 19:09:37.964731 kernel: Bridge firewalling registered
Feb 13 19:09:37.964597 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:09:37.967304 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:09:37.968440 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:09:37.970461 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:09:37.979531 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:09:37.981965 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:09:37.984591 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:09:37.986560 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:09:38.000871 dracut-cmdline[278]: dracut-dracut-053
Feb 13 19:09:38.003734 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a
Feb 13 19:09:38.017467 systemd-resolved[275]: Positive Trust Anchors:
Feb 13 19:09:38.019681 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:09:38.019718 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:09:38.024544 systemd-resolved[275]: Defaulting to hostname 'linux'.
Feb 13 19:09:38.025477 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:09:38.028308 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:09:38.083175 kernel: SCSI subsystem initialized
Feb 13 19:09:38.088169 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:09:38.095187 kernel: iscsi: registered transport (tcp)
Feb 13 19:09:38.111174 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:09:38.111203 kernel: QLogic iSCSI HBA Driver
Feb 13 19:09:38.156967 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:09:38.163291 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:09:38.183841 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:09:38.183892 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:09:38.183920 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:09:38.246194 kernel: raid6: neonx8 gen() 11266 MB/s
Feb 13 19:09:38.263189 kernel: raid6: neonx4 gen() 14674 MB/s
Feb 13 19:09:38.280177 kernel: raid6: neonx2 gen() 13209 MB/s
Feb 13 19:09:38.297171 kernel: raid6: neonx1 gen() 10483 MB/s
Feb 13 19:09:38.314182 kernel: raid6: int64x8 gen() 6004 MB/s
Feb 13 19:09:38.331207 kernel: raid6: int64x4 gen() 7334 MB/s
Feb 13 19:09:38.348209 kernel: raid6: int64x2 gen() 5683 MB/s
Feb 13 19:09:38.365266 kernel: raid6: int64x1 gen() 5046 MB/s
Feb 13 19:09:38.365281 kernel: raid6: using algorithm neonx4 gen() 14674 MB/s
Feb 13 19:09:38.383233 kernel: raid6: .... xor() 12087 MB/s, rmw enabled
Feb 13 19:09:38.383245 kernel: raid6: using neon recovery algorithm
Feb 13 19:09:38.388169 kernel: xor: measuring software checksum speed
Feb 13 19:09:38.389446 kernel: 8regs : 17440 MB/sec
Feb 13 19:09:38.389460 kernel: 32regs : 19669 MB/sec
Feb 13 19:09:38.390747 kernel: arm64_neon : 26945 MB/sec
Feb 13 19:09:38.390768 kernel: xor: using function: arm64_neon (26945 MB/sec)
Feb 13 19:09:38.442183 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:09:38.452788 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
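The raid6 and xor lines above show the kernel benchmarking each candidate implementation and keeping the fastest (neonx4 for raid6 gen, arm64_neon for xor). A toy Python sketch (not kernel code) of that benchmark-and-pick pattern, with made-up pure-Python candidates standing in for the kernel's SIMD routines:

    # Sketch: time each candidate over the same input, keep the fastest.
    import time

    def measure(fn, data, rounds=10):
        start = time.perf_counter()
        for _ in range(rounds):
            fn(data)
        elapsed = time.perf_counter() - start
        return len(data) * rounds / elapsed / 1e6  # throughput in MB/s

    candidates = {
        "bytes-xor": lambda d: bytes(b ^ 0xFF for b in d),
        "int-xor": lambda d: (int.from_bytes(d, "little")
                              ^ ((1 << (8 * len(d))) - 1)).to_bytes(len(d), "little"),
    }

    data = bytes(range(256)) * 1024  # 256 KiB of test input
    results = {name: measure(fn, data) for name, fn in candidates.items()}
    best = max(results, key=results.get)
    print(f"xor: using function: {best} ({results[best]:.0f} MB/sec)")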
Feb 13 19:09:38.467364 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:09:38.479067 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Feb 13 19:09:38.482259 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:09:38.485678 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:09:38.499782 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
Feb 13 19:09:38.527225 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:09:38.540351 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:09:38.579648 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:09:38.589403 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:09:38.599569 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:09:38.601854 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:09:38.603366 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:09:38.605436 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:09:38.611443 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:09:38.623723 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:09:38.630642 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 19:09:38.647696 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 19:09:38.647804 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:09:38.647816 kernel: GPT:9289727 != 19775487
Feb 13 19:09:38.647837 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:09:38.647855 kernel: GPT:9289727 != 19775487
Feb 13 19:09:38.647864 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:09:38.647873 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:09:38.632318 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:09:38.632426 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:09:38.640718 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:09:38.647450 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:09:38.648262 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:09:38.650193 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:09:38.662388 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:09:38.670914 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (516)
Feb 13 19:09:38.673168 kernel: BTRFS: device fsid 44fbcf53-fa5f-4fd4-b434-f067731b9a44 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (512)
Feb 13 19:09:38.679303 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 19:09:38.680781 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:09:38.686633 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 19:09:38.693828 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
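The GPT warnings above (19:09:38.647804 onward) say the backup GPT header claims to live at LBA 9289727, while on a 19775488-sector disk it belongs in the last logical block, LBA 19775487. That is the usual sign of a disk image built for a smaller disk and then attached to a larger virtual one; the disk-uuid.service run just below rewrites the headers ("Primary Header is updated ... Secondary Header is updated"). A small sketch (not from the log) of the arithmetic behind the check:

    # Sketch: the consistency check behind "GPT:9289727 != 19775487".
    # The backup GPT header must sit in the disk's last logical block.
    total_sectors = 19775488     # from: [vda] 19775488 512-byte logical blocks
    backup_header_lba = 9289727  # where the on-disk GPT says its backup header is

    expected_lba = total_sectors - 1  # last LBA of the disk
    if backup_header_lba != expected_lba:
        print(f"GPT:{backup_header_lba} != {expected_lba}")
        print("Alternate GPT header not at the end of the disk; "
              "image was likely built for a smaller disk and then grown.")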
Feb 13 19:09:38.697701 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 19:09:38.698880 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 19:09:38.709351 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:09:38.711067 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:09:38.719160 disk-uuid[553]: Primary Header is updated.
Feb 13 19:09:38.719160 disk-uuid[553]: Secondary Entries is updated.
Feb 13 19:09:38.719160 disk-uuid[553]: Secondary Header is updated.
Feb 13 19:09:38.727173 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:09:38.731179 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:09:38.731419 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:09:39.732190 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:09:39.733174 disk-uuid[554]: The operation has completed successfully.
Feb 13 19:09:39.755980 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:09:39.756099 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:09:39.770312 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:09:39.774135 sh[574]: Success
Feb 13 19:09:39.787183 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:09:39.820522 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:09:39.822246 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:09:39.823214 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:09:39.834614 kernel: BTRFS info (device dm-0): first mount of filesystem 44fbcf53-fa5f-4fd4-b434-f067731b9a44
Feb 13 19:09:39.834647 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:09:39.835779 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:09:39.835793 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:09:39.837170 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:09:39.841554 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:09:39.842501 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:09:39.843158 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:09:39.846334 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:09:39.855686 kernel: BTRFS info (device vda6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:09:39.855721 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:09:39.855731 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:09:39.858309 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:09:39.864796 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:09:39.867170 kernel: BTRFS info (device vda6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:09:39.872197 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:09:39.878301 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:09:39.940228 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:09:39.953349 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:09:39.966883 ignition[675]: Ignition 2.20.0
Feb 13 19:09:39.966892 ignition[675]: Stage: fetch-offline
Feb 13 19:09:39.966923 ignition[675]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:09:39.966932 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:09:39.967081 ignition[675]: parsed url from cmdline: ""
Feb 13 19:09:39.967084 ignition[675]: no config URL provided
Feb 13 19:09:39.967088 ignition[675]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:09:39.967095 ignition[675]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:09:39.967120 ignition[675]: op(1): [started] loading QEMU firmware config module
Feb 13 19:09:39.967126 ignition[675]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 19:09:39.976183 ignition[675]: op(1): [finished] loading QEMU firmware config module
Feb 13 19:09:39.983979 systemd-networkd[763]: lo: Link UP
Feb 13 19:09:39.983988 systemd-networkd[763]: lo: Gained carrier
Feb 13 19:09:39.986425 systemd-networkd[763]: Enumeration completed
Feb 13 19:09:39.986529 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:09:39.986841 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:09:39.986844 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:09:39.987728 systemd-networkd[763]: eth0: Link UP
Feb 13 19:09:39.987731 systemd-networkd[763]: eth0: Gained carrier
Feb 13 19:09:39.987737 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:09:39.988062 systemd[1]: Reached target network.target - Network.
Feb 13 19:09:40.022220 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:09:40.023211 ignition[675]: parsing config with SHA512: 692cb3340ab8d5eab5bd7f71c45fea5cbd2c42ba626c0dc405cd223b068478df7c3943cf0ac06b82e5330261a35d3e7b88e5703e00254cbfaa0f76951ced3c69
Feb 13 19:09:40.029659 unknown[675]: fetched base config from "system"
Feb 13 19:09:40.030270 unknown[675]: fetched user config from "qemu"
Feb 13 19:09:40.030741 ignition[675]: fetch-offline: fetch-offline passed
Feb 13 19:09:40.030817 ignition[675]: Ignition finished successfully
Feb 13 19:09:40.032057 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:09:40.035425 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 19:09:40.054334 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:09:40.064180 ignition[769]: Ignition 2.20.0
Feb 13 19:09:40.064191 ignition[769]: Stage: kargs
Feb 13 19:09:40.064354 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:09:40.064363 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:09:40.067613 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
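In the fetch-offline stage above, Ignition finds no config URL on the command line and falls back to the QEMU firmware config device, which is why op(1) runs "modprobe" "qemu_fw_cfg"; the line at 19:09:40.023211 then logs the SHA512 of the config it fetched. A hedged sketch of recomputing that digest from userspace — the sysfs path is an assumption about where the qemu_fw_cfg module typically exposes the blob, not something shown in this log:

    # Sketch: hash the Ignition config blob delivered via QEMU fw_cfg.
    import hashlib
    from pathlib import Path

    # Assumed location once qemu_fw_cfg is loaded; verify on your system.
    FW_CFG_BLOB = Path("/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw")

    def config_sha512(path: Path = FW_CFG_BLOB) -> str:
        return hashlib.sha512(path.read_bytes()).hexdigest()

    if FW_CFG_BLOB.exists():
        print("parsing config with SHA512:", config_sha512())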
Feb 13 19:09:40.065212 ignition[769]: kargs: kargs passed
Feb 13 19:09:40.069871 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:09:40.065253 ignition[769]: Ignition finished successfully
Feb 13 19:09:40.082797 ignition[778]: Ignition 2.20.0
Feb 13 19:09:40.082807 ignition[778]: Stage: disks
Feb 13 19:09:40.082962 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:09:40.082971 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:09:40.085192 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:09:40.083850 ignition[778]: disks: disks passed
Feb 13 19:09:40.087056 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:09:40.083892 ignition[778]: Ignition finished successfully
Feb 13 19:09:40.088721 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:09:40.090297 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:09:40.092077 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:09:40.093721 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:09:40.107317 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:09:40.122946 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:09:40.168001 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:09:40.177322 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:09:40.217128 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:09:40.218625 kernel: EXT4-fs (vda9): mounted filesystem e24df12d-6575-4a90-bef9-33573b9d63e7 r/w with ordered data mode. Quota mode: none.
Feb 13 19:09:40.218378 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:09:40.237313 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:09:40.239851 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:09:40.240998 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:09:40.241042 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:09:40.241063 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:09:40.247204 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:09:40.251250 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (797)
Feb 13 19:09:40.251274 kernel: BTRFS info (device vda6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:09:40.251285 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:09:40.250368 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:09:40.255653 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:09:40.255676 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:09:40.257248 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:09:40.291586 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:09:40.295869 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:09:40.299796 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:09:40.302618 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:09:40.374786 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:09:40.388261 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:09:40.390659 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:09:40.395182 kernel: BTRFS info (device vda6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:09:40.408404 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:09:40.413201 ignition[911]: INFO : Ignition 2.20.0
Feb 13 19:09:40.413201 ignition[911]: INFO : Stage: mount
Feb 13 19:09:40.415193 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:09:40.415193 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:09:40.415193 ignition[911]: INFO : mount: mount passed
Feb 13 19:09:40.415193 ignition[911]: INFO : Ignition finished successfully
Feb 13 19:09:40.415662 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:09:40.428285 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:09:40.833711 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:09:40.842322 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:09:40.847170 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (924)
Feb 13 19:09:40.849584 kernel: BTRFS info (device vda6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:09:40.849606 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:09:40.850263 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:09:40.852165 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:09:40.853286 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:09:40.867929 ignition[941]: INFO : Ignition 2.20.0
Feb 13 19:09:40.867929 ignition[941]: INFO : Stage: files
Feb 13 19:09:40.869552 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:09:40.869552 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:09:40.869552 ignition[941]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:09:40.873012 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:09:40.873012 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:09:40.873012 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:09:40.873012 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:09:40.873012 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:09:40.873012 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Feb 13 19:09:40.871985 unknown[941]: wrote ssh authorized keys file for user: core
Feb 13 19:09:40.882047 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Feb 13 19:09:40.915265 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:09:41.227871 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Feb 13 19:09:41.227871 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:09:41.231576 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 19:09:41.278552 systemd-networkd[763]: eth0: Gained IPv6LL
Feb 13 19:09:41.560457 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 19:09:41.631208 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:09:41.633083 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:09:41.633083 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:09:41.633083 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:09:41.633083 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:09:41.633083 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:09:41.633083 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:09:41.633083 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:09:41.633083 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
"/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:09:41.633083 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:09:41.633083 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:09:41.633083 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 19:09:41.633083 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 19:09:41.633083 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 19:09:41.633083 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Feb 13 19:09:41.852120 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 19:09:42.089067 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 19:09:42.089067 ignition[941]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Feb 13 19:09:42.092774 ignition[941]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:09:42.092774 ignition[941]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:09:42.092774 ignition[941]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Feb 13 19:09:42.092774 ignition[941]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Feb 13 19:09:42.092774 ignition[941]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:09:42.092774 ignition[941]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:09:42.092774 ignition[941]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Feb 13 19:09:42.092774 ignition[941]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 19:09:42.113070 ignition[941]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:09:42.116577 ignition[941]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:09:42.118093 ignition[941]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 19:09:42.118093 ignition[941]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Feb 13 19:09:42.118093 ignition[941]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 19:09:42.118093 ignition[941]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:09:42.118093 
Feb 13 19:09:42.131387 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:09:42.133954 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:09:42.136009 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:09:42.136085 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:09:42.141215 initrd-setup-root-after-ignition[969]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 19:09:42.144739 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:09:42.146301 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:09:42.147761 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:09:42.149131 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:09:42.150491 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:09:42.162357 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:09:42.180899 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:09:42.181003 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:09:42.183179 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:09:42.185012 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:09:42.186808 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:09:42.187538 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:09:42.202120 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:09:42.204419 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:09:42.214688 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:09:42.215895 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:09:42.217918 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:09:42.219678 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:09:42.219786 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:09:42.222244 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:09:42.224215 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:09:42.225849 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:09:42.227629 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:09:42.229621 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:09:42.231540 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:09:42.233336 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:09:42.235249 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:09:42.237202 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:09:42.238925 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:09:42.240448 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:09:42.240572 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:09:42.242913 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:09:42.244878 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:09:42.246785 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:09:42.250240 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:09:42.252801 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:09:42.252923 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:09:42.255621 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:09:42.255738 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:09:42.257836 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:09:42.259532 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:09:42.265212 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:09:42.267790 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:09:42.268774 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:09:42.270312 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:09:42.270400 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:09:42.271908 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:09:42.271988 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:09:42.273533 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:09:42.273638 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:09:42.275404 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:09:42.275502 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:09:42.286301 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:09:42.287803 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:09:42.288819 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:09:42.288953 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:09:42.290883 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:09:42.290981 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:09:42.297020 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:09:42.297790 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:09:42.300078 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Feb 13 19:09:42.301643 ignition[995]: INFO : Ignition 2.20.0 Feb 13 19:09:42.301643 ignition[995]: INFO : Stage: umount Feb 13 19:09:42.301643 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:09:42.301643 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:09:42.305278 ignition[995]: INFO : umount: umount passed Feb 13 19:09:42.305278 ignition[995]: INFO : Ignition finished successfully Feb 13 19:09:42.304268 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:09:42.304371 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:09:42.306445 systemd[1]: Stopped target network.target - Network. Feb 13 19:09:42.307802 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:09:42.307860 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:09:42.309631 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:09:42.309678 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:09:42.311357 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:09:42.311396 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:09:42.313060 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:09:42.313102 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:09:42.314848 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:09:42.316622 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:09:42.325192 systemd-networkd[763]: eth0: DHCPv6 lease lost Feb 13 19:09:42.326408 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:09:42.326510 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:09:42.329239 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:09:42.329377 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:09:42.333963 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:09:42.334013 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:09:42.349262 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:09:42.350175 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:09:42.350246 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:09:42.352347 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:09:42.352393 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:09:42.354222 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:09:42.354267 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:09:42.356554 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:09:42.356599 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:09:42.358508 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:09:42.365646 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:09:42.365744 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:09:42.367758 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:09:42.367830 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Feb 13 19:09:42.370655 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:09:42.370740 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:09:42.381856 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:09:42.381992 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:09:42.384262 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:09:42.384312 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:09:42.386185 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:09:42.386218 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:09:42.388041 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:09:42.388086 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:09:42.390742 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:09:42.390786 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:09:42.393368 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:09:42.393413 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:09:42.407348 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:09:42.408420 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:09:42.408477 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:09:42.410564 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 19:09:42.410608 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:09:42.412573 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:09:42.412614 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:09:42.414741 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:09:42.414785 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:09:42.416934 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:09:42.418200 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:09:42.420355 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:09:42.422722 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:09:42.431274 systemd[1]: Switching root. Feb 13 19:09:42.465141 systemd-journald[239]: Journal stopped Feb 13 19:09:43.183633 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). 
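[Note: the Ignition files stage that completed above is driven by a declarative JSON config. A minimal sketch of a spec-3.x config that would produce similar operations (a file write, a remote sysext fetch, a symlink into /etc/extensions, and unit presets) follows. Only the paths and the fetch URL appear in the log; the spec version, update.conf contents, and elided unit bodies are illustrative assumptions, not the actual config used on this host.]

{
  "ignition": { "version": "3.4.0" },
  "storage": {
    "files": [
      { "path": "/etc/flatcar/update.conf",
        "mode": 420,
        "contents": { "source": "data:,GROUP%3Dstable%0A" } },
      { "path": "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw",
        "mode": 420,
        "contents": { "source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw" } }
    ],
    "links": [
      { "path": "/etc/extensions/kubernetes.raw",
        "target": "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw",
        "hard": false }
    ]
  },
  "systemd": {
    "units": [
      { "name": "prepare-helm.service", "enabled": true },
      { "name": "coreos-metadata.service", "enabled": false }
    ]
  }
}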
Feb 13 19:09:43.183690 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:09:43.183702 kernel: SELinux: policy capability open_perms=1 Feb 13 19:09:43.183711 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:09:43.183720 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:09:43.183729 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:09:43.183738 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:09:43.183752 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:09:43.183765 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:09:43.183777 kernel: audit: type=1403 audit(1739473782.623:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:09:43.183787 systemd[1]: Successfully loaded SELinux policy in 31.098ms. Feb 13 19:09:43.183808 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.046ms. Feb 13 19:09:43.183820 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:09:43.183832 systemd[1]: Detected virtualization kvm. Feb 13 19:09:43.183842 systemd[1]: Detected architecture arm64. Feb 13 19:09:43.183852 systemd[1]: Detected first boot. Feb 13 19:09:43.183862 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:09:43.183872 zram_generator::config[1040]: No configuration found. Feb 13 19:09:43.183885 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:09:43.183896 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:09:43.183906 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:09:43.183916 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:09:43.183926 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:09:43.183937 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:09:43.183947 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:09:43.183957 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:09:43.183969 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:09:43.183979 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:09:43.183989 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:09:43.184000 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:09:43.184009 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:09:43.184020 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:09:43.184031 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:09:43.184042 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:09:43.184053 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Feb 13 19:09:43.184064 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:09:43.184075 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 19:09:43.184084 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:09:43.184094 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:09:43.184104 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:09:43.184114 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:09:43.184124 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:09:43.184135 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:09:43.184169 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:09:43.184183 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:09:43.184193 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:09:43.184204 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:09:43.184214 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:09:43.184225 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:09:43.184235 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:09:43.184245 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:09:43.184255 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:09:43.184267 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:09:43.184279 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:09:43.184293 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:09:43.184305 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:09:43.184315 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:09:43.184324 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:09:43.184335 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:09:43.184345 systemd[1]: Reached target machines.target - Containers. Feb 13 19:09:43.184357 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:09:43.184368 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:09:43.184378 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:09:43.184388 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:09:43.184398 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:09:43.184407 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:09:43.184418 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:09:43.184428 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:09:43.184438 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Feb 13 19:09:43.184450 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:09:43.184460 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:09:43.184471 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:09:43.184481 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:09:43.184506 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:09:43.184517 kernel: fuse: init (API version 7.39) Feb 13 19:09:43.184527 kernel: loop: module loaded Feb 13 19:09:43.184537 kernel: ACPI: bus type drm_connector registered Feb 13 19:09:43.184549 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:09:43.184560 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:09:43.184571 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:09:43.184582 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:09:43.184594 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:09:43.184605 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:09:43.184616 systemd[1]: Stopped verity-setup.service. Feb 13 19:09:43.184645 systemd-journald[1104]: Collecting audit messages is disabled. Feb 13 19:09:43.184672 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:09:43.184684 systemd-journald[1104]: Journal started Feb 13 19:09:43.184706 systemd-journald[1104]: Runtime Journal (/run/log/journal/1eafcee237f24169b52d14418b3b6219) is 5.9M, max 47.3M, 41.4M free. Feb 13 19:09:42.980842 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:09:43.002935 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:09:43.003275 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:09:43.187922 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:09:43.188568 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:09:43.189801 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:09:43.190883 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:09:43.192081 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:09:43.193324 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:09:43.196181 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:09:43.197538 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:09:43.200463 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:09:43.200597 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:09:43.202017 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:09:43.202204 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:09:43.203642 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:09:43.203786 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:09:43.205085 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:09:43.205221 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Feb 13 19:09:43.208509 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:09:43.208634 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:09:43.209985 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:09:43.210110 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:09:43.213490 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:09:43.214954 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:09:43.216428 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:09:43.228351 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:09:43.234240 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:09:43.236304 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:09:43.237435 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:09:43.237470 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:09:43.239431 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:09:43.241653 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:09:43.243868 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:09:43.245086 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:09:43.246492 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:09:43.248472 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:09:43.249826 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:09:43.253322 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:09:43.254569 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:09:43.256396 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:09:43.256806 systemd-journald[1104]: Time spent on flushing to /var/log/journal/1eafcee237f24169b52d14418b3b6219 is 33.776ms for 860 entries. Feb 13 19:09:43.256806 systemd-journald[1104]: System Journal (/var/log/journal/1eafcee237f24169b52d14418b3b6219) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:09:43.307955 systemd-journald[1104]: Received client request to flush runtime journal. Feb 13 19:09:43.308000 kernel: loop0: detected capacity change from 0 to 113536 Feb 13 19:09:43.308017 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:09:43.262569 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:09:43.265394 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:09:43.270553 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:09:43.272060 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
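[Note: the modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse, and modprobe@loop units above are instances of one systemd template unit; the instance name after "@" is substituted for %i/%I. Upstream systemd ships the template roughly as follows (abridged sketch, not this host's exact file):]

[Unit]
Description=Load Kernel Module %i
DefaultDependencies=no
Documentation=man:modprobe(8)
ConditionCapability=CAP_SYS_MODULE

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=-/sbin/modprobe -abq %I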
Feb 13 19:09:43.273423 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:09:43.275662 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:09:43.277113 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:09:43.281562 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:09:43.282925 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:09:43.297335 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:09:43.299533 systemd-tmpfiles[1153]: ACLs are not supported, ignoring. Feb 13 19:09:43.299543 systemd-tmpfiles[1153]: ACLs are not supported, ignoring. Feb 13 19:09:43.302342 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:09:43.306233 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:09:43.310353 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:09:43.313420 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:09:43.321475 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:09:43.322044 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:09:43.327163 kernel: loop1: detected capacity change from 0 to 116808 Feb 13 19:09:43.327322 udevadm[1164]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 19:09:43.343593 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:09:43.357358 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:09:43.366277 kernel: loop2: detected capacity change from 0 to 201592 Feb 13 19:09:43.373145 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Feb 13 19:09:43.373187 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Feb 13 19:09:43.377533 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:09:43.408185 kernel: loop3: detected capacity change from 0 to 113536 Feb 13 19:09:43.415186 kernel: loop4: detected capacity change from 0 to 116808 Feb 13 19:09:43.421177 kernel: loop5: detected capacity change from 0 to 201592 Feb 13 19:09:43.430361 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 19:09:43.432560 (sd-merge)[1181]: Merged extensions into '/usr'. Feb 13 19:09:43.438897 systemd[1]: Reloading requested from client PID 1151 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:09:43.439000 systemd[1]: Reloading... Feb 13 19:09:43.484171 zram_generator::config[1204]: No configuration found. Feb 13 19:09:43.578858 ldconfig[1146]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:09:43.579621 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:09:43.615323 systemd[1]: Reloading finished in 175 ms. Feb 13 19:09:43.644180 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
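[Note: the (sd-merge) lines above are systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images onto /usr. The merged state can be inspected with the standard tooling, e.g.:]

systemd-sysext status    # list extension images and the hierarchies they are merged into
systemd-sysext refresh   # unmerge and re-merge after images are added or removed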
Feb 13 19:09:43.645763 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:09:43.655297 systemd[1]: Starting ensure-sysext.service... Feb 13 19:09:43.657392 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:09:43.667256 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:09:43.667267 systemd[1]: Reloading... Feb 13 19:09:43.684696 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:09:43.685018 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:09:43.685752 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:09:43.685961 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Feb 13 19:09:43.686009 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Feb 13 19:09:43.688591 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:09:43.688606 systemd-tmpfiles[1242]: Skipping /boot Feb 13 19:09:43.695207 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:09:43.695223 systemd-tmpfiles[1242]: Skipping /boot Feb 13 19:09:43.720560 zram_generator::config[1266]: No configuration found. Feb 13 19:09:43.802295 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:09:43.837365 systemd[1]: Reloading finished in 169 ms. Feb 13 19:09:43.853974 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:09:43.866567 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:09:43.873747 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:09:43.876065 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:09:43.878399 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:09:43.881402 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:09:43.885472 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:09:43.891515 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:09:43.896988 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:09:43.900493 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:09:43.903406 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:09:43.905528 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:09:43.906695 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:09:43.907535 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:09:43.918525 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:09:43.921560 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
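[Note: the "Duplicate line for path" messages above come from systemd-tmpfiles: two tmpfiles.d fragments declare the same path and the later one is ignored, which is harmless. tmpfiles.d entries follow a fixed column layout (type path mode user group age argument); an excerpt-style example, not this host's actual file:]

d /var/lib/systemd          0755 root root             - -
d /var/log/journal          2755 root systemd-journal  - -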
Feb 13 19:09:43.922385 systemd-udevd[1310]: Using default interface naming scheme 'v255'. Feb 13 19:09:43.923085 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:09:43.923254 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:09:43.926683 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:09:43.928434 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:09:43.929601 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:09:43.930135 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:09:43.935192 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:09:43.936386 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:09:43.937565 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:09:43.939210 systemd[1]: Finished ensure-sysext.service. Feb 13 19:09:43.947468 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:09:43.949401 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:09:43.953408 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:09:43.954952 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:09:43.955088 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:09:43.956634 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:09:43.958192 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:09:43.959745 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:09:43.961185 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:09:43.961325 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:09:43.963067 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:09:43.963202 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:09:43.981019 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:09:43.982450 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:09:43.982520 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:09:43.982543 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:09:43.991685 augenrules[1369]: No rules Feb 13 19:09:43.992402 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:09:43.993848 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:09:43.994013 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:09:43.995944 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
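[Note: the "augenrules[1369]: No rules" line above means the audit ruleset compiled to empty: augenrules concatenates /etc/audit/rules.d/*.rules into /etc/audit/audit.rules and loads the result, so an empty rules.d directory yields no rules. Standard invocations for reference:]

augenrules --check   # report whether audit.rules is stale relative to rules.d
augenrules --load    # regenerate audit.rules and load it into the kernel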
Feb 13 19:09:44.025252 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1363) Feb 13 19:09:44.052470 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:09:44.062337 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:09:44.078667 systemd-networkd[1359]: lo: Link UP Feb 13 19:09:44.078986 systemd-networkd[1359]: lo: Gained carrier Feb 13 19:09:44.082851 systemd-networkd[1359]: Enumeration completed Feb 13 19:09:44.083240 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:09:44.085902 systemd-networkd[1359]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:09:44.085979 systemd-networkd[1359]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:09:44.086817 systemd-networkd[1359]: eth0: Link UP Feb 13 19:09:44.086902 systemd-networkd[1359]: eth0: Gained carrier Feb 13 19:09:44.086954 systemd-networkd[1359]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:09:44.093343 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:09:44.094592 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:09:44.096089 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:09:44.098961 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:09:44.104348 systemd-resolved[1309]: Positive Trust Anchors: Feb 13 19:09:44.106162 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:09:44.106197 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:09:44.108409 systemd-networkd[1359]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:09:44.109031 systemd-timesyncd[1336]: Network configuration changed, trying to establish connection. Feb 13 19:09:44.109900 systemd-timesyncd[1336]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 19:09:44.109952 systemd-timesyncd[1336]: Initial clock synchronization to Thu 2025-02-13 19:09:43.882042 UTC. Feb 13 19:09:44.113061 systemd-resolved[1309]: Defaulting to hostname 'linux'. Feb 13 19:09:44.125032 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:09:44.127498 systemd[1]: Reached target network.target - Network. Feb 13 19:09:44.128438 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:09:44.145412 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:09:44.156508 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
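[Note: eth0 above is configured by Flatcar's catch-all /usr/lib/systemd/network/zz-default.network, which the log flags as matching on a "potentially unpredictable interface name". A minimal approximate equivalent of that unit (sketch, not the shipped file verbatim):]

[Match]
Name=*

[Network]
DHCP=yes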
Feb 13 19:09:44.160296 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:09:44.179295 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:09:44.189530 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:09:44.229244 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:09:44.230798 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:09:44.231979 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:09:44.233171 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:09:44.234406 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:09:44.235865 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:09:44.237102 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:09:44.238438 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:09:44.239706 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:09:44.239744 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:09:44.240690 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:09:44.242509 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:09:44.244970 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:09:44.253067 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:09:44.255343 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:09:44.256965 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:09:44.258264 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:09:44.259307 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:09:44.260382 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:09:44.260410 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:09:44.261394 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:09:44.263353 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:09:44.265388 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:09:44.267392 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:09:44.271414 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:09:44.272739 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:09:44.274713 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:09:44.277590 jq[1410]: false Feb 13 19:09:44.278468 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:09:44.281372 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Feb 13 19:09:44.284734 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:09:44.288526 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:09:44.291301 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:09:44.292398 extend-filesystems[1411]: Found loop3 Feb 13 19:09:44.292398 extend-filesystems[1411]: Found loop4 Feb 13 19:09:44.292398 extend-filesystems[1411]: Found loop5 Feb 13 19:09:44.292398 extend-filesystems[1411]: Found vda Feb 13 19:09:44.292398 extend-filesystems[1411]: Found vda1 Feb 13 19:09:44.292398 extend-filesystems[1411]: Found vda2 Feb 13 19:09:44.292398 extend-filesystems[1411]: Found vda3 Feb 13 19:09:44.292398 extend-filesystems[1411]: Found usr Feb 13 19:09:44.292398 extend-filesystems[1411]: Found vda4 Feb 13 19:09:44.292398 extend-filesystems[1411]: Found vda6 Feb 13 19:09:44.292398 extend-filesystems[1411]: Found vda7 Feb 13 19:09:44.292398 extend-filesystems[1411]: Found vda9 Feb 13 19:09:44.292398 extend-filesystems[1411]: Checking size of /dev/vda9 Feb 13 19:09:44.291705 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:09:44.300509 dbus-daemon[1409]: [system] SELinux support is enabled Feb 13 19:09:44.292681 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:09:44.315276 extend-filesystems[1411]: Resized partition /dev/vda9 Feb 13 19:09:44.297319 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:09:44.302873 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:09:44.313598 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:09:44.321554 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:09:44.325368 jq[1422]: true Feb 13 19:09:44.321714 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:09:44.321964 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:09:44.322237 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:09:44.324630 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:09:44.324776 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:09:44.328222 extend-filesystems[1432]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:09:44.337751 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1362) Feb 13 19:09:44.337809 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:09:44.337868 update_engine[1421]: I20250213 19:09:44.333988 1421 main.cc:92] Flatcar Update Engine starting Feb 13 19:09:44.344441 update_engine[1421]: I20250213 19:09:44.344386 1421 update_check_scheduler.cc:74] Next update check in 3m32s Feb 13 19:09:44.362568 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:09:44.362366 (ntainerd)[1444]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:09:44.383127 tar[1434]: linux-arm64/LICENSE Feb 13 19:09:44.368702 systemd[1]: Started update-engine.service - Update Engine. 
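[Note: the extend-filesystems run above checks and grows the root partition (vda9), after which the kernel reports the on-line ext4 resize from 553472 to 1864699 blocks. A generic two-step equivalent, assuming the cloud-utils growpart tool rather than Flatcar's exact helper:]

growpart /dev/vda 9    # grow partition 9 to the end of the disk (assumption: cloud-utils growpart)
resize2fs /dev/vda9    # on-line resize of the mounted ext4 filesystem to the new partition size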
Feb 13 19:09:44.383452 jq[1435]: true Feb 13 19:09:44.372604 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:09:44.383673 tar[1434]: linux-arm64/helm Feb 13 19:09:44.372658 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:09:44.374295 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:09:44.374316 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:09:44.386355 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:09:44.389852 extend-filesystems[1432]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:09:44.389852 extend-filesystems[1432]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:09:44.389852 extend-filesystems[1432]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:09:44.400748 extend-filesystems[1411]: Resized filesystem in /dev/vda9 Feb 13 19:09:44.392939 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:09:44.393097 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:09:44.393775 systemd-logind[1419]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:09:44.394871 systemd-logind[1419]: New seat seat0. Feb 13 19:09:44.395887 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:09:44.435737 bash[1465]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:09:44.437368 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:09:44.439485 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:09:44.446968 locksmithd[1448]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:09:44.549535 containerd[1444]: time="2025-02-13T19:09:44.549409520Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:09:44.583327 containerd[1444]: time="2025-02-13T19:09:44.582927760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:09:44.584483 containerd[1444]: time="2025-02-13T19:09:44.584445160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:09:44.584483 containerd[1444]: time="2025-02-13T19:09:44.584480240Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:09:44.584578 containerd[1444]: time="2025-02-13T19:09:44.584498320Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:09:44.584679 containerd[1444]: time="2025-02-13T19:09:44.584660280Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Feb 13 19:09:44.584722 containerd[1444]: time="2025-02-13T19:09:44.584681680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:09:44.584793 containerd[1444]: time="2025-02-13T19:09:44.584733520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:09:44.584793 containerd[1444]: time="2025-02-13T19:09:44.584748440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:09:44.584932 containerd[1444]: time="2025-02-13T19:09:44.584911560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:09:44.584960 containerd[1444]: time="2025-02-13T19:09:44.584931520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:09:44.584960 containerd[1444]: time="2025-02-13T19:09:44.584945440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:09:44.584960 containerd[1444]: time="2025-02-13T19:09:44.584954840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:09:44.585064 containerd[1444]: time="2025-02-13T19:09:44.585028440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:09:44.585262 containerd[1444]: time="2025-02-13T19:09:44.585241920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:09:44.585376 containerd[1444]: time="2025-02-13T19:09:44.585356920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:09:44.585376 containerd[1444]: time="2025-02-13T19:09:44.585375120Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:09:44.585486 containerd[1444]: time="2025-02-13T19:09:44.585447840Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:09:44.585518 containerd[1444]: time="2025-02-13T19:09:44.585492240Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:09:44.589199 containerd[1444]: time="2025-02-13T19:09:44.589170720Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:09:44.589263 containerd[1444]: time="2025-02-13T19:09:44.589233000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:09:44.589263 containerd[1444]: time="2025-02-13T19:09:44.589249280Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:09:44.589418 containerd[1444]: time="2025-02-13T19:09:44.589263760Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Feb 13 19:09:44.589418 containerd[1444]: time="2025-02-13T19:09:44.589278200Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:09:44.589467 containerd[1444]: time="2025-02-13T19:09:44.589436080Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:09:44.589721 containerd[1444]: time="2025-02-13T19:09:44.589700960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:09:44.589830 containerd[1444]: time="2025-02-13T19:09:44.589811640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:09:44.589870 containerd[1444]: time="2025-02-13T19:09:44.589831800Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:09:44.589870 containerd[1444]: time="2025-02-13T19:09:44.589847480Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:09:44.589870 containerd[1444]: time="2025-02-13T19:09:44.589861400Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:09:44.589921 containerd[1444]: time="2025-02-13T19:09:44.589878640Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:09:44.589921 containerd[1444]: time="2025-02-13T19:09:44.589891760Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:09:44.589921 containerd[1444]: time="2025-02-13T19:09:44.589905880Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:09:44.589987 containerd[1444]: time="2025-02-13T19:09:44.589920200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:09:44.589987 containerd[1444]: time="2025-02-13T19:09:44.589932840Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:09:44.589987 containerd[1444]: time="2025-02-13T19:09:44.589945280Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:09:44.589987 containerd[1444]: time="2025-02-13T19:09:44.589957360Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:09:44.589987 containerd[1444]: time="2025-02-13T19:09:44.589979160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:09:44.590070 containerd[1444]: time="2025-02-13T19:09:44.589995120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:09:44.590070 containerd[1444]: time="2025-02-13T19:09:44.590008480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:09:44.590070 containerd[1444]: time="2025-02-13T19:09:44.590021640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:09:44.590070 containerd[1444]: time="2025-02-13T19:09:44.590034080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Feb 13 19:09:44.590070 containerd[1444]: time="2025-02-13T19:09:44.590047280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:09:44.590070 containerd[1444]: time="2025-02-13T19:09:44.590058920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:09:44.590206 containerd[1444]: time="2025-02-13T19:09:44.590072880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:09:44.590206 containerd[1444]: time="2025-02-13T19:09:44.590085360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:09:44.590206 containerd[1444]: time="2025-02-13T19:09:44.590099000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:09:44.590206 containerd[1444]: time="2025-02-13T19:09:44.590110760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:09:44.590206 containerd[1444]: time="2025-02-13T19:09:44.590130600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:09:44.590297 containerd[1444]: time="2025-02-13T19:09:44.590143160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:09:44.590297 containerd[1444]: time="2025-02-13T19:09:44.590235440Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:09:44.590297 containerd[1444]: time="2025-02-13T19:09:44.590263520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:09:44.590297 containerd[1444]: time="2025-02-13T19:09:44.590277760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:09:44.590297 containerd[1444]: time="2025-02-13T19:09:44.590295960Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:09:44.591195 containerd[1444]: time="2025-02-13T19:09:44.591013680Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:09:44.591195 containerd[1444]: time="2025-02-13T19:09:44.591121320Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:09:44.591195 containerd[1444]: time="2025-02-13T19:09:44.591137120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:09:44.591195 containerd[1444]: time="2025-02-13T19:09:44.591159880Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:09:44.591195 containerd[1444]: time="2025-02-13T19:09:44.591171280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:09:44.591195 containerd[1444]: time="2025-02-13T19:09:44.591185120Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:09:44.591195 containerd[1444]: time="2025-02-13T19:09:44.591196120Z" level=info msg="NRI interface is disabled by configuration." 
Feb 13 19:09:44.591195 containerd[1444]: time="2025-02-13T19:09:44.591208240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:09:44.591611 containerd[1444]: time="2025-02-13T19:09:44.591553200Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:09:44.591611 containerd[1444]: time="2025-02-13T19:09:44.591614160Z" level=info msg="Connect containerd service" Feb 13 19:09:44.591758 containerd[1444]: time="2025-02-13T19:09:44.591645640Z" level=info msg="using legacy CRI server" Feb 13 19:09:44.591758 containerd[1444]: time="2025-02-13T19:09:44.591652840Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:09:44.591913 containerd[1444]: time="2025-02-13T19:09:44.591892280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:09:44.592598 containerd[1444]: time="2025-02-13T19:09:44.592572000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:09:44.592950 containerd[1444]: time="2025-02-13T19:09:44.592910280Z" level=info msg="Start subscribing containerd event" Feb 13 19:09:44.593215 containerd[1444]: time="2025-02-13T19:09:44.593080920Z" level=info msg="Start recovering state" Feb 13 19:09:44.594845 containerd[1444]: time="2025-02-13T19:09:44.594480840Z" level=info msg="Start event monitor" Feb 13 19:09:44.594845 containerd[1444]: time="2025-02-13T19:09:44.594634560Z" level=info msg="Start snapshots syncer" Feb 13 19:09:44.594845 containerd[1444]: time="2025-02-13T19:09:44.594647800Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:09:44.594845 containerd[1444]: time="2025-02-13T19:09:44.594659080Z" level=info msg="Start streaming server" Feb 13 19:09:44.595182 containerd[1444]: time="2025-02-13T19:09:44.595159120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:09:44.595633 containerd[1444]: time="2025-02-13T19:09:44.595610280Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:09:44.595871 containerd[1444]: time="2025-02-13T19:09:44.595802680Z" level=info msg="containerd successfully booted in 0.047377s" Feb 13 19:09:44.595879 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:09:44.753182 tar[1434]: linux-arm64/README.md Feb 13 19:09:44.763364 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:09:45.007965 sshd_keygen[1431]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:09:45.026304 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:09:45.039469 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:09:45.044634 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:09:45.044829 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:09:45.047688 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:09:45.061682 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:09:45.076547 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:09:45.078761 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 19:09:45.080116 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:09:45.118243 systemd-networkd[1359]: eth0: Gained IPv6LL Feb 13 19:09:45.121452 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:09:45.123214 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:09:45.136495 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:09:45.138968 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:09:45.141222 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:09:45.154775 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:09:45.154941 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:09:45.158603 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:09:45.161251 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:09:45.694484 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:09:45.696018 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:09:45.698515 (kubelet)[1523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:09:45.701227 systemd[1]: Startup finished in 585ms (kernel) + 4.914s (initrd) + 3.114s (userspace) = 8.615s. Feb 13 19:09:46.103643 kubelet[1523]: E0213 19:09:46.103408 1523 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:09:46.106282 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:09:46.106420 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:09:50.059156 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:09:50.060556 systemd[1]: Started sshd@0-10.0.0.80:22-10.0.0.1:39138.service - OpenSSH per-connection server daemon (10.0.0.1:39138). Feb 13 19:09:50.149787 sshd[1537]: Accepted publickey for core from 10.0.0.1 port 39138 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:09:50.152580 sshd-session[1537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:09:50.181093 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:09:50.195458 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:09:50.201632 systemd-logind[1419]: New session 1 of user core. Feb 13 19:09:50.212243 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:09:50.214783 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:09:50.222848 (systemd)[1541]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:09:50.308178 systemd[1541]: Queued start job for default target default.target. Feb 13 19:09:50.318107 systemd[1541]: Created slice app.slice - User Application Slice. Feb 13 19:09:50.318170 systemd[1541]: Reached target paths.target - Paths. Feb 13 19:09:50.318184 systemd[1541]: Reached target timers.target - Timers. Feb 13 19:09:50.319463 systemd[1541]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:09:50.332609 systemd[1541]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:09:50.332719 systemd[1541]: Reached target sockets.target - Sockets. Feb 13 19:09:50.332736 systemd[1541]: Reached target basic.target - Basic System. Feb 13 19:09:50.332772 systemd[1541]: Reached target default.target - Main User Target. Feb 13 19:09:50.332799 systemd[1541]: Startup finished in 103ms. Feb 13 19:09:50.333286 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:09:50.337060 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:09:50.401396 systemd[1]: Started sshd@1-10.0.0.80:22-10.0.0.1:39144.service - OpenSSH per-connection server daemon (10.0.0.1:39144). Feb 13 19:09:50.465801 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 39144 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:09:50.468673 sshd-session[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:09:50.473979 systemd-logind[1419]: New session 2 of user core. 
Feb 13 19:09:50.481360 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:09:50.532657 sshd[1554]: Connection closed by 10.0.0.1 port 39144 Feb 13 19:09:50.533092 sshd-session[1552]: pam_unix(sshd:session): session closed for user core Feb 13 19:09:50.544465 systemd[1]: sshd@1-10.0.0.80:22-10.0.0.1:39144.service: Deactivated successfully. Feb 13 19:09:50.545743 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:09:50.546969 systemd-logind[1419]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:09:50.548022 systemd[1]: Started sshd@2-10.0.0.80:22-10.0.0.1:39156.service - OpenSSH per-connection server daemon (10.0.0.1:39156). Feb 13 19:09:50.548784 systemd-logind[1419]: Removed session 2. Feb 13 19:09:50.589580 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 39156 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:09:50.590810 sshd-session[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:09:50.595753 systemd-logind[1419]: New session 3 of user core. Feb 13 19:09:50.601333 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:09:50.648773 sshd[1561]: Connection closed by 10.0.0.1 port 39156 Feb 13 19:09:50.649100 sshd-session[1559]: pam_unix(sshd:session): session closed for user core Feb 13 19:09:50.668867 systemd[1]: sshd@2-10.0.0.80:22-10.0.0.1:39156.service: Deactivated successfully. Feb 13 19:09:50.671583 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:09:50.673075 systemd-logind[1419]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:09:50.674386 systemd[1]: Started sshd@3-10.0.0.80:22-10.0.0.1:39172.service - OpenSSH per-connection server daemon (10.0.0.1:39172). Feb 13 19:09:50.675502 systemd-logind[1419]: Removed session 3. Feb 13 19:09:50.721037 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 39172 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:09:50.722292 sshd-session[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:09:50.726411 systemd-logind[1419]: New session 4 of user core. Feb 13 19:09:50.745337 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:09:50.801776 sshd[1568]: Connection closed by 10.0.0.1 port 39172 Feb 13 19:09:50.802091 sshd-session[1566]: pam_unix(sshd:session): session closed for user core Feb 13 19:09:50.818545 systemd[1]: sshd@3-10.0.0.80:22-10.0.0.1:39172.service: Deactivated successfully. Feb 13 19:09:50.820628 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:09:50.821978 systemd-logind[1419]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:09:50.825537 systemd[1]: Started sshd@4-10.0.0.80:22-10.0.0.1:39188.service - OpenSSH per-connection server daemon (10.0.0.1:39188). Feb 13 19:09:50.826482 systemd-logind[1419]: Removed session 4. Feb 13 19:09:50.873682 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 39188 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:09:50.874954 sshd-session[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:09:50.880207 systemd-logind[1419]: New session 5 of user core. Feb 13 19:09:50.899484 systemd[1]: Started session-5.scope - Session 5 of User core. 
Feb 13 19:09:50.967208 sudo[1576]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:09:50.967502 sudo[1576]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:09:50.983028 sudo[1576]: pam_unix(sudo:session): session closed for user root Feb 13 19:09:50.984834 sshd[1575]: Connection closed by 10.0.0.1 port 39188 Feb 13 19:09:50.985229 sshd-session[1573]: pam_unix(sshd:session): session closed for user core Feb 13 19:09:50.994531 systemd[1]: sshd@4-10.0.0.80:22-10.0.0.1:39188.service: Deactivated successfully. Feb 13 19:09:50.995843 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:09:50.998347 systemd-logind[1419]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:09:50.999657 systemd[1]: Started sshd@5-10.0.0.80:22-10.0.0.1:39198.service - OpenSSH per-connection server daemon (10.0.0.1:39198). Feb 13 19:09:51.000439 systemd-logind[1419]: Removed session 5. Feb 13 19:09:51.044547 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 39198 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:09:51.045733 sshd-session[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:09:51.049445 systemd-logind[1419]: New session 6 of user core. Feb 13 19:09:51.060286 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:09:51.111652 sudo[1585]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:09:51.112268 sudo[1585]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:09:51.115375 sudo[1585]: pam_unix(sudo:session): session closed for user root Feb 13 19:09:51.119609 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:09:51.119859 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:09:51.143477 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:09:51.165273 augenrules[1607]: No rules Feb 13 19:09:51.166362 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:09:51.167230 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:09:51.168155 sudo[1584]: pam_unix(sudo:session): session closed for user root Feb 13 19:09:51.169408 sshd[1583]: Connection closed by 10.0.0.1 port 39198 Feb 13 19:09:51.169728 sshd-session[1581]: pam_unix(sshd:session): session closed for user core Feb 13 19:09:51.179278 systemd[1]: sshd@5-10.0.0.80:22-10.0.0.1:39198.service: Deactivated successfully. Feb 13 19:09:51.181596 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:09:51.182808 systemd-logind[1419]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:09:51.196489 systemd[1]: Started sshd@6-10.0.0.80:22-10.0.0.1:39200.service - OpenSSH per-connection server daemon (10.0.0.1:39200). Feb 13 19:09:51.197274 systemd-logind[1419]: Removed session 6. Feb 13 19:09:51.234459 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 39200 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:09:51.235617 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:09:51.239864 systemd-logind[1419]: New session 7 of user core. Feb 13 19:09:51.259341 systemd[1]: Started session-7.scope - Session 7 of User core. 
Feb 13 19:09:51.311376 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:09:51.311700 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:09:51.636368 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:09:51.636506 (dockerd)[1640]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:09:51.889368 dockerd[1640]: time="2025-02-13T19:09:51.889247776Z" level=info msg="Starting up" Feb 13 19:09:52.052756 dockerd[1640]: time="2025-02-13T19:09:52.052718078Z" level=info msg="Loading containers: start." Feb 13 19:09:52.194205 kernel: Initializing XFRM netlink socket Feb 13 19:09:52.272259 systemd-networkd[1359]: docker0: Link UP Feb 13 19:09:52.302601 dockerd[1640]: time="2025-02-13T19:09:52.302499691Z" level=info msg="Loading containers: done." Feb 13 19:09:52.316597 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck308892281-merged.mount: Deactivated successfully. Feb 13 19:09:52.317667 dockerd[1640]: time="2025-02-13T19:09:52.317614031Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:09:52.317754 dockerd[1640]: time="2025-02-13T19:09:52.317702471Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 19:09:52.318035 dockerd[1640]: time="2025-02-13T19:09:52.317798031Z" level=info msg="Daemon has completed initialization" Feb 13 19:09:52.349424 dockerd[1640]: time="2025-02-13T19:09:52.349366641Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:09:52.349615 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:09:52.778184 containerd[1444]: time="2025-02-13T19:09:52.778133171Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 19:09:53.384379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3962540118.mount: Deactivated successfully. 
Feb 13 19:09:54.764827 containerd[1444]: time="2025-02-13T19:09:54.764778477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:54.765912 containerd[1444]: time="2025-02-13T19:09:54.765876159Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=26218238" Feb 13 19:09:54.766561 containerd[1444]: time="2025-02-13T19:09:54.766532175Z" level=info msg="ImageCreate event name:\"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:54.769384 containerd[1444]: time="2025-02-13T19:09:54.769327163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:54.770443 containerd[1444]: time="2025-02-13T19:09:54.770408745Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"26215036\" in 1.992223464s" Feb 13 19:09:54.770497 containerd[1444]: time="2025-02-13T19:09:54.770443564Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\"" Feb 13 19:09:54.771185 containerd[1444]: time="2025-02-13T19:09:54.771086731Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 19:09:56.317381 containerd[1444]: time="2025-02-13T19:09:56.317331453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:56.319157 containerd[1444]: time="2025-02-13T19:09:56.319065811Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=22528147" Feb 13 19:09:56.320180 containerd[1444]: time="2025-02-13T19:09:56.320061284Z" level=info msg="ImageCreate event name:\"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:56.322872 containerd[1444]: time="2025-02-13T19:09:56.322845079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:56.323916 containerd[1444]: time="2025-02-13T19:09:56.323876514Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"23968941\" in 1.552754983s" Feb 13 19:09:56.323916 containerd[1444]: time="2025-02-13T19:09:56.323911722Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\""
Feb 13 19:09:56.324386 containerd[1444]: time="2025-02-13T19:09:56.324362071Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 19:09:56.355882 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:09:56.370308 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:09:56.481570 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:09:56.484990 (kubelet)[1902]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:09:56.529366 kubelet[1902]: E0213 19:09:56.529309 1902 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:09:56.532052 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:09:56.532216 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:09:57.838794 containerd[1444]: time="2025-02-13T19:09:57.838727100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:57.839335 containerd[1444]: time="2025-02-13T19:09:57.839293792Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=17480802" Feb 13 19:09:57.840381 containerd[1444]: time="2025-02-13T19:09:57.840351968Z" level=info msg="ImageCreate event name:\"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:57.843373 containerd[1444]: time="2025-02-13T19:09:57.843334208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:57.844415 containerd[1444]: time="2025-02-13T19:09:57.844380214Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"18921614\" in 1.519989008s" Feb 13 19:09:57.844415 containerd[1444]: time="2025-02-13T19:09:57.844412349Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\"" Feb 13 19:09:57.844872 containerd[1444]: time="2025-02-13T19:09:57.844843144Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 19:09:58.776135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount171745021.mount: Deactivated successfully.
Feb 13 19:09:59.118934 containerd[1444]: time="2025-02-13T19:09:59.118882244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:59.119893 containerd[1444]: time="2025-02-13T19:09:59.119841852Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=27363384" Feb 13 19:09:59.120639 containerd[1444]: time="2025-02-13T19:09:59.120585095Z" level=info msg="ImageCreate event name:\"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:59.122541 containerd[1444]: time="2025-02-13T19:09:59.122463371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:09:59.123558 containerd[1444]: time="2025-02-13T19:09:59.123425648Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"27362401\" in 1.278541528s" Feb 13 19:09:59.123558 containerd[1444]: time="2025-02-13T19:09:59.123458423Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\"" Feb 13 19:09:59.123952 containerd[1444]: time="2025-02-13T19:09:59.123927475Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 19:09:59.716587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1914968472.mount: Deactivated successfully. 
Feb 13 19:10:00.573047 containerd[1444]: time="2025-02-13T19:10:00.572986807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:10:00.573694 containerd[1444]: time="2025-02-13T19:10:00.573643833Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Feb 13 19:10:00.574384 containerd[1444]: time="2025-02-13T19:10:00.574351145Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:10:00.578237 containerd[1444]: time="2025-02-13T19:10:00.578182290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:10:00.579356 containerd[1444]: time="2025-02-13T19:10:00.579321218Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.45535945s" Feb 13 19:10:00.579356 containerd[1444]: time="2025-02-13T19:10:00.579351939Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Feb 13 19:10:00.579812 containerd[1444]: time="2025-02-13T19:10:00.579766899Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:10:01.009895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3878199760.mount: Deactivated successfully. 
Feb 13 19:10:01.013402 containerd[1444]: time="2025-02-13T19:10:01.013353317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:10:01.014941 containerd[1444]: time="2025-02-13T19:10:01.014897429Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Feb 13 19:10:01.018000 containerd[1444]: time="2025-02-13T19:10:01.017467121Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:10:01.019650 containerd[1444]: time="2025-02-13T19:10:01.019599290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:10:01.020459 containerd[1444]: time="2025-02-13T19:10:01.020435390Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 440.636689ms" Feb 13 19:10:01.020510 containerd[1444]: time="2025-02-13T19:10:01.020464731Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 19:10:01.020946 containerd[1444]: time="2025-02-13T19:10:01.020904488Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 19:10:01.904117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount346613058.mount: Deactivated successfully. Feb 13 19:10:04.158514 containerd[1444]: time="2025-02-13T19:10:04.158313263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:10:04.159421 containerd[1444]: time="2025-02-13T19:10:04.159384764Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812431" Feb 13 19:10:04.160103 containerd[1444]: time="2025-02-13T19:10:04.160068820Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:10:04.163654 containerd[1444]: time="2025-02-13T19:10:04.163623038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:10:04.165055 containerd[1444]: time="2025-02-13T19:10:04.165029025Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.144094908s" Feb 13 19:10:04.165266 containerd[1444]: time="2025-02-13T19:10:04.165163042Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Feb 13 19:10:06.605953 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Feb 13 19:10:06.616360 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:10:06.755093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:10:06.759028 (kubelet)[2064]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:10:06.794698 kubelet[2064]: E0213 19:10:06.794645 2064 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:10:06.797230 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:10:06.797388 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:10:08.917620 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:10:08.929412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:10:08.953260 systemd[1]: Reloading requested from client PID 2080 ('systemctl') (unit session-7.scope)... Feb 13 19:10:08.953277 systemd[1]: Reloading... Feb 13 19:10:09.020221 zram_generator::config[2120]: No configuration found. Feb 13 19:10:09.205616 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:10:09.257776 systemd[1]: Reloading finished in 304 ms. Feb 13 19:10:09.302482 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:10:09.305210 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:10:09.305400 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:10:09.306915 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:10:09.409098 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:10:09.413988 (kubelet)[2166]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:10:09.447063 kubelet[2166]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:10:09.447063 kubelet[2166]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:10:09.447063 kubelet[2166]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:10:09.447443 kubelet[2166]: I0213 19:10:09.447113 2166 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:10:10.114778 kubelet[2166]: I0213 19:10:10.114712 2166 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:10:10.116308 kubelet[2166]: I0213 19:10:10.114886 2166 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:10:10.116308 kubelet[2166]: I0213 19:10:10.115215 2166 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:10:10.156108 kubelet[2166]: E0213 19:10:10.156061 2166 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:10:10.159710 kubelet[2166]: I0213 19:10:10.159676 2166 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:10:10.167126 kubelet[2166]: E0213 19:10:10.167073 2166 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:10:10.167126 kubelet[2166]: I0213 19:10:10.167124 2166 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:10:10.170657 kubelet[2166]: I0213 19:10:10.170637 2166 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 19:10:10.170893 kubelet[2166]: I0213 19:10:10.170859 2166 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:10:10.171051 kubelet[2166]: I0213 19:10:10.170884 2166 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:10:10.171424 kubelet[2166]: I0213 19:10:10.171401 2166 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:10:10.171424 kubelet[2166]: I0213 19:10:10.171414 2166 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:10:10.171906 kubelet[2166]: I0213 19:10:10.171883 2166 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:10:10.176619 kubelet[2166]: I0213 19:10:10.176592 2166 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:10:10.176619 kubelet[2166]: I0213 19:10:10.176620 2166 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:10:10.176692 kubelet[2166]: I0213 19:10:10.176643 2166 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:10:10.176692 kubelet[2166]: I0213 19:10:10.176652 2166 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:10:10.182235 kubelet[2166]: W0213 19:10:10.181646 2166 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 19:10:10.182235 kubelet[2166]: E0213 19:10:10.181719 2166 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:10:10.182235 kubelet[2166]: I0213 19:10:10.181810 2166 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:10:10.182235 kubelet[2166]: W0213 19:10:10.182093 2166 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 19:10:10.182235 kubelet[2166]: E0213 19:10:10.182137 2166 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:10:10.184389 kubelet[2166]: I0213 19:10:10.184356 2166 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:10:10.185075 kubelet[2166]: W0213 19:10:10.184813 2166 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:10:10.185776 kubelet[2166]: I0213 19:10:10.185746 2166 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:10:10.186225 kubelet[2166]: I0213 19:10:10.185782 2166 server.go:1287] "Started kubelet" Feb 13 19:10:10.186225 kubelet[2166]: I0213 19:10:10.185872 2166 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:10:10.186225 kubelet[2166]: I0213 19:10:10.186075 2166 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:10:10.186449 kubelet[2166]: I0213 19:10:10.186417 2166 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:10:10.187880 kubelet[2166]: I0213 19:10:10.187855 2166 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:10:10.189609 kubelet[2166]: I0213 19:10:10.189589 2166 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:10:10.190378 kubelet[2166]: I0213 19:10:10.190352 2166 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:10:10.191526 kubelet[2166]: E0213 19:10:10.191509 2166 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:10:10.191658 kubelet[2166]: I0213 19:10:10.191646 2166 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:10:10.192029 kubelet[2166]: E0213 19:10:10.191987 2166 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="200ms" Feb 13 19:10:10.192029 kubelet[2166]: I0213 19:10:10.192006 2166 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:10:10.192220 kubelet[2166]: I0213 19:10:10.192116 2166 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:10:10.192356 kubelet[2166]: W0213 19:10:10.192325 2166 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused
Feb 13 19:10:10.192398 kubelet[2166]: E0213 19:10:10.192363 2166 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:10:10.192625 kubelet[2166]: E0213 19:10:10.192316 2166 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.80:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.80:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823da36d837bd3d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:10:10.185764157 +0000 UTC m=+0.768886335,LastTimestamp:2025-02-13 19:10:10.185764157 +0000 UTC m=+0.768886335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:10:10.192983 kubelet[2166]: I0213 19:10:10.192961 2166 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:10:10.193173 kubelet[2166]: E0213 19:10:10.193045 2166 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:10:10.194334 kubelet[2166]: I0213 19:10:10.194186 2166 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:10:10.194334 kubelet[2166]: I0213 19:10:10.194202 2166 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:10:10.204409 kubelet[2166]: I0213 19:10:10.204249 2166 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:10:10.205711 kubelet[2166]: I0213 19:10:10.205604 2166 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:10:10.205711 kubelet[2166]: I0213 19:10:10.205628 2166 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:10:10.205711 kubelet[2166]: I0213 19:10:10.205650 2166 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Feb 13 19:10:10.205711 kubelet[2166]: I0213 19:10:10.205657 2166 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:10:10.205711 kubelet[2166]: E0213 19:10:10.205705 2166 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:10:10.209231 kubelet[2166]: W0213 19:10:10.209194 2166 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 19:10:10.209231 kubelet[2166]: E0213 19:10:10.209235 2166 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:10:10.209780 kubelet[2166]: I0213 19:10:10.209764 2166 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:10:10.209780 kubelet[2166]: I0213 19:10:10.209779 2166 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:10:10.209870 kubelet[2166]: I0213 19:10:10.209799 2166 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:10:10.279124 kubelet[2166]: I0213 19:10:10.279085 2166 policy_none.go:49] "None policy: Start" Feb 13 19:10:10.279124 kubelet[2166]: I0213 19:10:10.279116 2166 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:10:10.279124 kubelet[2166]: I0213 19:10:10.279129 2166 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:10:10.284277 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:10:10.292440 kubelet[2166]: E0213 19:10:10.292399 2166 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:10:10.296564 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:10:10.299432 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:10:10.306729 kubelet[2166]: E0213 19:10:10.306693 2166 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:10:10.309972 kubelet[2166]: I0213 19:10:10.309943 2166 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:10:10.310324 kubelet[2166]: I0213 19:10:10.310183 2166 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:10:10.310324 kubelet[2166]: I0213 19:10:10.310203 2166 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:10:10.310863 kubelet[2166]: I0213 19:10:10.310718 2166 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:10:10.311933 kubelet[2166]: E0213 19:10:10.311904 2166 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Feb 13 19:10:10.312060 kubelet[2166]: E0213 19:10:10.311952 2166 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:10:10.393137 kubelet[2166]: E0213 19:10:10.392430 2166 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="400ms" Feb 13 19:10:10.411821 kubelet[2166]: I0213 19:10:10.411795 2166 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:10:10.412343 kubelet[2166]: E0213 19:10:10.412248 2166 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Feb 13 19:10:10.515859 systemd[1]: Created slice kubepods-burstable-pod898e30b0d6d564836c93d34bfe274e28.slice - libcontainer container kubepods-burstable-pod898e30b0d6d564836c93d34bfe274e28.slice. Feb 13 19:10:10.546722 kubelet[2166]: E0213 19:10:10.546543 2166 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:10:10.549623 systemd[1]: Created slice kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice - libcontainer container kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice. Feb 13 19:10:10.551302 kubelet[2166]: E0213 19:10:10.551183 2166 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:10:10.559576 systemd[1]: Created slice kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice - libcontainer container kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice.
Feb 13 19:10:10.561086 kubelet[2166]: E0213 19:10:10.561064 2166 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:10:10.593374 kubelet[2166]: I0213 19:10:10.593336 2166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:10:10.593374 kubelet[2166]: I0213 19:10:10.593376 2166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/898e30b0d6d564836c93d34bfe274e28-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"898e30b0d6d564836c93d34bfe274e28\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:10:10.593534 kubelet[2166]: I0213 19:10:10.593428 2166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:10:10.593534 kubelet[2166]: I0213 19:10:10.593486 2166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:10:10.593534 kubelet[2166]: I0213 19:10:10.593522 2166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:10:10.593600 kubelet[2166]: I0213 19:10:10.593545 2166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:10:10.593600 kubelet[2166]: I0213 19:10:10.593561 2166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/898e30b0d6d564836c93d34bfe274e28-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"898e30b0d6d564836c93d34bfe274e28\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:10:10.593600 kubelet[2166]: I0213 19:10:10.593595 2166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/898e30b0d6d564836c93d34bfe274e28-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"898e30b0d6d564836c93d34bfe274e28\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:10:10.593662 kubelet[2166]: I0213 19:10:10.593611 2166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:10:10.614353 kubelet[2166]: I0213 19:10:10.614319 2166 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:10:10.614717 kubelet[2166]: E0213 19:10:10.614684 2166 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Feb 13 19:10:10.793015 kubelet[2166]: E0213 19:10:10.792901 2166 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="800ms" Feb 13 19:10:10.851422 kubelet[2166]: E0213 19:10:10.851378 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:10.851853 kubelet[2166]: E0213 19:10:10.851669 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:10.852119 containerd[1444]: time="2025-02-13T19:10:10.852052113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:898e30b0d6d564836c93d34bfe274e28,Namespace:kube-system,Attempt:0,}" Feb 13 19:10:10.852119 containerd[1444]: time="2025-02-13T19:10:10.852105978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,}" Feb 13 19:10:10.863930 kubelet[2166]: E0213 19:10:10.863845 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:10.864423 containerd[1444]: time="2025-02-13T19:10:10.864334886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,}" Feb 13 19:10:11.016391 kubelet[2166]: I0213 19:10:11.016348 2166 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:10:11.016763 kubelet[2166]: E0213 19:10:11.016725 2166 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Feb 13 19:10:11.111791 kubelet[2166]: W0213 19:10:11.111692 2166 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 19:10:11.111791 kubelet[2166]: E0213 19:10:11.111760 2166 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:10:11.193897 kubelet[2166]: W0213 
19:10:11.193811 2166 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 19:10:11.193995 kubelet[2166]: E0213 19:10:11.193937 2166 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:10:11.264289 kubelet[2166]: W0213 19:10:11.264173 2166 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 19:10:11.264289 kubelet[2166]: E0213 19:10:11.264248 2166 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:10:11.385514 kubelet[2166]: W0213 19:10:11.385374 2166 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Feb 13 19:10:11.385514 kubelet[2166]: E0213 19:10:11.385446 2166 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:10:11.593921 kubelet[2166]: E0213 19:10:11.593863 2166 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="1.6s" Feb 13 19:10:11.737467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3260195639.mount: Deactivated successfully. 
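Note the retry interval the lease controller reports: 400ms at 19:10:10.393, 800ms at 19:10:10.792, then 1.6s here, doubling on each failure. A small sketch of that doubling pattern; the cap value is an assumption, since the log never runs long enough to show one:

// lease_backoff.go - illustrative only: the doubling retry interval seen in
// the "Failed to ensure lease exists, will retry" entries (400ms -> 800ms -> 1.6s).
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 400 * time.Millisecond
	maxInterval := 7 * time.Second // assumed cap, not shown in the log
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d: next retry in %v\n", attempt, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}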
Feb 13 19:10:11.744122 containerd[1444]: time="2025-02-13T19:10:11.744024568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:10:11.746883 containerd[1444]: time="2025-02-13T19:10:11.746824730Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:10:11.747790 containerd[1444]: time="2025-02-13T19:10:11.747740719Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:10:11.749211 containerd[1444]: time="2025-02-13T19:10:11.749140360Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:10:11.750071 containerd[1444]: time="2025-02-13T19:10:11.749891456Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 19:10:11.751380 containerd[1444]: time="2025-02-13T19:10:11.751325187Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:10:11.753172 containerd[1444]: time="2025-02-13T19:10:11.752983399Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:10:11.753863 containerd[1444]: time="2025-02-13T19:10:11.753816062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:10:11.755654 containerd[1444]: time="2025-02-13T19:10:11.755617707Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 891.205698ms" Feb 13 19:10:11.757754 containerd[1444]: time="2025-02-13T19:10:11.757713292Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 905.528793ms" Feb 13 19:10:11.759687 containerd[1444]: time="2025-02-13T19:10:11.759649419Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 907.515868ms" Feb 13 19:10:11.818392 kubelet[2166]: I0213 19:10:11.818353 2166 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:10:11.818777 kubelet[2166]: E0213 19:10:11.818728 2166 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 
10.0.0.80:6443: connect: connection refused" node="localhost" Feb 13 19:10:11.928693 containerd[1444]: time="2025-02-13T19:10:11.928260896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:10:11.928693 containerd[1444]: time="2025-02-13T19:10:11.928339267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:10:11.928693 containerd[1444]: time="2025-02-13T19:10:11.928356851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:10:11.928693 containerd[1444]: time="2025-02-13T19:10:11.928430026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:10:11.929546 containerd[1444]: time="2025-02-13T19:10:11.929360363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:10:11.929546 containerd[1444]: time="2025-02-13T19:10:11.929413795Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:10:11.929546 containerd[1444]: time="2025-02-13T19:10:11.929439812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:10:11.929546 containerd[1444]: time="2025-02-13T19:10:11.929506913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:10:11.933425 containerd[1444]: time="2025-02-13T19:10:11.933014568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:10:11.933425 containerd[1444]: time="2025-02-13T19:10:11.933076993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:10:11.933425 containerd[1444]: time="2025-02-13T19:10:11.933092939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:10:11.933425 containerd[1444]: time="2025-02-13T19:10:11.933172389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:10:11.947368 systemd[1]: Started cri-containerd-54f294d0603f866119d4cf9fa4c4c0ce1adeb05bdbf7e668b7dfdb8ba15a134f.scope - libcontainer container 54f294d0603f866119d4cf9fa4c4c0ce1adeb05bdbf7e668b7dfdb8ba15a134f. Feb 13 19:10:11.951674 systemd[1]: Started cri-containerd-0c3b07c1af91a5208b20fe9819d3eca2f24974a732a3917abba4f7fcd78a8185.scope - libcontainer container 0c3b07c1af91a5208b20fe9819d3eca2f24974a732a3917abba4f7fcd78a8185. Feb 13 19:10:11.952601 systemd[1]: Started cri-containerd-6460a3c6a86fb0924b39d2325cb92cd3a5fcfb8f9fc4381a315a7426d84cc787.scope - libcontainer container 6460a3c6a86fb0924b39d2325cb92cd3a5fcfb8f9fc4381a315a7426d84cc787. 
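The RunPodSandbox requests earlier and the sandbox ids returned in the next entries are CRI calls from the kubelet to containerd over its unix socket. A rough sketch of the same call using the runtime.v1 gRPC API; the socket path is the conventional containerd default (an assumption here), and a real kubelet sends a much fuller PodSandboxConfig:

// run_sandbox.go - sketch of the CRI RunPodSandbox call behind the
// "RunPodSandbox ... returns sandbox id" entries, with assumptions noted.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default containerd socket path.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Metadata mirrors the kube-apiserver-localhost sandbox from the log;
	// everything else a kubelet would set is omitted in this sketch.
	resp, err := client.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-apiserver-localhost",
				Uid:       "898e30b0d6d564836c93d34bfe274e28",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId)
}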
Feb 13 19:10:11.981923 containerd[1444]: time="2025-02-13T19:10:11.981757225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:898e30b0d6d564836c93d34bfe274e28,Namespace:kube-system,Attempt:0,} returns sandbox id \"54f294d0603f866119d4cf9fa4c4c0ce1adeb05bdbf7e668b7dfdb8ba15a134f\"" Feb 13 19:10:11.984089 kubelet[2166]: E0213 19:10:11.983900 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:11.987684 containerd[1444]: time="2025-02-13T19:10:11.987499462Z" level=info msg="CreateContainer within sandbox \"54f294d0603f866119d4cf9fa4c4c0ce1adeb05bdbf7e668b7dfdb8ba15a134f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:10:11.994066 containerd[1444]: time="2025-02-13T19:10:11.994024727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,} returns sandbox id \"6460a3c6a86fb0924b39d2325cb92cd3a5fcfb8f9fc4381a315a7426d84cc787\"" Feb 13 19:10:11.995112 kubelet[2166]: E0213 19:10:11.995076 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:11.995772 containerd[1444]: time="2025-02-13T19:10:11.995731895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c3b07c1af91a5208b20fe9819d3eca2f24974a732a3917abba4f7fcd78a8185\"" Feb 13 19:10:11.996252 kubelet[2166]: E0213 19:10:11.996228 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:11.997743 containerd[1444]: time="2025-02-13T19:10:11.997639287Z" level=info msg="CreateContainer within sandbox \"0c3b07c1af91a5208b20fe9819d3eca2f24974a732a3917abba4f7fcd78a8185\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:10:11.998399 containerd[1444]: time="2025-02-13T19:10:11.998318846Z" level=info msg="CreateContainer within sandbox \"6460a3c6a86fb0924b39d2325cb92cd3a5fcfb8f9fc4381a315a7426d84cc787\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:10:12.007271 containerd[1444]: time="2025-02-13T19:10:12.007219560Z" level=info msg="CreateContainer within sandbox \"54f294d0603f866119d4cf9fa4c4c0ce1adeb05bdbf7e668b7dfdb8ba15a134f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a51f75987e93766daece1da4feeb93cd84373e9773672054105e661e29e95ed5\"" Feb 13 19:10:12.008030 containerd[1444]: time="2025-02-13T19:10:12.007941321Z" level=info msg="StartContainer for \"a51f75987e93766daece1da4feeb93cd84373e9773672054105e661e29e95ed5\"" Feb 13 19:10:12.014644 containerd[1444]: time="2025-02-13T19:10:12.014598406Z" level=info msg="CreateContainer within sandbox \"0c3b07c1af91a5208b20fe9819d3eca2f24974a732a3917abba4f7fcd78a8185\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9bb0a1486488ffb4aa47965ed06d97f23cab08ebd57dc38f8bbf13027648cb07\"" Feb 13 19:10:12.015191 containerd[1444]: time="2025-02-13T19:10:12.015123199Z" level=info msg="StartContainer for \"9bb0a1486488ffb4aa47965ed06d97f23cab08ebd57dc38f8bbf13027648cb07\"" Feb 13 
19:10:12.032354 systemd[1]: Started cri-containerd-a51f75987e93766daece1da4feeb93cd84373e9773672054105e661e29e95ed5.scope - libcontainer container a51f75987e93766daece1da4feeb93cd84373e9773672054105e661e29e95ed5. Feb 13 19:10:12.035745 systemd[1]: Started cri-containerd-9bb0a1486488ffb4aa47965ed06d97f23cab08ebd57dc38f8bbf13027648cb07.scope - libcontainer container 9bb0a1486488ffb4aa47965ed06d97f23cab08ebd57dc38f8bbf13027648cb07. Feb 13 19:10:12.066636 containerd[1444]: time="2025-02-13T19:10:12.066449172Z" level=info msg="StartContainer for \"a51f75987e93766daece1da4feeb93cd84373e9773672054105e661e29e95ed5\" returns successfully" Feb 13 19:10:12.074821 containerd[1444]: time="2025-02-13T19:10:12.074776923Z" level=info msg="StartContainer for \"9bb0a1486488ffb4aa47965ed06d97f23cab08ebd57dc38f8bbf13027648cb07\" returns successfully" Feb 13 19:10:12.075184 containerd[1444]: time="2025-02-13T19:10:12.074799786Z" level=info msg="CreateContainer within sandbox \"6460a3c6a86fb0924b39d2325cb92cd3a5fcfb8f9fc4381a315a7426d84cc787\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4221eeb201ed07910a08c6b0298609e35f266b8d45d3d721ec4baa2395780b03\"" Feb 13 19:10:12.075743 containerd[1444]: time="2025-02-13T19:10:12.075677466Z" level=info msg="StartContainer for \"4221eeb201ed07910a08c6b0298609e35f266b8d45d3d721ec4baa2395780b03\"" Feb 13 19:10:12.117346 systemd[1]: Started cri-containerd-4221eeb201ed07910a08c6b0298609e35f266b8d45d3d721ec4baa2395780b03.scope - libcontainer container 4221eeb201ed07910a08c6b0298609e35f266b8d45d3d721ec4baa2395780b03. Feb 13 19:10:12.173605 containerd[1444]: time="2025-02-13T19:10:12.173502790Z" level=info msg="StartContainer for \"4221eeb201ed07910a08c6b0298609e35f266b8d45d3d721ec4baa2395780b03\" returns successfully" Feb 13 19:10:12.217803 kubelet[2166]: E0213 19:10:12.217384 2166 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:10:12.217803 kubelet[2166]: E0213 19:10:12.217546 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:12.225328 kubelet[2166]: E0213 19:10:12.220400 2166 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:10:12.225328 kubelet[2166]: E0213 19:10:12.220516 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:12.226452 kubelet[2166]: E0213 19:10:12.226319 2166 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:10:12.226491 kubelet[2166]: E0213 19:10:12.226458 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:12.232424 kubelet[2166]: E0213 19:10:12.232297 2166 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.80:6443: connect: connection 
refused" logger="UnhandledError" Feb 13 19:10:13.225636 kubelet[2166]: E0213 19:10:13.225558 2166 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:10:13.226024 kubelet[2166]: E0213 19:10:13.225673 2166 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:10:13.226024 kubelet[2166]: E0213 19:10:13.225714 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:13.226024 kubelet[2166]: E0213 19:10:13.225787 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:13.420952 kubelet[2166]: I0213 19:10:13.420269 2166 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:10:13.960851 kubelet[2166]: E0213 19:10:13.960797 2166 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 19:10:14.019612 kubelet[2166]: E0213 19:10:14.019572 2166 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:10:14.019722 kubelet[2166]: E0213 19:10:14.019714 2166 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:14.119992 kubelet[2166]: I0213 19:10:14.119917 2166 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 19:10:14.184786 kubelet[2166]: I0213 19:10:14.184740 2166 apiserver.go:52] "Watching apiserver" Feb 13 19:10:14.192605 kubelet[2166]: I0213 19:10:14.192553 2166 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:10:14.192605 kubelet[2166]: I0213 19:10:14.192559 2166 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 19:10:14.200196 kubelet[2166]: E0213 19:10:14.200146 2166 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 13 19:10:14.200196 kubelet[2166]: I0213 19:10:14.200194 2166 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:10:14.202458 kubelet[2166]: E0213 19:10:14.202413 2166 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:10:14.202458 kubelet[2166]: I0213 19:10:14.202450 2166 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 19:10:14.204348 kubelet[2166]: E0213 19:10:14.204315 2166 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Feb 13 19:10:15.870011 systemd[1]: Reloading requested from client PID 2442 ('systemctl') (unit 
session-7.scope)... Feb 13 19:10:15.870029 systemd[1]: Reloading... Feb 13 19:10:15.937207 zram_generator::config[2480]: No configuration found. Feb 13 19:10:16.019721 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:10:16.083831 systemd[1]: Reloading finished in 213 ms. Feb 13 19:10:16.114725 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:10:16.130293 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:10:16.130532 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:10:16.130586 systemd[1]: kubelet.service: Consumed 1.172s CPU time, 122.8M memory peak, 0B memory swap peak. Feb 13 19:10:16.142431 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:10:16.244220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:10:16.255577 (kubelet)[2523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:10:16.301946 kubelet[2523]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:10:16.301946 kubelet[2523]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:10:16.301946 kubelet[2523]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:10:16.302342 kubelet[2523]: I0213 19:10:16.301997 2523 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:10:16.309124 kubelet[2523]: I0213 19:10:16.308821 2523 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:10:16.309124 kubelet[2523]: I0213 19:10:16.308861 2523 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:10:16.309806 kubelet[2523]: I0213 19:10:16.309736 2523 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:10:16.311239 kubelet[2523]: I0213 19:10:16.311216 2523 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:10:16.314202 kubelet[2523]: I0213 19:10:16.314164 2523 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:10:16.317878 kubelet[2523]: E0213 19:10:16.317834 2523 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:10:16.317878 kubelet[2523]: I0213 19:10:16.317879 2523 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:10:16.320551 kubelet[2523]: I0213 19:10:16.320510 2523 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:10:16.320792 kubelet[2523]: I0213 19:10:16.320759 2523 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:10:16.321002 kubelet[2523]: I0213 19:10:16.320788 2523 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:10:16.321002 kubelet[2523]: I0213 19:10:16.321000 2523 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:10:16.321125 kubelet[2523]: I0213 19:10:16.321010 2523 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:10:16.321125 kubelet[2523]: I0213 19:10:16.321059 2523 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:10:16.321260 kubelet[2523]: I0213 19:10:16.321238 2523 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:10:16.321260 kubelet[2523]: I0213 19:10:16.321253 2523 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:10:16.321317 kubelet[2523]: I0213 19:10:16.321272 2523 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:10:16.321317 kubelet[2523]: I0213 19:10:16.321295 2523 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:10:16.323143 kubelet[2523]: I0213 19:10:16.322175 2523 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:10:16.323143 kubelet[2523]: I0213 19:10:16.322726 2523 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:10:16.326184 kubelet[2523]: I0213 19:10:16.324484 2523 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:10:16.326184 kubelet[2523]: I0213 19:10:16.324531 2523 server.go:1287] "Started kubelet" Feb 13 19:10:16.327479 kubelet[2523]: I0213 19:10:16.327443 2523 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:10:16.331797 kubelet[2523]: I0213 19:10:16.331748 2523 server.go:169] "Starting to 
listen" address="0.0.0.0" port=10250 Feb 13 19:10:16.332053 kubelet[2523]: I0213 19:10:16.332008 2523 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:10:16.332793 kubelet[2523]: I0213 19:10:16.332765 2523 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:10:16.332915 kubelet[2523]: I0213 19:10:16.332889 2523 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:10:16.332986 kubelet[2523]: I0213 19:10:16.332872 2523 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:10:16.333059 kubelet[2523]: E0213 19:10:16.333035 2523 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:10:16.333393 kubelet[2523]: I0213 19:10:16.333373 2523 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:10:16.333659 kubelet[2523]: I0213 19:10:16.333645 2523 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:10:16.337463 kubelet[2523]: I0213 19:10:16.337417 2523 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:10:16.338464 kubelet[2523]: I0213 19:10:16.338433 2523 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:10:16.338565 kubelet[2523]: I0213 19:10:16.338542 2523 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:10:16.341374 kubelet[2523]: I0213 19:10:16.341343 2523 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:10:16.347095 kubelet[2523]: I0213 19:10:16.346919 2523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:10:16.348488 kubelet[2523]: I0213 19:10:16.348456 2523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:10:16.348488 kubelet[2523]: I0213 19:10:16.348484 2523 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:10:16.348951 kubelet[2523]: I0213 19:10:16.348504 2523 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 13 19:10:16.348951 kubelet[2523]: I0213 19:10:16.348512 2523 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:10:16.348951 kubelet[2523]: E0213 19:10:16.348558 2523 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:10:16.357215 kubelet[2523]: E0213 19:10:16.357183 2523 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:10:16.382619 kubelet[2523]: I0213 19:10:16.382509 2523 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:10:16.382619 kubelet[2523]: I0213 19:10:16.382532 2523 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:10:16.382619 kubelet[2523]: I0213 19:10:16.382570 2523 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:10:16.382765 kubelet[2523]: I0213 19:10:16.382748 2523 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:10:16.382786 kubelet[2523]: I0213 19:10:16.382759 2523 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:10:16.382786 kubelet[2523]: I0213 19:10:16.382778 2523 policy_none.go:49] "None policy: Start" Feb 13 19:10:16.382786 kubelet[2523]: I0213 19:10:16.382786 2523 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:10:16.382862 kubelet[2523]: I0213 19:10:16.382795 2523 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:10:16.382933 kubelet[2523]: I0213 19:10:16.382903 2523 state_mem.go:75] "Updated machine memory state" Feb 13 19:10:16.387359 kubelet[2523]: I0213 19:10:16.387329 2523 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:10:16.387843 kubelet[2523]: I0213 19:10:16.387643 2523 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:10:16.387843 kubelet[2523]: I0213 19:10:16.387663 2523 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:10:16.387951 kubelet[2523]: I0213 19:10:16.387900 2523 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:10:16.390529 kubelet[2523]: E0213 19:10:16.390493 2523 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 13 19:10:16.449241 kubelet[2523]: I0213 19:10:16.449195 2523 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 19:10:16.449641 kubelet[2523]: I0213 19:10:16.449507 2523 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:10:16.449641 kubelet[2523]: I0213 19:10:16.449580 2523 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 19:10:16.491697 kubelet[2523]: I0213 19:10:16.491667 2523 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:10:16.498629 kubelet[2523]: I0213 19:10:16.498573 2523 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Feb 13 19:10:16.498770 kubelet[2523]: I0213 19:10:16.498667 2523 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 19:10:16.635009 kubelet[2523]: I0213 19:10:16.634888 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/898e30b0d6d564836c93d34bfe274e28-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"898e30b0d6d564836c93d34bfe274e28\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:10:16.635009 kubelet[2523]: I0213 19:10:16.634926 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:10:16.635009 kubelet[2523]: I0213 19:10:16.634949 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/898e30b0d6d564836c93d34bfe274e28-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"898e30b0d6d564836c93d34bfe274e28\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:10:16.635009 kubelet[2523]: I0213 19:10:16.634965 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/898e30b0d6d564836c93d34bfe274e28-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"898e30b0d6d564836c93d34bfe274e28\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:10:16.635009 kubelet[2523]: I0213 19:10:16.634981 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:10:16.635241 kubelet[2523]: I0213 19:10:16.634997 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:10:16.635241 kubelet[2523]: I0213 19:10:16.635037 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:10:16.635241 kubelet[2523]: I0213 19:10:16.635084 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:10:16.635241 kubelet[2523]: I0213 19:10:16.635106 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:10:16.760237 kubelet[2523]: E0213 19:10:16.760197 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:16.761272 kubelet[2523]: E0213 19:10:16.761223 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:16.761362 kubelet[2523]: E0213 19:10:16.761329 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:16.870975 sudo[2561]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 19:10:16.871295 sudo[2561]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 19:10:17.289326 sudo[2561]: pam_unix(sudo:session): session closed for user root Feb 13 19:10:17.322551 kubelet[2523]: I0213 19:10:17.322498 2523 apiserver.go:52] "Watching apiserver" Feb 13 19:10:17.333972 kubelet[2523]: I0213 19:10:17.333933 2523 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:10:17.367871 kubelet[2523]: E0213 19:10:17.367714 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:17.367871 kubelet[2523]: E0213 19:10:17.367722 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:17.369945 kubelet[2523]: E0213 19:10:17.368909 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:17.397815 kubelet[2523]: I0213 19:10:17.397754 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.397735497 podStartE2EDuration="1.397735497s" podCreationTimestamp="2025-02-13 19:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:10:17.397466125 +0000 UTC m=+1.137023252" watchObservedRunningTime="2025-02-13 
19:10:17.397735497 +0000 UTC m=+1.137292584" Feb 13 19:10:17.397990 kubelet[2523]: I0213 19:10:17.397869 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.397865263 podStartE2EDuration="1.397865263s" podCreationTimestamp="2025-02-13 19:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:10:17.389358455 +0000 UTC m=+1.128915542" watchObservedRunningTime="2025-02-13 19:10:17.397865263 +0000 UTC m=+1.137422350" Feb 13 19:10:17.419751 kubelet[2523]: I0213 19:10:17.419685 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.419663924 podStartE2EDuration="1.419663924s" podCreationTimestamp="2025-02-13 19:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:10:17.405183219 +0000 UTC m=+1.144740346" watchObservedRunningTime="2025-02-13 19:10:17.419663924 +0000 UTC m=+1.159221011" Feb 13 19:10:18.369169 kubelet[2523]: E0213 19:10:18.369127 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:18.370158 kubelet[2523]: E0213 19:10:18.369752 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:18.697360 sudo[1618]: pam_unix(sudo:session): session closed for user root Feb 13 19:10:18.698981 sshd[1617]: Connection closed by 10.0.0.1 port 39200 Feb 13 19:10:18.699524 sshd-session[1615]: pam_unix(sshd:session): session closed for user core Feb 13 19:10:18.702823 systemd[1]: sshd@6-10.0.0.80:22-10.0.0.1:39200.service: Deactivated successfully. Feb 13 19:10:18.704687 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:10:18.704923 systemd[1]: session-7.scope: Consumed 6.847s CPU time, 155.4M memory peak, 0B memory swap peak. Feb 13 19:10:18.709550 systemd-logind[1419]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:10:18.713094 systemd-logind[1419]: Removed session 7. Feb 13 19:10:19.370608 kubelet[2523]: E0213 19:10:19.370544 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:19.371199 kubelet[2523]: E0213 19:10:19.370816 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:21.854060 kubelet[2523]: E0213 19:10:21.854016 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:22.656368 kubelet[2523]: I0213 19:10:22.656326 2523 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:10:22.656704 containerd[1444]: time="2025-02-13T19:10:22.656666953Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
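The podStartSLOduration in the entries above appears to be simply watchObservedRunningTime minus podCreationTimestamp; static pods report no image pulls, hence the zeroed firstStartedPulling/lastFinishedPulling values. The arithmetic for kube-apiserver-localhost, using only the logged timestamps:

// startup_slo.go - reproduces the podStartSLOduration arithmetic from the
// pod_startup_latency_tracker entries above, using the logged timestamps.
package main

import (
	"fmt"
	"time"
)

func main() {
	created, err := time.Parse(time.RFC3339, "2025-02-13T19:10:16Z")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(time.RFC3339Nano, "2025-02-13T19:10:17.397735497Z")
	if err != nil {
		panic(err)
	}
	// Prints 1.397735497s, matching the logged podStartSLOduration.
	fmt.Println(observed.Sub(created))
}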
Feb 13 19:10:22.657372 kubelet[2523]: I0213 19:10:22.656898 2523 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:10:23.315935 systemd[1]: Created slice kubepods-burstable-pod3311fe6e_28b9_4c53_9640_842a0d4502be.slice - libcontainer container kubepods-burstable-pod3311fe6e_28b9_4c53_9640_842a0d4502be.slice. Feb 13 19:10:23.320071 systemd[1]: Created slice kubepods-besteffort-pod07c52943_fa5d_4e30_a7e6_7742b521d773.slice - libcontainer container kubepods-besteffort-pod07c52943_fa5d_4e30_a7e6_7742b521d773.slice. Feb 13 19:10:23.377457 kubelet[2523]: I0213 19:10:23.377256 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-cni-path\") pod \"cilium-zcn87\" (UID: \"3311fe6e-28b9-4c53-9640-842a0d4502be\") " pod="kube-system/cilium-zcn87" Feb 13 19:10:23.377457 kubelet[2523]: I0213 19:10:23.377300 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/07c52943-fa5d-4e30-a7e6-7742b521d773-xtables-lock\") pod \"kube-proxy-8wtx2\" (UID: \"07c52943-fa5d-4e30-a7e6-7742b521d773\") " pod="kube-system/kube-proxy-8wtx2" Feb 13 19:10:23.377457 kubelet[2523]: I0213 19:10:23.377324 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07c52943-fa5d-4e30-a7e6-7742b521d773-lib-modules\") pod \"kube-proxy-8wtx2\" (UID: \"07c52943-fa5d-4e30-a7e6-7742b521d773\") " pod="kube-system/kube-proxy-8wtx2" Feb 13 19:10:23.377457 kubelet[2523]: I0213 19:10:23.377339 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-cilium-run\") pod \"cilium-zcn87\" (UID: \"3311fe6e-28b9-4c53-9640-842a0d4502be\") " pod="kube-system/cilium-zcn87" Feb 13 19:10:23.377457 kubelet[2523]: I0213 19:10:23.377356 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3311fe6e-28b9-4c53-9640-842a0d4502be-clustermesh-secrets\") pod \"cilium-zcn87\" (UID: \"3311fe6e-28b9-4c53-9640-842a0d4502be\") " pod="kube-system/cilium-zcn87" Feb 13 19:10:23.377457 kubelet[2523]: I0213 19:10:23.377372 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-host-proc-sys-kernel\") pod \"cilium-zcn87\" (UID: \"3311fe6e-28b9-4c53-9640-842a0d4502be\") " pod="kube-system/cilium-zcn87" Feb 13 19:10:23.377938 kubelet[2523]: I0213 19:10:23.377385 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3311fe6e-28b9-4c53-9640-842a0d4502be-hubble-tls\") pod \"cilium-zcn87\" (UID: \"3311fe6e-28b9-4c53-9640-842a0d4502be\") " pod="kube-system/cilium-zcn87" Feb 13 19:10:23.377938 kubelet[2523]: I0213 19:10:23.377402 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-etc-cni-netd\") pod \"cilium-zcn87\" (UID: \"3311fe6e-28b9-4c53-9640-842a0d4502be\") " 
pod="kube-system/cilium-zcn87" Feb 13 19:10:23.377938 kubelet[2523]: I0213 19:10:23.377427 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-xtables-lock\") pod \"cilium-zcn87\" (UID: \"3311fe6e-28b9-4c53-9640-842a0d4502be\") " pod="kube-system/cilium-zcn87" Feb 13 19:10:23.377938 kubelet[2523]: I0213 19:10:23.377442 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn4mc\" (UniqueName: \"kubernetes.io/projected/07c52943-fa5d-4e30-a7e6-7742b521d773-kube-api-access-qn4mc\") pod \"kube-proxy-8wtx2\" (UID: \"07c52943-fa5d-4e30-a7e6-7742b521d773\") " pod="kube-system/kube-proxy-8wtx2" Feb 13 19:10:23.377938 kubelet[2523]: I0213 19:10:23.377457 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-cilium-cgroup\") pod \"cilium-zcn87\" (UID: \"3311fe6e-28b9-4c53-9640-842a0d4502be\") " pod="kube-system/cilium-zcn87" Feb 13 19:10:23.377938 kubelet[2523]: I0213 19:10:23.377477 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-lib-modules\") pod \"cilium-zcn87\" (UID: \"3311fe6e-28b9-4c53-9640-842a0d4502be\") " pod="kube-system/cilium-zcn87" Feb 13 19:10:23.378105 kubelet[2523]: I0213 19:10:23.377493 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3311fe6e-28b9-4c53-9640-842a0d4502be-cilium-config-path\") pod \"cilium-zcn87\" (UID: \"3311fe6e-28b9-4c53-9640-842a0d4502be\") " pod="kube-system/cilium-zcn87" Feb 13 19:10:23.378105 kubelet[2523]: I0213 19:10:23.377507 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/07c52943-fa5d-4e30-a7e6-7742b521d773-kube-proxy\") pod \"kube-proxy-8wtx2\" (UID: \"07c52943-fa5d-4e30-a7e6-7742b521d773\") " pod="kube-system/kube-proxy-8wtx2" Feb 13 19:10:23.378105 kubelet[2523]: I0213 19:10:23.377522 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-bpf-maps\") pod \"cilium-zcn87\" (UID: \"3311fe6e-28b9-4c53-9640-842a0d4502be\") " pod="kube-system/cilium-zcn87" Feb 13 19:10:23.378105 kubelet[2523]: I0213 19:10:23.377549 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-hostproc\") pod \"cilium-zcn87\" (UID: \"3311fe6e-28b9-4c53-9640-842a0d4502be\") " pod="kube-system/cilium-zcn87" Feb 13 19:10:23.378105 kubelet[2523]: I0213 19:10:23.377563 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-host-proc-sys-net\") pod \"cilium-zcn87\" (UID: \"3311fe6e-28b9-4c53-9640-842a0d4502be\") " pod="kube-system/cilium-zcn87" Feb 13 19:10:23.378105 kubelet[2523]: I0213 19:10:23.377578 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-zw584\" (UniqueName: \"kubernetes.io/projected/3311fe6e-28b9-4c53-9640-842a0d4502be-kube-api-access-zw584\") pod \"cilium-zcn87\" (UID: \"3311fe6e-28b9-4c53-9640-842a0d4502be\") " pod="kube-system/cilium-zcn87" Feb 13 19:10:23.602246 systemd[1]: Created slice kubepods-besteffort-pod53beb2b7_1c5b_4a76_a054_da7a423da2ba.slice - libcontainer container kubepods-besteffort-pod53beb2b7_1c5b_4a76_a054_da7a423da2ba.slice. Feb 13 19:10:23.626509 kubelet[2523]: E0213 19:10:23.624800 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:23.627294 containerd[1444]: time="2025-02-13T19:10:23.627256034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zcn87,Uid:3311fe6e-28b9-4c53-9640-842a0d4502be,Namespace:kube-system,Attempt:0,}" Feb 13 19:10:23.629762 kubelet[2523]: E0213 19:10:23.629546 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:23.630412 containerd[1444]: time="2025-02-13T19:10:23.630336049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8wtx2,Uid:07c52943-fa5d-4e30-a7e6-7742b521d773,Namespace:kube-system,Attempt:0,}" Feb 13 19:10:23.654922 containerd[1444]: time="2025-02-13T19:10:23.654808964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:10:23.654922 containerd[1444]: time="2025-02-13T19:10:23.654897767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:10:23.655098 containerd[1444]: time="2025-02-13T19:10:23.654914487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:10:23.655098 containerd[1444]: time="2025-02-13T19:10:23.655054772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:10:23.658365 containerd[1444]: time="2025-02-13T19:10:23.657372483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:10:23.658365 containerd[1444]: time="2025-02-13T19:10:23.657421165Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:10:23.658365 containerd[1444]: time="2025-02-13T19:10:23.657435725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:10:23.658365 containerd[1444]: time="2025-02-13T19:10:23.657508408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:10:23.679361 systemd[1]: Started cri-containerd-d532dd95161d8f38937a7eb212c15b0ca3f4d31326afa20758271e6d0c5e0062.scope - libcontainer container d532dd95161d8f38937a7eb212c15b0ca3f4d31326afa20758271e6d0c5e0062. 
Feb 13 19:10:23.679974 kubelet[2523]: I0213 19:10:23.679943 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fcrz\" (UniqueName: \"kubernetes.io/projected/53beb2b7-1c5b-4a76-a054-da7a423da2ba-kube-api-access-2fcrz\") pod \"cilium-operator-6c4d7847fc-l52mg\" (UID: \"53beb2b7-1c5b-4a76-a054-da7a423da2ba\") " pod="kube-system/cilium-operator-6c4d7847fc-l52mg" Feb 13 19:10:23.680341 kubelet[2523]: I0213 19:10:23.680320 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53beb2b7-1c5b-4a76-a054-da7a423da2ba-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-l52mg\" (UID: \"53beb2b7-1c5b-4a76-a054-da7a423da2ba\") " pod="kube-system/cilium-operator-6c4d7847fc-l52mg" Feb 13 19:10:23.682525 systemd[1]: Started cri-containerd-4a8a972af6cfdde7a5b7ace0979bfabb4f5832c17730c66cc6b67e816982ce78.scope - libcontainer container 4a8a972af6cfdde7a5b7ace0979bfabb4f5832c17730c66cc6b67e816982ce78. Feb 13 19:10:23.704126 containerd[1444]: time="2025-02-13T19:10:23.704083925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zcn87,Uid:3311fe6e-28b9-4c53-9640-842a0d4502be,Namespace:kube-system,Attempt:0,} returns sandbox id \"d532dd95161d8f38937a7eb212c15b0ca3f4d31326afa20758271e6d0c5e0062\"" Feb 13 19:10:23.705186 kubelet[2523]: E0213 19:10:23.704961 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:23.706462 containerd[1444]: time="2025-02-13T19:10:23.706364036Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:10:23.714757 containerd[1444]: time="2025-02-13T19:10:23.714639211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8wtx2,Uid:07c52943-fa5d-4e30-a7e6-7742b521d773,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a8a972af6cfdde7a5b7ace0979bfabb4f5832c17730c66cc6b67e816982ce78\"" Feb 13 19:10:23.715426 kubelet[2523]: E0213 19:10:23.715392 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:23.719089 containerd[1444]: time="2025-02-13T19:10:23.718997786Z" level=info msg="CreateContainer within sandbox \"4a8a972af6cfdde7a5b7ace0979bfabb4f5832c17730c66cc6b67e816982ce78\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:10:23.739882 containerd[1444]: time="2025-02-13T19:10:23.739820709Z" level=info msg="CreateContainer within sandbox \"4a8a972af6cfdde7a5b7ace0979bfabb4f5832c17730c66cc6b67e816982ce78\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c178fe78b1b6dbdc08994bb3b6285f57ae8ca48fc4f8ce71b53877384c40be50\"" Feb 13 19:10:23.740520 containerd[1444]: time="2025-02-13T19:10:23.740482249Z" level=info msg="StartContainer for \"c178fe78b1b6dbdc08994bb3b6285f57ae8ca48fc4f8ce71b53877384c40be50\"" Feb 13 19:10:23.776341 systemd[1]: Started cri-containerd-c178fe78b1b6dbdc08994bb3b6285f57ae8ca48fc4f8ce71b53877384c40be50.scope - libcontainer container c178fe78b1b6dbdc08994bb3b6285f57ae8ca48fc4f8ce71b53877384c40be50. 
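The CreateContainer/StartContainer pair above follows the sandbox creation: CreateContainer returns a container id inside the sandbox, and StartContainer is then called with that id. A continuation of the earlier CRI sketch under the same assumptions; the image reference is illustrative (the log does not name it here), and a real caller also passes the sandbox config:

// create_start.go - sketch of the CreateContainer -> StartContainer sequence
// visible above, with assumptions marked in comments.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Sandbox id as returned by RunPodSandbox above (kube-proxy-8wtx2).
	sandboxID := "4a8a972af6cfdde7a5b7ace0979bfabb4f5832c17730c66cc6b67e816982ce78"

	created, err := client.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
			// Hypothetical image reference; not taken from the log.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.32.0"},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("container id:", created.ContainerId)

	if _, err := client.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	}); err != nil {
		panic(err)
	}
}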
Feb 13 19:10:23.812117 containerd[1444]: time="2025-02-13T19:10:23.812062779Z" level=info msg="StartContainer for \"c178fe78b1b6dbdc08994bb3b6285f57ae8ca48fc4f8ce71b53877384c40be50\" returns successfully" Feb 13 19:10:23.908140 kubelet[2523]: E0213 19:10:23.908048 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:23.908537 containerd[1444]: time="2025-02-13T19:10:23.908498916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-l52mg,Uid:53beb2b7-1c5b-4a76-a054-da7a423da2ba,Namespace:kube-system,Attempt:0,}" Feb 13 19:10:23.934116 containerd[1444]: time="2025-02-13T19:10:23.932991232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:10:23.934116 containerd[1444]: time="2025-02-13T19:10:23.933048714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:10:23.934116 containerd[1444]: time="2025-02-13T19:10:23.933060514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:10:23.934116 containerd[1444]: time="2025-02-13T19:10:23.933130476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:10:23.950359 systemd[1]: Started cri-containerd-05424278ba1609d864f4c50835ff7290313cace2fffd624318780e735975fc91.scope - libcontainer container 05424278ba1609d864f4c50835ff7290313cace2fffd624318780e735975fc91. Feb 13 19:10:23.989339 containerd[1444]: time="2025-02-13T19:10:23.989299570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-l52mg,Uid:53beb2b7-1c5b-4a76-a054-da7a423da2ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"05424278ba1609d864f4c50835ff7290313cace2fffd624318780e735975fc91\"" Feb 13 19:10:23.990454 kubelet[2523]: E0213 19:10:23.990165 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:24.382469 kubelet[2523]: E0213 19:10:24.382418 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:28.232091 kubelet[2523]: E0213 19:10:28.231999 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:28.232337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount652092806.mount: Deactivated successfully. 
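The mount unit that just deactivated, var-lib-containerd-tmpmounts-containerd\x2dmount652092806.mount, is systemd's escaped name for the path /var/lib/containerd/tmpmounts/containerd-mount652092806: "/" separators become "-", and a literal "-" inside a path component is escaped as \x2d. A minimal unescaping sketch covering just the \x2d case seen here (the full systemd-escape rules handle more characters):

    import "strings"

    // unitToPath reverses the mount-unit name escaping for names that
    // only use the \x2d escape, like the containerd tmpmount above.
    func unitToPath(unit string) string {
        s := strings.TrimSuffix(unit, ".mount")
        s = strings.ReplaceAll(s, `\x2d`, "\x00") // protect escaped dashes
        s = strings.ReplaceAll(s, "-", "/")       // remaining dashes separate components
        s = strings.ReplaceAll(s, "\x00", "-")    // restore literal dashes
        return "/" + s
    }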
Feb 13 19:10:28.254138 kubelet[2523]: I0213 19:10:28.254078 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8wtx2" podStartSLOduration=5.254059979 podStartE2EDuration="5.254059979s" podCreationTimestamp="2025-02-13 19:10:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:10:24.391556192 +0000 UTC m=+8.131113279" watchObservedRunningTime="2025-02-13 19:10:28.254059979 +0000 UTC m=+11.993617026" Feb 13 19:10:28.413807 kubelet[2523]: E0213 19:10:28.413629 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:28.479018 kubelet[2523]: E0213 19:10:28.478372 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:29.412415 kubelet[2523]: E0213 19:10:29.412377 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:30.093284 update_engine[1421]: I20250213 19:10:30.093212 1421 update_attempter.cc:509] Updating boot flags... Feb 13 19:10:30.219198 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2922) Feb 13 19:10:30.268209 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2925) Feb 13 19:10:30.714416 containerd[1444]: time="2025-02-13T19:10:30.714354789Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:10:30.714970 containerd[1444]: time="2025-02-13T19:10:30.714919921Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 19:10:30.715734 containerd[1444]: time="2025-02-13T19:10:30.715697698Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:10:30.717425 containerd[1444]: time="2025-02-13T19:10:30.717284852Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.010859935s" Feb 13 19:10:30.717425 containerd[1444]: time="2025-02-13T19:10:30.717323373Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 19:10:30.721081 containerd[1444]: time="2025-02-13T19:10:30.720839168Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:10:30.721586 containerd[1444]: time="2025-02-13T19:10:30.721520783Z" level=info msg="CreateContainer within sandbox 
\"d532dd95161d8f38937a7eb212c15b0ca3f4d31326afa20758271e6d0c5e0062\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:10:30.750241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3096219660.mount: Deactivated successfully. Feb 13 19:10:30.752380 containerd[1444]: time="2025-02-13T19:10:30.752328922Z" level=info msg="CreateContainer within sandbox \"d532dd95161d8f38937a7eb212c15b0ca3f4d31326afa20758271e6d0c5e0062\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"04ac609294350f762df65b69a75ebc5d75adb2fe537f6a8b8a64b460836ef74b\"" Feb 13 19:10:30.754143 containerd[1444]: time="2025-02-13T19:10:30.753138139Z" level=info msg="StartContainer for \"04ac609294350f762df65b69a75ebc5d75adb2fe537f6a8b8a64b460836ef74b\"" Feb 13 19:10:30.772429 systemd[1]: run-containerd-runc-k8s.io-04ac609294350f762df65b69a75ebc5d75adb2fe537f6a8b8a64b460836ef74b-runc.jJJYjx.mount: Deactivated successfully. Feb 13 19:10:30.785353 systemd[1]: Started cri-containerd-04ac609294350f762df65b69a75ebc5d75adb2fe537f6a8b8a64b460836ef74b.scope - libcontainer container 04ac609294350f762df65b69a75ebc5d75adb2fe537f6a8b8a64b460836ef74b. Feb 13 19:10:30.807630 containerd[1444]: time="2025-02-13T19:10:30.807587184Z" level=info msg="StartContainer for \"04ac609294350f762df65b69a75ebc5d75adb2fe537f6a8b8a64b460836ef74b\" returns successfully" Feb 13 19:10:30.906736 systemd[1]: cri-containerd-04ac609294350f762df65b69a75ebc5d75adb2fe537f6a8b8a64b460836ef74b.scope: Deactivated successfully. Feb 13 19:10:30.938142 containerd[1444]: time="2025-02-13T19:10:30.933235352Z" level=info msg="shim disconnected" id=04ac609294350f762df65b69a75ebc5d75adb2fe537f6a8b8a64b460836ef74b namespace=k8s.io Feb 13 19:10:30.938142 containerd[1444]: time="2025-02-13T19:10:30.938139417Z" level=warning msg="cleaning up after shim disconnected" id=04ac609294350f762df65b69a75ebc5d75adb2fe537f6a8b8a64b460836ef74b namespace=k8s.io Feb 13 19:10:30.938378 containerd[1444]: time="2025-02-13T19:10:30.938168257Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:10:31.431934 kubelet[2523]: E0213 19:10:31.431869 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:31.435089 containerd[1444]: time="2025-02-13T19:10:31.434194860Z" level=info msg="CreateContainer within sandbox \"d532dd95161d8f38937a7eb212c15b0ca3f4d31326afa20758271e6d0c5e0062\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:10:31.462130 containerd[1444]: time="2025-02-13T19:10:31.462079668Z" level=info msg="CreateContainer within sandbox \"d532dd95161d8f38937a7eb212c15b0ca3f4d31326afa20758271e6d0c5e0062\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f953dc178d3c835b93d382b1840cc91804747febab68e8685b416ac04fe78621\"" Feb 13 19:10:31.466337 containerd[1444]: time="2025-02-13T19:10:31.466021948Z" level=info msg="StartContainer for \"f953dc178d3c835b93d382b1840cc91804747febab68e8685b416ac04fe78621\"" Feb 13 19:10:31.495371 systemd[1]: Started cri-containerd-f953dc178d3c835b93d382b1840cc91804747febab68e8685b416ac04fe78621.scope - libcontainer container f953dc178d3c835b93d382b1840cc91804747febab68e8685b416ac04fe78621. 
Feb 13 19:10:31.523695 containerd[1444]: time="2025-02-13T19:10:31.522863305Z" level=info msg="StartContainer for \"f953dc178d3c835b93d382b1840cc91804747febab68e8685b416ac04fe78621\" returns successfully" Feb 13 19:10:31.546670 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:10:31.546921 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:10:31.546996 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:10:31.552587 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:10:31.553019 systemd[1]: cri-containerd-f953dc178d3c835b93d382b1840cc91804747febab68e8685b416ac04fe78621.scope: Deactivated successfully. Feb 13 19:10:31.571454 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:10:31.598975 containerd[1444]: time="2025-02-13T19:10:31.598734650Z" level=info msg="shim disconnected" id=f953dc178d3c835b93d382b1840cc91804747febab68e8685b416ac04fe78621 namespace=k8s.io Feb 13 19:10:31.598975 containerd[1444]: time="2025-02-13T19:10:31.598791051Z" level=warning msg="cleaning up after shim disconnected" id=f953dc178d3c835b93d382b1840cc91804747febab68e8685b416ac04fe78621 namespace=k8s.io Feb 13 19:10:31.598975 containerd[1444]: time="2025-02-13T19:10:31.598799011Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:10:31.748485 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04ac609294350f762df65b69a75ebc5d75adb2fe537f6a8b8a64b460836ef74b-rootfs.mount: Deactivated successfully. Feb 13 19:10:31.871213 kubelet[2523]: E0213 19:10:31.870677 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:32.137746 containerd[1444]: time="2025-02-13T19:10:32.137690289Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:10:32.138510 containerd[1444]: time="2025-02-13T19:10:32.138275901Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 19:10:32.139126 containerd[1444]: time="2025-02-13T19:10:32.139082076Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:10:32.140779 containerd[1444]: time="2025-02-13T19:10:32.140740589Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.419860539s" Feb 13 19:10:32.140826 containerd[1444]: time="2025-02-13T19:10:32.140779229Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 19:10:32.143182 containerd[1444]: time="2025-02-13T19:10:32.143131955Z" level=info msg="CreateContainer within sandbox 
\"05424278ba1609d864f4c50835ff7290313cace2fffd624318780e735975fc91\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:10:32.154699 containerd[1444]: time="2025-02-13T19:10:32.154645778Z" level=info msg="CreateContainer within sandbox \"05424278ba1609d864f4c50835ff7290313cace2fffd624318780e735975fc91\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7e252999c2d9e3b3dac877be4cf1621ea6ebb580b42c365859f70a2ba48c1b45\"" Feb 13 19:10:32.156542 containerd[1444]: time="2025-02-13T19:10:32.155987044Z" level=info msg="StartContainer for \"7e252999c2d9e3b3dac877be4cf1621ea6ebb580b42c365859f70a2ba48c1b45\"" Feb 13 19:10:32.184348 systemd[1]: Started cri-containerd-7e252999c2d9e3b3dac877be4cf1621ea6ebb580b42c365859f70a2ba48c1b45.scope - libcontainer container 7e252999c2d9e3b3dac877be4cf1621ea6ebb580b42c365859f70a2ba48c1b45. Feb 13 19:10:32.216059 containerd[1444]: time="2025-02-13T19:10:32.215997287Z" level=info msg="StartContainer for \"7e252999c2d9e3b3dac877be4cf1621ea6ebb580b42c365859f70a2ba48c1b45\" returns successfully" Feb 13 19:10:32.439387 kubelet[2523]: E0213 19:10:32.439283 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:32.443625 kubelet[2523]: E0213 19:10:32.443585 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:32.446041 containerd[1444]: time="2025-02-13T19:10:32.445991506Z" level=info msg="CreateContainer within sandbox \"d532dd95161d8f38937a7eb212c15b0ca3f4d31326afa20758271e6d0c5e0062\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:10:32.471983 containerd[1444]: time="2025-02-13T19:10:32.471936089Z" level=info msg="CreateContainer within sandbox \"d532dd95161d8f38937a7eb212c15b0ca3f4d31326afa20758271e6d0c5e0062\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"653924421c227ec11e4f889f23e5509f03e1a536106334fda581b80b52a45f08\"" Feb 13 19:10:32.472574 containerd[1444]: time="2025-02-13T19:10:32.472550180Z" level=info msg="StartContainer for \"653924421c227ec11e4f889f23e5509f03e1a536106334fda581b80b52a45f08\"" Feb 13 19:10:32.499643 kubelet[2523]: I0213 19:10:32.499565 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-l52mg" podStartSLOduration=1.349742987 podStartE2EDuration="9.499543344s" podCreationTimestamp="2025-02-13 19:10:23 +0000 UTC" firstStartedPulling="2025-02-13 19:10:23.991851089 +0000 UTC m=+7.731408176" lastFinishedPulling="2025-02-13 19:10:32.141651446 +0000 UTC m=+15.881208533" observedRunningTime="2025-02-13 19:10:32.470895188 +0000 UTC m=+16.210452275" watchObservedRunningTime="2025-02-13 19:10:32.499543344 +0000 UTC m=+16.239100431" Feb 13 19:10:32.518499 systemd[1]: Started cri-containerd-653924421c227ec11e4f889f23e5509f03e1a536106334fda581b80b52a45f08.scope - libcontainer container 653924421c227ec11e4f889f23e5509f03e1a536106334fda581b80b52a45f08. Feb 13 19:10:32.565923 containerd[1444]: time="2025-02-13T19:10:32.565846909Z" level=info msg="StartContainer for \"653924421c227ec11e4f889f23e5509f03e1a536106334fda581b80b52a45f08\" returns successfully" Feb 13 19:10:32.571401 systemd[1]: cri-containerd-653924421c227ec11e4f889f23e5509f03e1a536106334fda581b80b52a45f08.scope: Deactivated successfully. 
Feb 13 19:10:32.602203 containerd[1444]: time="2025-02-13T19:10:32.602112172Z" level=info msg="shim disconnected" id=653924421c227ec11e4f889f23e5509f03e1a536106334fda581b80b52a45f08 namespace=k8s.io Feb 13 19:10:32.602203 containerd[1444]: time="2025-02-13T19:10:32.602179253Z" level=warning msg="cleaning up after shim disconnected" id=653924421c227ec11e4f889f23e5509f03e1a536106334fda581b80b52a45f08 namespace=k8s.io Feb 13 19:10:32.602203 containerd[1444]: time="2025-02-13T19:10:32.602188973Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:10:33.448165 kubelet[2523]: E0213 19:10:33.448054 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:33.448165 kubelet[2523]: E0213 19:10:33.448100 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:33.450218 containerd[1444]: time="2025-02-13T19:10:33.450175282Z" level=info msg="CreateContainer within sandbox \"d532dd95161d8f38937a7eb212c15b0ca3f4d31326afa20758271e6d0c5e0062\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:10:33.483099 containerd[1444]: time="2025-02-13T19:10:33.482916927Z" level=info msg="CreateContainer within sandbox \"d532dd95161d8f38937a7eb212c15b0ca3f4d31326afa20758271e6d0c5e0062\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f59740f215f06a7a59a2ffc33d979afecde82c9b73dcafea096f26ca9b6ac21b\"" Feb 13 19:10:33.483602 containerd[1444]: time="2025-02-13T19:10:33.483521898Z" level=info msg="StartContainer for \"f59740f215f06a7a59a2ffc33d979afecde82c9b73dcafea096f26ca9b6ac21b\"" Feb 13 19:10:33.519355 systemd[1]: Started cri-containerd-f59740f215f06a7a59a2ffc33d979afecde82c9b73dcafea096f26ca9b6ac21b.scope - libcontainer container f59740f215f06a7a59a2ffc33d979afecde82c9b73dcafea096f26ca9b6ac21b. Feb 13 19:10:33.539395 systemd[1]: cri-containerd-f59740f215f06a7a59a2ffc33d979afecde82c9b73dcafea096f26ca9b6ac21b.scope: Deactivated successfully. 
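Each "shim disconnected / cleaning up after shim disconnected / cleaning up dead shim" triplet marks a container's runtime shim exiting after its short-lived process finished, with containerd reaping the shim when the connection drops. A client that wants the same exit event waits on the task rather than watching shim logs; a minimal sketch with the containerd Go client, where the socket path and the k8s.io namespace (the one CRI-managed containers live in) are standard defaults rather than values from this log:

    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func waitExit(id string) error {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            return err
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        container, err := client.LoadContainer(ctx, id)
        if err != nil {
            return err
        }
        task, err := container.Task(ctx, nil)
        if err != nil {
            return err
        }
        statusC, err := task.Wait(ctx) // resolves when the shim reports exit
        if err != nil {
            return err
        }
        code, _, err := (<-statusC).Result()
        if err != nil {
            return err
        }
        fmt.Printf("container %s exited with code %d\n", id, code)
        return nil
    }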
Feb 13 19:10:33.540458 containerd[1444]: time="2025-02-13T19:10:33.539861139Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3311fe6e_28b9_4c53_9640_842a0d4502be.slice/cri-containerd-f59740f215f06a7a59a2ffc33d979afecde82c9b73dcafea096f26ca9b6ac21b.scope/memory.events\": no such file or directory" Feb 13 19:10:33.542308 containerd[1444]: time="2025-02-13T19:10:33.542244823Z" level=info msg="StartContainer for \"f59740f215f06a7a59a2ffc33d979afecde82c9b73dcafea096f26ca9b6ac21b\" returns successfully" Feb 13 19:10:33.563260 containerd[1444]: time="2025-02-13T19:10:33.563204130Z" level=info msg="shim disconnected" id=f59740f215f06a7a59a2ffc33d979afecde82c9b73dcafea096f26ca9b6ac21b namespace=k8s.io Feb 13 19:10:33.563260 containerd[1444]: time="2025-02-13T19:10:33.563255091Z" level=warning msg="cleaning up after shim disconnected" id=f59740f215f06a7a59a2ffc33d979afecde82c9b73dcafea096f26ca9b6ac21b namespace=k8s.io Feb 13 19:10:33.563260 containerd[1444]: time="2025-02-13T19:10:33.563263211Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:10:33.748134 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f59740f215f06a7a59a2ffc33d979afecde82c9b73dcafea096f26ca9b6ac21b-rootfs.mount: Deactivated successfully. Feb 13 19:10:34.454691 kubelet[2523]: E0213 19:10:34.454651 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:34.458917 containerd[1444]: time="2025-02-13T19:10:34.458878886Z" level=info msg="CreateContainer within sandbox \"d532dd95161d8f38937a7eb212c15b0ca3f4d31326afa20758271e6d0c5e0062\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:10:34.484581 containerd[1444]: time="2025-02-13T19:10:34.484469657Z" level=info msg="CreateContainer within sandbox \"d532dd95161d8f38937a7eb212c15b0ca3f4d31326afa20758271e6d0c5e0062\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c10016ebdd21e8a2099cdc190f7d68b9b514aef5ac6cdf77535b70505efba65d\"" Feb 13 19:10:34.485140 containerd[1444]: time="2025-02-13T19:10:34.485112748Z" level=info msg="StartContainer for \"c10016ebdd21e8a2099cdc190f7d68b9b514aef5ac6cdf77535b70505efba65d\"" Feb 13 19:10:34.514311 systemd[1]: Started cri-containerd-c10016ebdd21e8a2099cdc190f7d68b9b514aef5ac6cdf77535b70505efba65d.scope - libcontainer container c10016ebdd21e8a2099cdc190f7d68b9b514aef5ac6cdf77535b70505efba65d. Feb 13 19:10:34.536908 containerd[1444]: time="2025-02-13T19:10:34.536862540Z" level=info msg="StartContainer for \"c10016ebdd21e8a2099cdc190f7d68b9b514aef5ac6cdf77535b70505efba65d\" returns successfully" Feb 13 19:10:34.688617 kubelet[2523]: I0213 19:10:34.688575 2523 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 19:10:34.724567 systemd[1]: Created slice kubepods-burstable-pode76ad659_2b08_4edb_a928_3c13c9d3b2b6.slice - libcontainer container kubepods-burstable-pode76ad659_2b08_4edb_a928_3c13c9d3b2b6.slice. Feb 13 19:10:34.733198 systemd[1]: Created slice kubepods-burstable-pod6bcc6516_083e_439a_81a6_80653dfefdb9.slice - libcontainer container kubepods-burstable-pod6bcc6516_083e_439a_81a6_80653dfefdb9.slice. 
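The EventChan warning at the start of this entry is a benign race: clean-cilium-state (container f59740f2...) exited so quickly that its cgroup scope was already removed when containerd tried to add an inotify watch on its memory.events file, hence "no such file or directory". Anything watching short-lived cgroup scopes has to tolerate that; a minimal sketch of the pattern with raw inotify via golang.org/x/sys/unix (the path argument would be the cgroup file quoted in the warning):

    import (
        "errors"

        "golang.org/x/sys/unix"
    )

    // watchMemoryEvents tolerates the cgroup vanishing before the watch
    // lands, which is exactly the race logged above.
    func watchMemoryEvents(path string) (fd, wd int, err error) {
        fd, err = unix.InotifyInit1(unix.IN_CLOEXEC)
        if err != nil {
            return -1, -1, err
        }
        wd, err = unix.InotifyAddWatch(fd, path, unix.IN_MODIFY)
        if errors.Is(err, unix.ENOENT) {
            // Container already exited and its scope is gone: skip OOM-event
            // monitoring for this container instead of failing.
            unix.Close(fd)
            return -1, -1, nil
        }
        return fd, wd, err
    }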
Feb 13 19:10:34.855001 kubelet[2523]: I0213 19:10:34.854953 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e76ad659-2b08-4edb-a928-3c13c9d3b2b6-config-volume\") pod \"coredns-668d6bf9bc-z2fg9\" (UID: \"e76ad659-2b08-4edb-a928-3c13c9d3b2b6\") " pod="kube-system/coredns-668d6bf9bc-z2fg9" Feb 13 19:10:34.855514 kubelet[2523]: I0213 19:10:34.855351 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbpth\" (UniqueName: \"kubernetes.io/projected/6bcc6516-083e-439a-81a6-80653dfefdb9-kube-api-access-cbpth\") pod \"coredns-668d6bf9bc-2vm9t\" (UID: \"6bcc6516-083e-439a-81a6-80653dfefdb9\") " pod="kube-system/coredns-668d6bf9bc-2vm9t" Feb 13 19:10:34.855514 kubelet[2523]: I0213 19:10:34.855385 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9gwt\" (UniqueName: \"kubernetes.io/projected/e76ad659-2b08-4edb-a928-3c13c9d3b2b6-kube-api-access-x9gwt\") pod \"coredns-668d6bf9bc-z2fg9\" (UID: \"e76ad659-2b08-4edb-a928-3c13c9d3b2b6\") " pod="kube-system/coredns-668d6bf9bc-z2fg9" Feb 13 19:10:34.855514 kubelet[2523]: I0213 19:10:34.855442 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bcc6516-083e-439a-81a6-80653dfefdb9-config-volume\") pod \"coredns-668d6bf9bc-2vm9t\" (UID: \"6bcc6516-083e-439a-81a6-80653dfefdb9\") " pod="kube-system/coredns-668d6bf9bc-2vm9t" Feb 13 19:10:35.030996 kubelet[2523]: E0213 19:10:35.030891 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:35.032033 containerd[1444]: time="2025-02-13T19:10:35.031971878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z2fg9,Uid:e76ad659-2b08-4edb-a928-3c13c9d3b2b6,Namespace:kube-system,Attempt:0,}" Feb 13 19:10:35.038720 kubelet[2523]: E0213 19:10:35.038677 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:35.040144 containerd[1444]: time="2025-02-13T19:10:35.040005813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2vm9t,Uid:6bcc6516-083e-439a-81a6-80653dfefdb9,Namespace:kube-system,Attempt:0,}" Feb 13 19:10:35.460298 kubelet[2523]: E0213 19:10:35.459900 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:36.460831 kubelet[2523]: E0213 19:10:36.460747 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:36.768628 systemd-networkd[1359]: cilium_host: Link UP Feb 13 19:10:36.768749 systemd-networkd[1359]: cilium_net: Link UP Feb 13 19:10:36.768752 systemd-networkd[1359]: cilium_net: Gained carrier Feb 13 19:10:36.768900 systemd-networkd[1359]: cilium_host: Gained carrier Feb 13 19:10:36.845316 systemd-networkd[1359]: cilium_vxlan: Link UP Feb 13 19:10:36.845322 systemd-networkd[1359]: cilium_vxlan: Gained carrier Feb 13 19:10:37.148240 kernel: NET: Registered PF_ALG protocol family Feb 13 
19:10:37.247302 systemd-networkd[1359]: cilium_host: Gained IPv6LL Feb 13 19:10:37.462052 kubelet[2523]: E0213 19:10:37.461889 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:37.697587 systemd-networkd[1359]: lxc_health: Link UP Feb 13 19:10:37.712248 systemd-networkd[1359]: lxc_health: Gained carrier Feb 13 19:10:37.790342 systemd-networkd[1359]: cilium_net: Gained IPv6LL Feb 13 19:10:38.183846 systemd-networkd[1359]: lxc904253056127: Link UP Feb 13 19:10:38.190941 systemd-networkd[1359]: lxcd5eaf169d753: Link UP Feb 13 19:10:38.205196 kernel: eth0: renamed from tmp9702b Feb 13 19:10:38.214178 kernel: eth0: renamed from tmpe3d08 Feb 13 19:10:38.221866 systemd-networkd[1359]: lxc904253056127: Gained carrier Feb 13 19:10:38.226641 systemd-networkd[1359]: lxcd5eaf169d753: Gained carrier Feb 13 19:10:38.466169 kubelet[2523]: E0213 19:10:38.466051 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:38.814341 systemd-networkd[1359]: cilium_vxlan: Gained IPv6LL Feb 13 19:10:39.390386 systemd-networkd[1359]: lxc904253056127: Gained IPv6LL Feb 13 19:10:39.625394 kubelet[2523]: E0213 19:10:39.625351 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:39.650844 kubelet[2523]: I0213 19:10:39.650688 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zcn87" podStartSLOduration=9.636394562 podStartE2EDuration="16.650672055s" podCreationTimestamp="2025-02-13 19:10:23 +0000 UTC" firstStartedPulling="2025-02-13 19:10:23.705890021 +0000 UTC m=+7.445447108" lastFinishedPulling="2025-02-13 19:10:30.720167554 +0000 UTC m=+14.459724601" observedRunningTime="2025-02-13 19:10:35.475517417 +0000 UTC m=+19.215074504" watchObservedRunningTime="2025-02-13 19:10:39.650672055 +0000 UTC m=+23.390229142" Feb 13 19:10:39.902264 systemd-networkd[1359]: lxcd5eaf169d753: Gained IPv6LL Feb 13 19:10:40.037822 systemd-networkd[1359]: lxc_health: Gained IPv6LL Feb 13 19:10:40.470140 kubelet[2523]: E0213 19:10:40.470099 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:41.756598 containerd[1444]: time="2025-02-13T19:10:41.756504362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:10:41.756598 containerd[1444]: time="2025-02-13T19:10:41.756564803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:10:41.756598 containerd[1444]: time="2025-02-13T19:10:41.756575323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:10:41.757119 containerd[1444]: time="2025-02-13T19:10:41.756649124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:10:41.757119 containerd[1444]: time="2025-02-13T19:10:41.756826687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:10:41.757119 containerd[1444]: time="2025-02-13T19:10:41.756879447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:10:41.757119 containerd[1444]: time="2025-02-13T19:10:41.756894287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:10:41.757681 containerd[1444]: time="2025-02-13T19:10:41.757630097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:10:41.782372 systemd[1]: Started cri-containerd-9702b8c346b6ca7848f9925dbaccc254d8abc55710e03a6c27ee17e2081608f1.scope - libcontainer container 9702b8c346b6ca7848f9925dbaccc254d8abc55710e03a6c27ee17e2081608f1. Feb 13 19:10:41.783490 systemd[1]: Started cri-containerd-e3d083d8fba7edcd005eabf4728e9d4bfc52589f9f3fde6911f9c7ea36cfd9eb.scope - libcontainer container e3d083d8fba7edcd005eabf4728e9d4bfc52589f9f3fde6911f9c7ea36cfd9eb. Feb 13 19:10:41.794560 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:10:41.798257 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:10:41.812873 containerd[1444]: time="2025-02-13T19:10:41.812820772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z2fg9,Uid:e76ad659-2b08-4edb-a928-3c13c9d3b2b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3d083d8fba7edcd005eabf4728e9d4bfc52589f9f3fde6911f9c7ea36cfd9eb\"" Feb 13 19:10:41.813748 kubelet[2523]: E0213 19:10:41.813724 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:41.815740 containerd[1444]: time="2025-02-13T19:10:41.815687409Z" level=info msg="CreateContainer within sandbox \"e3d083d8fba7edcd005eabf4728e9d4bfc52589f9f3fde6911f9c7ea36cfd9eb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:10:41.819751 containerd[1444]: time="2025-02-13T19:10:41.819647501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2vm9t,Uid:6bcc6516-083e-439a-81a6-80653dfefdb9,Namespace:kube-system,Attempt:0,} returns sandbox id \"9702b8c346b6ca7848f9925dbaccc254d8abc55710e03a6c27ee17e2081608f1\"" Feb 13 19:10:41.820990 kubelet[2523]: E0213 19:10:41.820963 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:41.823651 containerd[1444]: time="2025-02-13T19:10:41.823617192Z" level=info msg="CreateContainer within sandbox \"9702b8c346b6ca7848f9925dbaccc254d8abc55710e03a6c27ee17e2081608f1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:10:41.835578 containerd[1444]: time="2025-02-13T19:10:41.835505626Z" level=info msg="CreateContainer within sandbox \"e3d083d8fba7edcd005eabf4728e9d4bfc52589f9f3fde6911f9c7ea36cfd9eb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a9cc80e489a753e8f8ea162cd4d56d0ce09a1e00765e5f9e5c5bed224de4ae97\"" Feb 13 19:10:41.836374 containerd[1444]: time="2025-02-13T19:10:41.836317677Z" level=info msg="StartContainer for 
\"a9cc80e489a753e8f8ea162cd4d56d0ce09a1e00765e5f9e5c5bed224de4ae97\"" Feb 13 19:10:41.838964 containerd[1444]: time="2025-02-13T19:10:41.838893470Z" level=info msg="CreateContainer within sandbox \"9702b8c346b6ca7848f9925dbaccc254d8abc55710e03a6c27ee17e2081608f1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9753be58bd54663b0587d0d89954814773ea9880a8bcf43dafe8f713f6f29a12\"" Feb 13 19:10:41.839457 containerd[1444]: time="2025-02-13T19:10:41.839426917Z" level=info msg="StartContainer for \"9753be58bd54663b0587d0d89954814773ea9880a8bcf43dafe8f713f6f29a12\"" Feb 13 19:10:41.865364 systemd[1]: Started cri-containerd-a9cc80e489a753e8f8ea162cd4d56d0ce09a1e00765e5f9e5c5bed224de4ae97.scope - libcontainer container a9cc80e489a753e8f8ea162cd4d56d0ce09a1e00765e5f9e5c5bed224de4ae97. Feb 13 19:10:41.868992 systemd[1]: Started cri-containerd-9753be58bd54663b0587d0d89954814773ea9880a8bcf43dafe8f713f6f29a12.scope - libcontainer container 9753be58bd54663b0587d0d89954814773ea9880a8bcf43dafe8f713f6f29a12. Feb 13 19:10:41.906292 containerd[1444]: time="2025-02-13T19:10:41.906173782Z" level=info msg="StartContainer for \"a9cc80e489a753e8f8ea162cd4d56d0ce09a1e00765e5f9e5c5bed224de4ae97\" returns successfully" Feb 13 19:10:41.906292 containerd[1444]: time="2025-02-13T19:10:41.906213502Z" level=info msg="StartContainer for \"9753be58bd54663b0587d0d89954814773ea9880a8bcf43dafe8f713f6f29a12\" returns successfully" Feb 13 19:10:42.475088 kubelet[2523]: E0213 19:10:42.474989 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:42.478810 kubelet[2523]: E0213 19:10:42.478783 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:42.486789 kubelet[2523]: I0213 19:10:42.486738 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2vm9t" podStartSLOduration=19.486723498 podStartE2EDuration="19.486723498s" podCreationTimestamp="2025-02-13 19:10:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:10:42.486404054 +0000 UTC m=+26.225961101" watchObservedRunningTime="2025-02-13 19:10:42.486723498 +0000 UTC m=+26.226280585" Feb 13 19:10:42.497560 kubelet[2523]: I0213 19:10:42.497486 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-z2fg9" podStartSLOduration=19.497470032 podStartE2EDuration="19.497470032s" podCreationTimestamp="2025-02-13 19:10:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:10:42.497224988 +0000 UTC m=+26.236782075" watchObservedRunningTime="2025-02-13 19:10:42.497470032 +0000 UTC m=+26.237027119" Feb 13 19:10:42.910075 systemd[1]: Started sshd@7-10.0.0.80:22-10.0.0.1:57946.service - OpenSSH per-connection server daemon (10.0.0.1:57946). Feb 13 19:10:42.958960 sshd[3930]: Accepted publickey for core from 10.0.0.1 port 57946 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:10:42.959750 sshd-session[3930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:10:42.964146 systemd-logind[1419]: New session 8 of user core. 
Feb 13 19:10:42.976328 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:10:43.116965 sshd[3932]: Connection closed by 10.0.0.1 port 57946 Feb 13 19:10:43.117495 sshd-session[3930]: pam_unix(sshd:session): session closed for user core Feb 13 19:10:43.120592 systemd[1]: sshd@7-10.0.0.80:22-10.0.0.1:57946.service: Deactivated successfully. Feb 13 19:10:43.122452 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:10:43.123181 systemd-logind[1419]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:10:43.123914 systemd-logind[1419]: Removed session 8. Feb 13 19:10:43.481427 kubelet[2523]: E0213 19:10:43.481392 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:43.482366 kubelet[2523]: E0213 19:10:43.481883 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:44.488762 kubelet[2523]: E0213 19:10:44.488706 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:44.493287 kubelet[2523]: E0213 19:10:44.489003 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:10:48.142330 systemd[1]: Started sshd@8-10.0.0.80:22-10.0.0.1:57986.service - OpenSSH per-connection server daemon (10.0.0.1:57986). Feb 13 19:10:48.190874 sshd[3945]: Accepted publickey for core from 10.0.0.1 port 57986 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:10:48.192279 sshd-session[3945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:10:48.197890 systemd-logind[1419]: New session 9 of user core. Feb 13 19:10:48.213080 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:10:48.344491 sshd[3947]: Connection closed by 10.0.0.1 port 57986 Feb 13 19:10:48.344686 sshd-session[3945]: pam_unix(sshd:session): session closed for user core Feb 13 19:10:48.347451 systemd[1]: sshd@8-10.0.0.80:22-10.0.0.1:57986.service: Deactivated successfully. Feb 13 19:10:48.349401 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:10:48.351162 systemd-logind[1419]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:10:48.353727 systemd-logind[1419]: Removed session 9. Feb 13 19:10:53.358696 systemd[1]: Started sshd@9-10.0.0.80:22-10.0.0.1:53428.service - OpenSSH per-connection server daemon (10.0.0.1:53428). Feb 13 19:10:53.410061 sshd[3961]: Accepted publickey for core from 10.0.0.1 port 53428 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:10:53.411417 sshd-session[3961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:10:53.416831 systemd-logind[1419]: New session 10 of user core. Feb 13 19:10:53.426397 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:10:53.569172 sshd[3963]: Connection closed by 10.0.0.1 port 53428 Feb 13 19:10:53.570630 sshd-session[3961]: pam_unix(sshd:session): session closed for user core Feb 13 19:10:53.574316 systemd[1]: sshd@9-10.0.0.80:22-10.0.0.1:53428.service: Deactivated successfully. 
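Every "Accepted publickey for core ... RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg" line in these session entries identifies the client key by the same SHA-256 fingerprint, so all of the sessions belong to one key. Producing that fingerprint format is one call in Go's ssh package; the sketch below generates a throwaway key purely to demonstrate the format, since the actual key behind QXe3dvBt... is of course not in the log:

    package main

    import (
        "crypto/ed25519"
        "crypto/rand"
        "fmt"
        "log"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        pub, _, err := ed25519.GenerateKey(rand.Reader) // throwaway demo key
        if err != nil {
            log.Fatal(err)
        }
        sshPub, err := ssh.NewPublicKey(pub)
        if err != nil {
            log.Fatal(err)
        }
        // Prints "SHA256:<base64>", the form sshd logs on "Accepted publickey".
        fmt.Println(ssh.FingerprintSHA256(sshPub))
    }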
Feb 13 19:10:53.576980 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:10:53.578890 systemd-logind[1419]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:10:53.580722 systemd-logind[1419]: Removed session 10. Feb 13 19:10:58.579647 systemd[1]: Started sshd@10-10.0.0.80:22-10.0.0.1:53474.service - OpenSSH per-connection server daemon (10.0.0.1:53474). Feb 13 19:10:58.625325 sshd[3980]: Accepted publickey for core from 10.0.0.1 port 53474 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:10:58.626380 sshd-session[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:10:58.630098 systemd-logind[1419]: New session 11 of user core. Feb 13 19:10:58.639293 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:10:58.760230 sshd[3982]: Connection closed by 10.0.0.1 port 53474 Feb 13 19:10:58.760568 sshd-session[3980]: pam_unix(sshd:session): session closed for user core Feb 13 19:10:58.775742 systemd[1]: sshd@10-10.0.0.80:22-10.0.0.1:53474.service: Deactivated successfully. Feb 13 19:10:58.779264 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:10:58.780557 systemd-logind[1419]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:10:58.781695 systemd[1]: Started sshd@11-10.0.0.80:22-10.0.0.1:53482.service - OpenSSH per-connection server daemon (10.0.0.1:53482). Feb 13 19:10:58.782604 systemd-logind[1419]: Removed session 11. Feb 13 19:10:58.829082 sshd[3995]: Accepted publickey for core from 10.0.0.1 port 53482 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:10:58.830558 sshd-session[3995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:10:58.834910 systemd-logind[1419]: New session 12 of user core. Feb 13 19:10:58.846325 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:10:58.991999 sshd[3997]: Connection closed by 10.0.0.1 port 53482 Feb 13 19:10:58.992516 sshd-session[3995]: pam_unix(sshd:session): session closed for user core Feb 13 19:10:59.007353 systemd[1]: sshd@11-10.0.0.80:22-10.0.0.1:53482.service: Deactivated successfully. Feb 13 19:10:59.008909 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:10:59.011129 systemd-logind[1419]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:10:59.018946 systemd[1]: Started sshd@12-10.0.0.80:22-10.0.0.1:53498.service - OpenSSH per-connection server daemon (10.0.0.1:53498). Feb 13 19:10:59.021859 systemd-logind[1419]: Removed session 12. Feb 13 19:10:59.057122 sshd[4008]: Accepted publickey for core from 10.0.0.1 port 53498 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:10:59.058240 sshd-session[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:10:59.061681 systemd-logind[1419]: New session 13 of user core. Feb 13 19:10:59.079280 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:10:59.190480 sshd[4010]: Connection closed by 10.0.0.1 port 53498 Feb 13 19:10:59.190845 sshd-session[4008]: pam_unix(sshd:session): session closed for user core Feb 13 19:10:59.194057 systemd[1]: sshd@12-10.0.0.80:22-10.0.0.1:53498.service: Deactivated successfully. Feb 13 19:10:59.197107 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:10:59.197828 systemd-logind[1419]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:10:59.198800 systemd-logind[1419]: Removed session 13. 
Feb 13 19:11:04.204879 systemd[1]: Started sshd@13-10.0.0.80:22-10.0.0.1:57228.service - OpenSSH per-connection server daemon (10.0.0.1:57228). Feb 13 19:11:04.248935 sshd[4023]: Accepted publickey for core from 10.0.0.1 port 57228 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:11:04.250355 sshd-session[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:11:04.255122 systemd-logind[1419]: New session 14 of user core. Feb 13 19:11:04.263331 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:11:04.381186 sshd[4025]: Connection closed by 10.0.0.1 port 57228 Feb 13 19:11:04.381681 sshd-session[4023]: pam_unix(sshd:session): session closed for user core Feb 13 19:11:04.384885 systemd[1]: sshd@13-10.0.0.80:22-10.0.0.1:57228.service: Deactivated successfully. Feb 13 19:11:04.387835 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:11:04.388736 systemd-logind[1419]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:11:04.389538 systemd-logind[1419]: Removed session 14. Feb 13 19:11:09.391742 systemd[1]: Started sshd@14-10.0.0.80:22-10.0.0.1:57238.service - OpenSSH per-connection server daemon (10.0.0.1:57238). Feb 13 19:11:09.434609 sshd[4037]: Accepted publickey for core from 10.0.0.1 port 57238 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:11:09.435681 sshd-session[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:11:09.439684 systemd-logind[1419]: New session 15 of user core. Feb 13 19:11:09.455318 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:11:09.563972 sshd[4039]: Connection closed by 10.0.0.1 port 57238 Feb 13 19:11:09.564334 sshd-session[4037]: pam_unix(sshd:session): session closed for user core Feb 13 19:11:09.574653 systemd[1]: sshd@14-10.0.0.80:22-10.0.0.1:57238.service: Deactivated successfully. Feb 13 19:11:09.576390 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:11:09.577774 systemd-logind[1419]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:11:09.579063 systemd[1]: Started sshd@15-10.0.0.80:22-10.0.0.1:57248.service - OpenSSH per-connection server daemon (10.0.0.1:57248). Feb 13 19:11:09.580321 systemd-logind[1419]: Removed session 15. Feb 13 19:11:09.622286 sshd[4051]: Accepted publickey for core from 10.0.0.1 port 57248 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:11:09.623463 sshd-session[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:11:09.627681 systemd-logind[1419]: New session 16 of user core. Feb 13 19:11:09.637302 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:11:09.865191 sshd[4053]: Connection closed by 10.0.0.1 port 57248 Feb 13 19:11:09.866402 sshd-session[4051]: pam_unix(sshd:session): session closed for user core Feb 13 19:11:09.875962 systemd[1]: sshd@15-10.0.0.80:22-10.0.0.1:57248.service: Deactivated successfully. Feb 13 19:11:09.878032 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:11:09.880039 systemd-logind[1419]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:11:09.885848 systemd[1]: Started sshd@16-10.0.0.80:22-10.0.0.1:57260.service - OpenSSH per-connection server daemon (10.0.0.1:57260). Feb 13 19:11:09.887590 systemd-logind[1419]: Removed session 16. 
Feb 13 19:11:09.936144 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 57260 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:11:09.937412 sshd-session[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:11:09.940986 systemd-logind[1419]: New session 17 of user core. Feb 13 19:11:09.950398 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:11:10.749958 sshd[4066]: Connection closed by 10.0.0.1 port 57260 Feb 13 19:11:10.751922 sshd-session[4064]: pam_unix(sshd:session): session closed for user core Feb 13 19:11:10.760796 systemd[1]: sshd@16-10.0.0.80:22-10.0.0.1:57260.service: Deactivated successfully. Feb 13 19:11:10.764843 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:11:10.766929 systemd-logind[1419]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:11:10.775677 systemd[1]: Started sshd@17-10.0.0.80:22-10.0.0.1:57276.service - OpenSSH per-connection server daemon (10.0.0.1:57276). Feb 13 19:11:10.778129 systemd-logind[1419]: Removed session 17. Feb 13 19:11:10.825591 sshd[4084]: Accepted publickey for core from 10.0.0.1 port 57276 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:11:10.827772 sshd-session[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:11:10.831951 systemd-logind[1419]: New session 18 of user core. Feb 13 19:11:10.851344 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:11:11.068388 sshd[4086]: Connection closed by 10.0.0.1 port 57276 Feb 13 19:11:11.068838 sshd-session[4084]: pam_unix(sshd:session): session closed for user core Feb 13 19:11:11.077498 systemd[1]: sshd@17-10.0.0.80:22-10.0.0.1:57276.service: Deactivated successfully. Feb 13 19:11:11.080463 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:11:11.082210 systemd-logind[1419]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:11:11.093815 systemd[1]: Started sshd@18-10.0.0.80:22-10.0.0.1:57284.service - OpenSSH per-connection server daemon (10.0.0.1:57284). Feb 13 19:11:11.094909 systemd-logind[1419]: Removed session 18. Feb 13 19:11:11.132427 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 57284 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:11:11.132928 sshd-session[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:11:11.136990 systemd-logind[1419]: New session 19 of user core. Feb 13 19:11:11.146382 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:11:11.259224 sshd[4099]: Connection closed by 10.0.0.1 port 57284 Feb 13 19:11:11.259572 sshd-session[4097]: pam_unix(sshd:session): session closed for user core Feb 13 19:11:11.261991 systemd[1]: sshd@18-10.0.0.80:22-10.0.0.1:57284.service: Deactivated successfully. Feb 13 19:11:11.263643 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:11:11.264972 systemd-logind[1419]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:11:11.266049 systemd-logind[1419]: Removed session 19. Feb 13 19:11:16.269650 systemd[1]: Started sshd@19-10.0.0.80:22-10.0.0.1:41664.service - OpenSSH per-connection server daemon (10.0.0.1:41664). 
Feb 13 19:11:16.311897 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 41664 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:11:16.313223 sshd-session[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:11:16.318229 systemd-logind[1419]: New session 20 of user core. Feb 13 19:11:16.323342 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:11:16.441210 sshd[4118]: Connection closed by 10.0.0.1 port 41664 Feb 13 19:11:16.441853 sshd-session[4116]: pam_unix(sshd:session): session closed for user core Feb 13 19:11:16.444486 systemd[1]: sshd@19-10.0.0.80:22-10.0.0.1:41664.service: Deactivated successfully. Feb 13 19:11:16.446342 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:11:16.447828 systemd-logind[1419]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:11:16.448863 systemd-logind[1419]: Removed session 20. Feb 13 19:11:21.456508 systemd[1]: Started sshd@20-10.0.0.80:22-10.0.0.1:41668.service - OpenSSH per-connection server daemon (10.0.0.1:41668). Feb 13 19:11:21.511117 sshd[4133]: Accepted publickey for core from 10.0.0.1 port 41668 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:11:21.512497 sshd-session[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:11:21.516638 systemd-logind[1419]: New session 21 of user core. Feb 13 19:11:21.525338 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:11:21.635221 sshd[4135]: Connection closed by 10.0.0.1 port 41668 Feb 13 19:11:21.635753 sshd-session[4133]: pam_unix(sshd:session): session closed for user core Feb 13 19:11:21.639106 systemd[1]: sshd@20-10.0.0.80:22-10.0.0.1:41668.service: Deactivated successfully. Feb 13 19:11:21.641407 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:11:21.642405 systemd-logind[1419]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:11:21.643264 systemd-logind[1419]: Removed session 21. Feb 13 19:11:26.645617 systemd[1]: Started sshd@21-10.0.0.80:22-10.0.0.1:38172.service - OpenSSH per-connection server daemon (10.0.0.1:38172). Feb 13 19:11:26.688599 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 38172 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:11:26.689697 sshd-session[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:11:26.693220 systemd-logind[1419]: New session 22 of user core. Feb 13 19:11:26.700286 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:11:26.804902 sshd[4152]: Connection closed by 10.0.0.1 port 38172 Feb 13 19:11:26.805284 sshd-session[4150]: pam_unix(sshd:session): session closed for user core Feb 13 19:11:26.818351 systemd[1]: sshd@21-10.0.0.80:22-10.0.0.1:38172.service: Deactivated successfully. Feb 13 19:11:26.819724 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:11:26.820910 systemd-logind[1419]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:11:26.822303 systemd[1]: Started sshd@22-10.0.0.80:22-10.0.0.1:38180.service - OpenSSH per-connection server daemon (10.0.0.1:38180). Feb 13 19:11:26.823082 systemd-logind[1419]: Removed session 22. 
Feb 13 19:11:26.863836 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 38180 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:11:26.864983 sshd-session[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:11:26.868986 systemd-logind[1419]: New session 23 of user core. Feb 13 19:11:26.878289 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 19:11:28.402642 containerd[1444]: time="2025-02-13T19:11:28.402475992Z" level=info msg="StopContainer for \"7e252999c2d9e3b3dac877be4cf1621ea6ebb580b42c365859f70a2ba48c1b45\" with timeout 30 (s)" Feb 13 19:11:28.403877 containerd[1444]: time="2025-02-13T19:11:28.403849112Z" level=info msg="Stop container \"7e252999c2d9e3b3dac877be4cf1621ea6ebb580b42c365859f70a2ba48c1b45\" with signal terminated" Feb 13 19:11:28.417256 systemd[1]: cri-containerd-7e252999c2d9e3b3dac877be4cf1621ea6ebb580b42c365859f70a2ba48c1b45.scope: Deactivated successfully. Feb 13 19:11:28.437136 containerd[1444]: time="2025-02-13T19:11:28.437076381Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:11:28.437608 containerd[1444]: time="2025-02-13T19:11:28.437405590Z" level=info msg="StopContainer for \"c10016ebdd21e8a2099cdc190f7d68b9b514aef5ac6cdf77535b70505efba65d\" with timeout 2 (s)" Feb 13 19:11:28.438101 containerd[1444]: time="2025-02-13T19:11:28.438053409Z" level=info msg="Stop container \"c10016ebdd21e8a2099cdc190f7d68b9b514aef5ac6cdf77535b70505efba65d\" with signal terminated" Feb 13 19:11:28.439245 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e252999c2d9e3b3dac877be4cf1621ea6ebb580b42c365859f70a2ba48c1b45-rootfs.mount: Deactivated successfully. Feb 13 19:11:28.445364 systemd-networkd[1359]: lxc_health: Link DOWN Feb 13 19:11:28.445370 systemd-networkd[1359]: lxc_health: Lost carrier Feb 13 19:11:28.449362 containerd[1444]: time="2025-02-13T19:11:28.449307930Z" level=info msg="shim disconnected" id=7e252999c2d9e3b3dac877be4cf1621ea6ebb580b42c365859f70a2ba48c1b45 namespace=k8s.io Feb 13 19:11:28.449490 containerd[1444]: time="2025-02-13T19:11:28.449473055Z" level=warning msg="cleaning up after shim disconnected" id=7e252999c2d9e3b3dac877be4cf1621ea6ebb580b42c365859f70a2ba48c1b45 namespace=k8s.io Feb 13 19:11:28.449545 containerd[1444]: time="2025-02-13T19:11:28.449533177Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:11:28.463623 systemd[1]: cri-containerd-c10016ebdd21e8a2099cdc190f7d68b9b514aef5ac6cdf77535b70505efba65d.scope: Deactivated successfully. Feb 13 19:11:28.464043 systemd[1]: cri-containerd-c10016ebdd21e8a2099cdc190f7d68b9b514aef5ac6cdf77535b70505efba65d.scope: Consumed 6.457s CPU time. Feb 13 19:11:28.483422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c10016ebdd21e8a2099cdc190f7d68b9b514aef5ac6cdf77535b70505efba65d-rootfs.mount: Deactivated successfully. 
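The teardown beginning here follows CRI stop semantics: StopContainer sends SIGTERM ("Stop container ... with signal terminated") and the runtime escalates to SIGKILL only if the container outlives the per-call grace period (30 s for cilium-operator above, 2 s for cilium-agent); both exit in time, so their scopes simply deactivate, the agent after having "Consumed 6.457s CPU time". The corresponding call, as a minimal sketch over the same CRI client as the earlier sketches:

    import (
        "context"

        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // stopWithGrace mirrors "StopContainer ... with timeout 30 (s)":
    // SIGTERM first, SIGKILL if the container is still alive afterwards.
    func stopWithGrace(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) error {
        _, err := rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
            ContainerId: id,
            Timeout:     30, // seconds of grace before escalation
        })
        return err
    }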
Feb 13 19:11:28.490440 containerd[1444]: time="2025-02-13T19:11:28.490351783Z" level=info msg="shim disconnected" id=c10016ebdd21e8a2099cdc190f7d68b9b514aef5ac6cdf77535b70505efba65d namespace=k8s.io Feb 13 19:11:28.490440 containerd[1444]: time="2025-02-13T19:11:28.490417865Z" level=warning msg="cleaning up after shim disconnected" id=c10016ebdd21e8a2099cdc190f7d68b9b514aef5ac6cdf77535b70505efba65d namespace=k8s.io Feb 13 19:11:28.490440 containerd[1444]: time="2025-02-13T19:11:28.490427145Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:11:28.502512 containerd[1444]: time="2025-02-13T19:11:28.501456060Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:11:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:11:28.505553 containerd[1444]: time="2025-02-13T19:11:28.505516656Z" level=info msg="StopContainer for \"c10016ebdd21e8a2099cdc190f7d68b9b514aef5ac6cdf77535b70505efba65d\" returns successfully" Feb 13 19:11:28.505856 containerd[1444]: time="2025-02-13T19:11:28.505836305Z" level=info msg="StopContainer for \"7e252999c2d9e3b3dac877be4cf1621ea6ebb580b42c365859f70a2ba48c1b45\" returns successfully" Feb 13 19:11:28.508204 containerd[1444]: time="2025-02-13T19:11:28.508161451Z" level=info msg="StopPodSandbox for \"05424278ba1609d864f4c50835ff7290313cace2fffd624318780e735975fc91\"" Feb 13 19:11:28.508322 containerd[1444]: time="2025-02-13T19:11:28.508305696Z" level=info msg="Container to stop \"7e252999c2d9e3b3dac877be4cf1621ea6ebb580b42c365859f70a2ba48c1b45\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:11:28.508443 containerd[1444]: time="2025-02-13T19:11:28.508311096Z" level=info msg="StopPodSandbox for \"d532dd95161d8f38937a7eb212c15b0ca3f4d31326afa20758271e6d0c5e0062\"" Feb 13 19:11:28.511783 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05424278ba1609d864f4c50835ff7290313cace2fffd624318780e735975fc91-shm.mount: Deactivated successfully. Feb 13 19:11:28.516335 systemd[1]: cri-containerd-05424278ba1609d864f4c50835ff7290313cace2fffd624318780e735975fc91.scope: Deactivated successfully. 
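Once the containers have stopped, StopPodSandbox tears each pod down as a unit; the "Container to stop ... must be in running or unknown state, current state CONTAINER_EXITED" lines are informational, recording that the listed containers had already exited and need no further signal before the sandbox network is torn down (the TearDown messages around here). A minimal sketch of the final sandbox calls over the same CRI client; note the log shows only the stop and teardown, while the RemovePodSandbox that usually follows is an assumption:

    import (
        "context"

        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // teardownSandbox mirrors the StopPodSandbox sequence above: stop any
    // stragglers and release the network, then remove the sandbox record.
    func teardownSandbox(ctx context.Context, rt runtimeapi.RuntimeServiceClient, sandboxID string) error {
        if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
            return err
        }
        _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: sandboxID})
        return err
    }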
Feb 13 19:11:28.521022 containerd[1444]: time="2025-02-13T19:11:28.520914936Z" level=info msg="Container to stop \"f59740f215f06a7a59a2ffc33d979afecde82c9b73dcafea096f26ca9b6ac21b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:11:28.521022 containerd[1444]: time="2025-02-13T19:11:28.520968177Z" level=info msg="Container to stop \"f953dc178d3c835b93d382b1840cc91804747febab68e8685b416ac04fe78621\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:11:28.521022 containerd[1444]: time="2025-02-13T19:11:28.520978018Z" level=info msg="Container to stop \"653924421c227ec11e4f889f23e5509f03e1a536106334fda581b80b52a45f08\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:11:28.521022 containerd[1444]: time="2025-02-13T19:11:28.520986898Z" level=info msg="Container to stop \"c10016ebdd21e8a2099cdc190f7d68b9b514aef5ac6cdf77535b70505efba65d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:11:28.521022 containerd[1444]: time="2025-02-13T19:11:28.520995698Z" level=info msg="Container to stop \"04ac609294350f762df65b69a75ebc5d75adb2fe537f6a8b8a64b460836ef74b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:11:28.523479 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d532dd95161d8f38937a7eb212c15b0ca3f4d31326afa20758271e6d0c5e0062-shm.mount: Deactivated successfully. Feb 13 19:11:28.528081 systemd[1]: cri-containerd-d532dd95161d8f38937a7eb212c15b0ca3f4d31326afa20758271e6d0c5e0062.scope: Deactivated successfully. Feb 13 19:11:28.543785 containerd[1444]: time="2025-02-13T19:11:28.543716827Z" level=info msg="shim disconnected" id=05424278ba1609d864f4c50835ff7290313cace2fffd624318780e735975fc91 namespace=k8s.io Feb 13 19:11:28.543785 containerd[1444]: time="2025-02-13T19:11:28.543771549Z" level=warning msg="cleaning up after shim disconnected" id=05424278ba1609d864f4c50835ff7290313cace2fffd624318780e735975fc91 namespace=k8s.io Feb 13 19:11:28.543785 containerd[1444]: time="2025-02-13T19:11:28.543780629Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:11:28.557682 containerd[1444]: time="2025-02-13T19:11:28.557634185Z" level=info msg="TearDown network for sandbox \"05424278ba1609d864f4c50835ff7290313cace2fffd624318780e735975fc91\" successfully" Feb 13 19:11:28.557682 containerd[1444]: time="2025-02-13T19:11:28.557668466Z" level=info msg="StopPodSandbox for \"05424278ba1609d864f4c50835ff7290313cace2fffd624318780e735975fc91\" returns successfully" Feb 13 19:11:28.557962 containerd[1444]: time="2025-02-13T19:11:28.557918313Z" level=info msg="shim disconnected" id=d532dd95161d8f38937a7eb212c15b0ca3f4d31326afa20758271e6d0c5e0062 namespace=k8s.io Feb 13 19:11:28.557994 containerd[1444]: time="2025-02-13T19:11:28.557963434Z" level=warning msg="cleaning up after shim disconnected" id=d532dd95161d8f38937a7eb212c15b0ca3f4d31326afa20758271e6d0c5e0062 namespace=k8s.io Feb 13 19:11:28.557994 containerd[1444]: time="2025-02-13T19:11:28.557972434Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:11:28.572299 kubelet[2523]: I0213 19:11:28.572255 2523 scope.go:117] "RemoveContainer" containerID="7e252999c2d9e3b3dac877be4cf1621ea6ebb580b42c365859f70a2ba48c1b45" Feb 13 19:11:28.573223 containerd[1444]: time="2025-02-13T19:11:28.573186669Z" level=info msg="TearDown network for sandbox \"d532dd95161d8f38937a7eb212c15b0ca3f4d31326afa20758271e6d0c5e0062\" successfully" Feb 13 19:11:28.573223 containerd[1444]: 
time="2025-02-13T19:11:28.573222150Z" level=info msg="StopPodSandbox for \"d532dd95161d8f38937a7eb212c15b0ca3f4d31326afa20758271e6d0c5e0062\" returns successfully" Feb 13 19:11:28.576187 containerd[1444]: time="2025-02-13T19:11:28.575965108Z" level=info msg="RemoveContainer for \"7e252999c2d9e3b3dac877be4cf1621ea6ebb580b42c365859f70a2ba48c1b45\"" Feb 13 19:11:28.578752 containerd[1444]: time="2025-02-13T19:11:28.578662985Z" level=info msg="RemoveContainer for \"7e252999c2d9e3b3dac877be4cf1621ea6ebb580b42c365859f70a2ba48c1b45\" returns successfully" Feb 13 19:11:28.580024 kubelet[2523]: I0213 19:11:28.579978 2523 scope.go:117] "RemoveContainer" containerID="7e252999c2d9e3b3dac877be4cf1621ea6ebb580b42c365859f70a2ba48c1b45" Feb 13 19:11:28.581142 containerd[1444]: time="2025-02-13T19:11:28.580296272Z" level=error msg="ContainerStatus for \"7e252999c2d9e3b3dac877be4cf1621ea6ebb580b42c365859f70a2ba48c1b45\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7e252999c2d9e3b3dac877be4cf1621ea6ebb580b42c365859f70a2ba48c1b45\": not found" Feb 13 19:11:28.592498 kubelet[2523]: E0213 19:11:28.592336 2523 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7e252999c2d9e3b3dac877be4cf1621ea6ebb580b42c365859f70a2ba48c1b45\": not found" containerID="7e252999c2d9e3b3dac877be4cf1621ea6ebb580b42c365859f70a2ba48c1b45" Feb 13 19:11:28.592498 kubelet[2523]: I0213 19:11:28.592390 2523 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7e252999c2d9e3b3dac877be4cf1621ea6ebb580b42c365859f70a2ba48c1b45"} err="failed to get container status \"7e252999c2d9e3b3dac877be4cf1621ea6ebb580b42c365859f70a2ba48c1b45\": rpc error: code = NotFound desc = an error occurred when try to find container \"7e252999c2d9e3b3dac877be4cf1621ea6ebb580b42c365859f70a2ba48c1b45\": not found" Feb 13 19:11:28.596179 kubelet[2523]: I0213 19:11:28.596129 2523 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fcrz\" (UniqueName: \"kubernetes.io/projected/53beb2b7-1c5b-4a76-a054-da7a423da2ba-kube-api-access-2fcrz\") pod \"53beb2b7-1c5b-4a76-a054-da7a423da2ba\" (UID: \"53beb2b7-1c5b-4a76-a054-da7a423da2ba\") " Feb 13 19:11:28.596264 kubelet[2523]: I0213 19:11:28.596183 2523 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53beb2b7-1c5b-4a76-a054-da7a423da2ba-cilium-config-path\") pod \"53beb2b7-1c5b-4a76-a054-da7a423da2ba\" (UID: \"53beb2b7-1c5b-4a76-a054-da7a423da2ba\") " Feb 13 19:11:28.602638 kubelet[2523]: I0213 19:11:28.602590 2523 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53beb2b7-1c5b-4a76-a054-da7a423da2ba-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "53beb2b7-1c5b-4a76-a054-da7a423da2ba" (UID: "53beb2b7-1c5b-4a76-a054-da7a423da2ba"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 19:11:28.612381 kubelet[2523]: I0213 19:11:28.612326 2523 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53beb2b7-1c5b-4a76-a054-da7a423da2ba-kube-api-access-2fcrz" (OuterVolumeSpecName: "kube-api-access-2fcrz") pod "53beb2b7-1c5b-4a76-a054-da7a423da2ba" (UID: "53beb2b7-1c5b-4a76-a054-da7a423da2ba"). InnerVolumeSpecName "kube-api-access-2fcrz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:11:28.698273 kubelet[2523]: I0213 19:11:28.697092 2523 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3311fe6e-28b9-4c53-9640-842a0d4502be-clustermesh-secrets\") pod \"3311fe6e-28b9-4c53-9640-842a0d4502be\" (UID: \"3311fe6e-28b9-4c53-9640-842a0d4502be\") " Feb 13 19:11:28.700554 kubelet[2523]: I0213 19:11:28.700520 2523 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3311fe6e-28b9-4c53-9640-842a0d4502be-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3311fe6e-28b9-4c53-9640-842a0d4502be" (UID: "3311fe6e-28b9-4c53-9640-842a0d4502be"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 13 19:11:28.701977 kubelet[2523]: I0213 19:11:28.701948 2523 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3311fe6e-28b9-4c53-9640-842a0d4502be-hubble-tls\") pod \"3311fe6e-28b9-4c53-9640-842a0d4502be\" (UID: \"3311fe6e-28b9-4c53-9640-842a0d4502be\") " Feb 13 19:11:28.702053 kubelet[2523]: I0213 19:11:28.701986 2523 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-cilium-cgroup\") pod \"3311fe6e-28b9-4c53-9640-842a0d4502be\" (UID: \"3311fe6e-28b9-4c53-9640-842a0d4502be\") " Feb 13 19:11:28.702053 kubelet[2523]: I0213 19:11:28.702007 2523 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-etc-cni-netd\") pod \"3311fe6e-28b9-4c53-9640-842a0d4502be\" (UID: \"3311fe6e-28b9-4c53-9640-842a0d4502be\") " Feb 13 19:11:28.702053 kubelet[2523]: I0213 19:11:28.702022 2523 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-lib-modules\") pod \"3311fe6e-28b9-4c53-9640-842a0d4502be\" (UID: \"3311fe6e-28b9-4c53-9640-842a0d4502be\") " Feb 13 19:11:28.702053 kubelet[2523]: I0213 19:11:28.702040 2523 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw584\" (UniqueName: \"kubernetes.io/projected/3311fe6e-28b9-4c53-9640-842a0d4502be-kube-api-access-zw584\") pod \"3311fe6e-28b9-4c53-9640-842a0d4502be\" (UID: \"3311fe6e-28b9-4c53-9640-842a0d4502be\") " Feb 13 19:11:28.702140 kubelet[2523]: I0213 19:11:28.702058 2523 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-hostproc\") pod \"3311fe6e-28b9-4c53-9640-842a0d4502be\" (UID: \"3311fe6e-28b9-4c53-9640-842a0d4502be\") " Feb 13 19:11:28.702140 kubelet[2523]: I0213 19:11:28.702071 2523 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-cni-path\") pod \"3311fe6e-28b9-4c53-9640-842a0d4502be\" (UID: \"3311fe6e-28b9-4c53-9640-842a0d4502be\") " Feb 13 19:11:28.702140 kubelet[2523]: I0213 19:11:28.702084 2523 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-xtables-lock\") pod 
\"3311fe6e-28b9-4c53-9640-842a0d4502be\" (UID: \"3311fe6e-28b9-4c53-9640-842a0d4502be\") " Feb 13 19:11:28.702140 kubelet[2523]: I0213 19:11:28.702100 2523 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3311fe6e-28b9-4c53-9640-842a0d4502be-cilium-config-path\") pod \"3311fe6e-28b9-4c53-9640-842a0d4502be\" (UID: \"3311fe6e-28b9-4c53-9640-842a0d4502be\") " Feb 13 19:11:28.702140 kubelet[2523]: I0213 19:11:28.702113 2523 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-bpf-maps\") pod \"3311fe6e-28b9-4c53-9640-842a0d4502be\" (UID: \"3311fe6e-28b9-4c53-9640-842a0d4502be\") " Feb 13 19:11:28.702140 kubelet[2523]: I0213 19:11:28.702127 2523 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-host-proc-sys-net\") pod \"3311fe6e-28b9-4c53-9640-842a0d4502be\" (UID: \"3311fe6e-28b9-4c53-9640-842a0d4502be\") " Feb 13 19:11:28.702279 kubelet[2523]: I0213 19:11:28.702144 2523 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-host-proc-sys-kernel\") pod \"3311fe6e-28b9-4c53-9640-842a0d4502be\" (UID: \"3311fe6e-28b9-4c53-9640-842a0d4502be\") " Feb 13 19:11:28.702279 kubelet[2523]: I0213 19:11:28.702176 2523 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-cilium-run\") pod \"3311fe6e-28b9-4c53-9640-842a0d4502be\" (UID: \"3311fe6e-28b9-4c53-9640-842a0d4502be\") " Feb 13 19:11:28.702279 kubelet[2523]: I0213 19:11:28.702221 2523 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2fcrz\" (UniqueName: \"kubernetes.io/projected/53beb2b7-1c5b-4a76-a054-da7a423da2ba-kube-api-access-2fcrz\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:28.702279 kubelet[2523]: I0213 19:11:28.702232 2523 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53beb2b7-1c5b-4a76-a054-da7a423da2ba-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:28.702279 kubelet[2523]: I0213 19:11:28.702240 2523 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3311fe6e-28b9-4c53-9640-842a0d4502be-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:28.702279 kubelet[2523]: I0213 19:11:28.702270 2523 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3311fe6e-28b9-4c53-9640-842a0d4502be" (UID: "3311fe6e-28b9-4c53-9640-842a0d4502be"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:11:28.702753 kubelet[2523]: I0213 19:11:28.702441 2523 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-cni-path" (OuterVolumeSpecName: "cni-path") pod "3311fe6e-28b9-4c53-9640-842a0d4502be" (UID: "3311fe6e-28b9-4c53-9640-842a0d4502be"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:11:28.702753 kubelet[2523]: I0213 19:11:28.702474 2523 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3311fe6e-28b9-4c53-9640-842a0d4502be" (UID: "3311fe6e-28b9-4c53-9640-842a0d4502be"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:11:28.702753 kubelet[2523]: I0213 19:11:28.702456 2523 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3311fe6e-28b9-4c53-9640-842a0d4502be" (UID: "3311fe6e-28b9-4c53-9640-842a0d4502be"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:11:28.702753 kubelet[2523]: I0213 19:11:28.702490 2523 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3311fe6e-28b9-4c53-9640-842a0d4502be" (UID: "3311fe6e-28b9-4c53-9640-842a0d4502be"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:11:28.702753 kubelet[2523]: I0213 19:11:28.702504 2523 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3311fe6e-28b9-4c53-9640-842a0d4502be" (UID: "3311fe6e-28b9-4c53-9640-842a0d4502be"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:11:28.702972 kubelet[2523]: I0213 19:11:28.702513 2523 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3311fe6e-28b9-4c53-9640-842a0d4502be" (UID: "3311fe6e-28b9-4c53-9640-842a0d4502be"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:11:28.702972 kubelet[2523]: I0213 19:11:28.702592 2523 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3311fe6e-28b9-4c53-9640-842a0d4502be" (UID: "3311fe6e-28b9-4c53-9640-842a0d4502be"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:11:28.702972 kubelet[2523]: I0213 19:11:28.702636 2523 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-hostproc" (OuterVolumeSpecName: "hostproc") pod "3311fe6e-28b9-4c53-9640-842a0d4502be" (UID: "3311fe6e-28b9-4c53-9640-842a0d4502be"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:11:28.702972 kubelet[2523]: I0213 19:11:28.702652 2523 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3311fe6e-28b9-4c53-9640-842a0d4502be" (UID: "3311fe6e-28b9-4c53-9640-842a0d4502be"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:11:28.704419 kubelet[2523]: I0213 19:11:28.704384 2523 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3311fe6e-28b9-4c53-9640-842a0d4502be-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3311fe6e-28b9-4c53-9640-842a0d4502be" (UID: "3311fe6e-28b9-4c53-9640-842a0d4502be"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 19:11:28.704586 kubelet[2523]: I0213 19:11:28.704497 2523 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3311fe6e-28b9-4c53-9640-842a0d4502be-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3311fe6e-28b9-4c53-9640-842a0d4502be" (UID: "3311fe6e-28b9-4c53-9640-842a0d4502be"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:11:28.704586 kubelet[2523]: I0213 19:11:28.704550 2523 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3311fe6e-28b9-4c53-9640-842a0d4502be-kube-api-access-zw584" (OuterVolumeSpecName: "kube-api-access-zw584") pod "3311fe6e-28b9-4c53-9640-842a0d4502be" (UID: "3311fe6e-28b9-4c53-9640-842a0d4502be"). InnerVolumeSpecName "kube-api-access-zw584". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:11:28.803046 kubelet[2523]: I0213 19:11:28.802899 2523 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:28.803046 kubelet[2523]: I0213 19:11:28.802937 2523 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:28.803046 kubelet[2523]: I0213 19:11:28.802948 2523 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3311fe6e-28b9-4c53-9640-842a0d4502be-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:28.803046 kubelet[2523]: I0213 19:11:28.802956 2523 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:28.803046 kubelet[2523]: I0213 19:11:28.802965 2523 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:28.803046 kubelet[2523]: I0213 19:11:28.802973 2523 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:28.803046 kubelet[2523]: I0213 19:11:28.802980 2523 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:28.803046 kubelet[2523]: I0213 19:11:28.802987 2523 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3311fe6e-28b9-4c53-9640-842a0d4502be-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 13 
19:11:28.803358 kubelet[2523]: I0213 19:11:28.802995 2523 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:28.803358 kubelet[2523]: I0213 19:11:28.803002 2523 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zw584\" (UniqueName: \"kubernetes.io/projected/3311fe6e-28b9-4c53-9640-842a0d4502be-kube-api-access-zw584\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:28.803358 kubelet[2523]: I0213 19:11:28.803011 2523 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:28.803358 kubelet[2523]: I0213 19:11:28.803018 2523 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:28.803358 kubelet[2523]: I0213 19:11:28.803025 2523 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3311fe6e-28b9-4c53-9640-842a0d4502be-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 13 19:11:28.881222 systemd[1]: Removed slice kubepods-besteffort-pod53beb2b7_1c5b_4a76_a054_da7a423da2ba.slice - libcontainer container kubepods-besteffort-pod53beb2b7_1c5b_4a76_a054_da7a423da2ba.slice. Feb 13 19:11:29.415463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05424278ba1609d864f4c50835ff7290313cace2fffd624318780e735975fc91-rootfs.mount: Deactivated successfully. Feb 13 19:11:29.415564 systemd[1]: var-lib-kubelet-pods-53beb2b7\x2d1c5b\x2d4a76\x2da054\x2dda7a423da2ba-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2fcrz.mount: Deactivated successfully. Feb 13 19:11:29.415626 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d532dd95161d8f38937a7eb212c15b0ca3f4d31326afa20758271e6d0c5e0062-rootfs.mount: Deactivated successfully. Feb 13 19:11:29.415671 systemd[1]: var-lib-kubelet-pods-3311fe6e\x2d28b9\x2d4c53\x2d9640\x2d842a0d4502be-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzw584.mount: Deactivated successfully. Feb 13 19:11:29.415724 systemd[1]: var-lib-kubelet-pods-3311fe6e\x2d28b9\x2d4c53\x2d9640\x2d842a0d4502be-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 19:11:29.415777 systemd[1]: var-lib-kubelet-pods-3311fe6e\x2d28b9\x2d4c53\x2d9640\x2d842a0d4502be-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 19:11:29.587348 kubelet[2523]: I0213 19:11:29.587126 2523 scope.go:117] "RemoveContainer" containerID="c10016ebdd21e8a2099cdc190f7d68b9b514aef5ac6cdf77535b70505efba65d" Feb 13 19:11:29.598412 containerd[1444]: time="2025-02-13T19:11:29.598376172Z" level=info msg="RemoveContainer for \"c10016ebdd21e8a2099cdc190f7d68b9b514aef5ac6cdf77535b70505efba65d\"" Feb 13 19:11:29.600622 systemd[1]: Removed slice kubepods-burstable-pod3311fe6e_28b9_4c53_9640_842a0d4502be.slice - libcontainer container kubepods-burstable-pod3311fe6e_28b9_4c53_9640_842a0d4502be.slice. Feb 13 19:11:29.600712 systemd[1]: kubepods-burstable-pod3311fe6e_28b9_4c53_9640_842a0d4502be.slice: Consumed 6.645s CPU time. 
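The mount-unit names above ("var-lib-kubelet-pods-...kube\x2dapi\x2daccess\x2d2fcrz.mount") are systemd path escaping at work: "/" maps to "-" and other special bytes, including the literal "-" in the pod UID, become \xNN. A rough sketch of that encoding follows; this is an approximation, not systemd's reference implementation (see systemd-escape(1)):

```go
package main

import (
	"fmt"
	"strings"
)

// systemdEscapePath approximates systemd's path escaping: strip slashes at the
// ends, turn "/" into "-", and hex-escape anything outside [a-zA-Z0-9:_.]
// (plus a leading "."). A literal "-" therefore becomes \x2d.
func systemdEscapePath(path string) string {
	path = strings.Trim(path, "/")
	var b strings.Builder
	for i := 0; i < len(path); i++ {
		c := path[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == ':':
			b.WriteByte(c)
		case c == '.' && i > 0:
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Produces the shape of the mount-unit names in the log above.
	fmt.Println(systemdEscapePath(
		"/var/lib/kubelet/pods/53beb2b7-1c5b-4a76-a054-da7a423da2ba/volumes") + ".mount")
}
```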
Feb 13 19:11:29.608090 containerd[1444]: time="2025-02-13T19:11:29.608039522Z" level=info msg="RemoveContainer for \"c10016ebdd21e8a2099cdc190f7d68b9b514aef5ac6cdf77535b70505efba65d\" returns successfully" Feb 13 19:11:29.609426 kubelet[2523]: I0213 19:11:29.608383 2523 scope.go:117] "RemoveContainer" containerID="f59740f215f06a7a59a2ffc33d979afecde82c9b73dcafea096f26ca9b6ac21b" Feb 13 19:11:29.614781 containerd[1444]: time="2025-02-13T19:11:29.614694307Z" level=info msg="RemoveContainer for \"f59740f215f06a7a59a2ffc33d979afecde82c9b73dcafea096f26ca9b6ac21b\"" Feb 13 19:11:29.621939 containerd[1444]: time="2025-02-13T19:11:29.621873708Z" level=info msg="RemoveContainer for \"f59740f215f06a7a59a2ffc33d979afecde82c9b73dcafea096f26ca9b6ac21b\" returns successfully" Feb 13 19:11:29.622145 kubelet[2523]: I0213 19:11:29.622107 2523 scope.go:117] "RemoveContainer" containerID="653924421c227ec11e4f889f23e5509f03e1a536106334fda581b80b52a45f08" Feb 13 19:11:29.623256 containerd[1444]: time="2025-02-13T19:11:29.623218265Z" level=info msg="RemoveContainer for \"653924421c227ec11e4f889f23e5509f03e1a536106334fda581b80b52a45f08\"" Feb 13 19:11:29.625959 containerd[1444]: time="2025-02-13T19:11:29.625908141Z" level=info msg="RemoveContainer for \"653924421c227ec11e4f889f23e5509f03e1a536106334fda581b80b52a45f08\" returns successfully" Feb 13 19:11:29.626168 kubelet[2523]: I0213 19:11:29.626099 2523 scope.go:117] "RemoveContainer" containerID="f953dc178d3c835b93d382b1840cc91804747febab68e8685b416ac04fe78621" Feb 13 19:11:29.627538 containerd[1444]: time="2025-02-13T19:11:29.627499825Z" level=info msg="RemoveContainer for \"f953dc178d3c835b93d382b1840cc91804747febab68e8685b416ac04fe78621\"" Feb 13 19:11:29.630168 containerd[1444]: time="2025-02-13T19:11:29.630125698Z" level=info msg="RemoveContainer for \"f953dc178d3c835b93d382b1840cc91804747febab68e8685b416ac04fe78621\" returns successfully" Feb 13 19:11:29.630584 kubelet[2523]: I0213 19:11:29.630547 2523 scope.go:117] "RemoveContainer" containerID="04ac609294350f762df65b69a75ebc5d75adb2fe537f6a8b8a64b460836ef74b" Feb 13 19:11:29.631423 containerd[1444]: time="2025-02-13T19:11:29.631340132Z" level=info msg="RemoveContainer for \"04ac609294350f762df65b69a75ebc5d75adb2fe537f6a8b8a64b460836ef74b\"" Feb 13 19:11:29.636163 containerd[1444]: time="2025-02-13T19:11:29.633625876Z" level=info msg="RemoveContainer for \"04ac609294350f762df65b69a75ebc5d75adb2fe537f6a8b8a64b460836ef74b\" returns successfully" Feb 13 19:11:30.344564 sshd[4167]: Connection closed by 10.0.0.1 port 38180 Feb 13 19:11:30.345518 sshd-session[4165]: pam_unix(sshd:session): session closed for user core Feb 13 19:11:30.353315 kubelet[2523]: I0213 19:11:30.352787 2523 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3311fe6e-28b9-4c53-9640-842a0d4502be" path="/var/lib/kubelet/pods/3311fe6e-28b9-4c53-9640-842a0d4502be/volumes" Feb 13 19:11:30.353393 kubelet[2523]: I0213 19:11:30.353346 2523 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53beb2b7-1c5b-4a76-a054-da7a423da2ba" path="/var/lib/kubelet/pods/53beb2b7-1c5b-4a76-a054-da7a423da2ba/volumes" Feb 13 19:11:30.353389 systemd[1]: sshd@22-10.0.0.80:22-10.0.0.1:38180.service: Deactivated successfully. Feb 13 19:11:30.355561 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:11:30.359714 systemd-logind[1419]: Session 23 logged out. Waiting for processes to exit. 
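The RemoveContainer sequence above, together with the earlier "ContainerStatus ... NotFound" error, is the normal delete race: once a container is removed, a follow-up status query fails with gRPC NotFound, and the kubelet treats that as "already gone". A sketch of such a tolerant removal loop, reusing the hypothetical CRI client setup from the earlier example; IDs are truncated placeholders:

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	for _, id := range []string{"c10016ebdd21", "f59740f215f0"} { // truncated placeholders
		if _, err := rt.RemoveContainer(ctx, &runtimeapi.RemoveContainerRequest{ContainerId: id}); err != nil {
			log.Printf("remove %s: %v", id, err)
		}
		// After removal, ContainerStatus is expected to fail with NotFound,
		// exactly as in the "DeleteContainer returned error" lines above.
		if _, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id}); status.Code(err) == codes.NotFound {
			log.Printf("%s already gone", id)
		}
	}
}
```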
Feb 13 19:11:30.374429 systemd[1]: Started sshd@23-10.0.0.80:22-10.0.0.1:38184.service - OpenSSH per-connection server daemon (10.0.0.1:38184). Feb 13 19:11:30.376433 systemd-logind[1419]: Removed session 23. Feb 13 19:11:30.417995 sshd[4324]: Accepted publickey for core from 10.0.0.1 port 38184 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:11:30.419399 sshd-session[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:11:30.425074 systemd-logind[1419]: New session 24 of user core. Feb 13 19:11:30.434278 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 19:11:31.398265 sshd[4326]: Connection closed by 10.0.0.1 port 38184 Feb 13 19:11:31.399381 sshd-session[4324]: pam_unix(sshd:session): session closed for user core Feb 13 19:11:31.408722 systemd[1]: sshd@23-10.0.0.80:22-10.0.0.1:38184.service: Deactivated successfully. Feb 13 19:11:31.410079 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 19:11:31.410513 kubelet[2523]: E0213 19:11:31.410132 2523 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:11:31.413376 kubelet[2523]: I0213 19:11:31.412965 2523 memory_manager.go:355] "RemoveStaleState removing state" podUID="53beb2b7-1c5b-4a76-a054-da7a423da2ba" containerName="cilium-operator" Feb 13 19:11:31.413376 kubelet[2523]: I0213 19:11:31.412995 2523 memory_manager.go:355] "RemoveStaleState removing state" podUID="3311fe6e-28b9-4c53-9640-842a0d4502be" containerName="cilium-agent" Feb 13 19:11:31.415385 systemd-logind[1419]: Session 24 logged out. Waiting for processes to exit. Feb 13 19:11:31.426777 systemd[1]: Started sshd@24-10.0.0.80:22-10.0.0.1:38196.service - OpenSSH per-connection server daemon (10.0.0.1:38196). Feb 13 19:11:31.434893 systemd-logind[1419]: Removed session 24. Feb 13 19:11:31.448203 systemd[1]: Created slice kubepods-burstable-pod7f2a53c9_1872_499e_ab8e_9251bb82322b.slice - libcontainer container kubepods-burstable-pod7f2a53c9_1872_499e_ab8e_9251bb82322b.slice. Feb 13 19:11:31.484442 sshd[4337]: Accepted publickey for core from 10.0.0.1 port 38196 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:11:31.485908 sshd-session[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:11:31.490406 systemd-logind[1419]: New session 25 of user core. Feb 13 19:11:31.501321 systemd[1]: Started session-25.scope - Session 25 of User core. 
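Before the replacement pod starts, the kubelet drops stale memory-manager state for the two removed pod UIDs and systemd creates a fresh "kubepods-burstable-pod7f2a53c9_..." slice. The slice name is mechanical: QoS class in the name, pod UID with dashes mapped to underscores. A small sketch of that mapping, inferred from the names in this log rather than lifted from kubelet source:

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName mirrors the convention visible in the "Created slice" line:
// kubepods-<qos>-pod<uid-with-underscores>.slice.
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("burstable", "7f2a53c9-1872-499e-ab8e-9251bb82322b"))
	// kubepods-burstable-pod7f2a53c9_1872_499e_ab8e_9251bb82322b.slice
}
```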
Feb 13 19:11:31.518521 kubelet[2523]: I0213 19:11:31.518169 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f2a53c9-1872-499e-ab8e-9251bb82322b-lib-modules\") pod \"cilium-cckm7\" (UID: \"7f2a53c9-1872-499e-ab8e-9251bb82322b\") " pod="kube-system/cilium-cckm7" Feb 13 19:11:31.518521 kubelet[2523]: I0213 19:11:31.518208 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7f2a53c9-1872-499e-ab8e-9251bb82322b-hubble-tls\") pod \"cilium-cckm7\" (UID: \"7f2a53c9-1872-499e-ab8e-9251bb82322b\") " pod="kube-system/cilium-cckm7" Feb 13 19:11:31.518521 kubelet[2523]: I0213 19:11:31.518232 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7f2a53c9-1872-499e-ab8e-9251bb82322b-host-proc-sys-kernel\") pod \"cilium-cckm7\" (UID: \"7f2a53c9-1872-499e-ab8e-9251bb82322b\") " pod="kube-system/cilium-cckm7" Feb 13 19:11:31.518521 kubelet[2523]: I0213 19:11:31.518264 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f2a53c9-1872-499e-ab8e-9251bb82322b-clustermesh-secrets\") pod \"cilium-cckm7\" (UID: \"7f2a53c9-1872-499e-ab8e-9251bb82322b\") " pod="kube-system/cilium-cckm7" Feb 13 19:11:31.518521 kubelet[2523]: I0213 19:11:31.518295 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f2a53c9-1872-499e-ab8e-9251bb82322b-bpf-maps\") pod \"cilium-cckm7\" (UID: \"7f2a53c9-1872-499e-ab8e-9251bb82322b\") " pod="kube-system/cilium-cckm7" Feb 13 19:11:31.518521 kubelet[2523]: I0213 19:11:31.518309 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f2a53c9-1872-499e-ab8e-9251bb82322b-hostproc\") pod \"cilium-cckm7\" (UID: \"7f2a53c9-1872-499e-ab8e-9251bb82322b\") " pod="kube-system/cilium-cckm7" Feb 13 19:11:31.518757 kubelet[2523]: I0213 19:11:31.518325 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f2a53c9-1872-499e-ab8e-9251bb82322b-xtables-lock\") pod \"cilium-cckm7\" (UID: \"7f2a53c9-1872-499e-ab8e-9251bb82322b\") " pod="kube-system/cilium-cckm7" Feb 13 19:11:31.518757 kubelet[2523]: I0213 19:11:31.518351 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f2a53c9-1872-499e-ab8e-9251bb82322b-cilium-config-path\") pod \"cilium-cckm7\" (UID: \"7f2a53c9-1872-499e-ab8e-9251bb82322b\") " pod="kube-system/cilium-cckm7" Feb 13 19:11:31.518757 kubelet[2523]: I0213 19:11:31.518374 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr799\" (UniqueName: \"kubernetes.io/projected/7f2a53c9-1872-499e-ab8e-9251bb82322b-kube-api-access-tr799\") pod \"cilium-cckm7\" (UID: \"7f2a53c9-1872-499e-ab8e-9251bb82322b\") " pod="kube-system/cilium-cckm7" Feb 13 19:11:31.518757 kubelet[2523]: I0213 19:11:31.518392 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/7f2a53c9-1872-499e-ab8e-9251bb82322b-cilium-run\") pod \"cilium-cckm7\" (UID: \"7f2a53c9-1872-499e-ab8e-9251bb82322b\") " pod="kube-system/cilium-cckm7" Feb 13 19:11:31.518757 kubelet[2523]: I0213 19:11:31.518408 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7f2a53c9-1872-499e-ab8e-9251bb82322b-cilium-ipsec-secrets\") pod \"cilium-cckm7\" (UID: \"7f2a53c9-1872-499e-ab8e-9251bb82322b\") " pod="kube-system/cilium-cckm7" Feb 13 19:11:31.518869 kubelet[2523]: I0213 19:11:31.518424 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f2a53c9-1872-499e-ab8e-9251bb82322b-cilium-cgroup\") pod \"cilium-cckm7\" (UID: \"7f2a53c9-1872-499e-ab8e-9251bb82322b\") " pod="kube-system/cilium-cckm7" Feb 13 19:11:31.518869 kubelet[2523]: I0213 19:11:31.518439 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f2a53c9-1872-499e-ab8e-9251bb82322b-cni-path\") pod \"cilium-cckm7\" (UID: \"7f2a53c9-1872-499e-ab8e-9251bb82322b\") " pod="kube-system/cilium-cckm7" Feb 13 19:11:31.518869 kubelet[2523]: I0213 19:11:31.518458 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f2a53c9-1872-499e-ab8e-9251bb82322b-etc-cni-netd\") pod \"cilium-cckm7\" (UID: \"7f2a53c9-1872-499e-ab8e-9251bb82322b\") " pod="kube-system/cilium-cckm7" Feb 13 19:11:31.518869 kubelet[2523]: I0213 19:11:31.518474 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f2a53c9-1872-499e-ab8e-9251bb82322b-host-proc-sys-net\") pod \"cilium-cckm7\" (UID: \"7f2a53c9-1872-499e-ab8e-9251bb82322b\") " pod="kube-system/cilium-cckm7" Feb 13 19:11:31.553128 sshd[4339]: Connection closed by 10.0.0.1 port 38196 Feb 13 19:11:31.553648 sshd-session[4337]: pam_unix(sshd:session): session closed for user core Feb 13 19:11:31.566956 systemd[1]: sshd@24-10.0.0.80:22-10.0.0.1:38196.service: Deactivated successfully. Feb 13 19:11:31.568985 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 19:11:31.570536 systemd-logind[1419]: Session 25 logged out. Waiting for processes to exit. Feb 13 19:11:31.575492 systemd[1]: Started sshd@25-10.0.0.80:22-10.0.0.1:38198.service - OpenSSH per-connection server daemon (10.0.0.1:38198). Feb 13 19:11:31.577264 systemd-logind[1419]: Removed session 25. Feb 13 19:11:31.616133 sshd[4345]: Accepted publickey for core from 10.0.0.1 port 38198 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:11:31.616717 sshd-session[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:11:31.622891 systemd-logind[1419]: New session 26 of user core. Feb 13 19:11:31.636174 systemd[1]: Started session-26.scope - Session 26 of User core. 
Feb 13 19:11:31.753224 kubelet[2523]: E0213 19:11:31.752918 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:11:31.755453 containerd[1444]: time="2025-02-13T19:11:31.754878606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cckm7,Uid:7f2a53c9-1872-499e-ab8e-9251bb82322b,Namespace:kube-system,Attempt:0,}" Feb 13 19:11:31.781219 containerd[1444]: time="2025-02-13T19:11:31.781109026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:11:31.781219 containerd[1444]: time="2025-02-13T19:11:31.781200348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:11:31.781219 containerd[1444]: time="2025-02-13T19:11:31.781216629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:11:31.781385 containerd[1444]: time="2025-02-13T19:11:31.781310111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:11:31.804334 systemd[1]: Started cri-containerd-a8cf65cdd9e404b11ff3295f64ced760e98085887bb42dbcf2302d18036909c8.scope - libcontainer container a8cf65cdd9e404b11ff3295f64ced760e98085887bb42dbcf2302d18036909c8. Feb 13 19:11:31.825264 containerd[1444]: time="2025-02-13T19:11:31.825216124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cckm7,Uid:7f2a53c9-1872-499e-ab8e-9251bb82322b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8cf65cdd9e404b11ff3295f64ced760e98085887bb42dbcf2302d18036909c8\"" Feb 13 19:11:31.826178 kubelet[2523]: E0213 19:11:31.826107 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:11:31.828535 containerd[1444]: time="2025-02-13T19:11:31.828498771Z" level=info msg="CreateContainer within sandbox \"a8cf65cdd9e404b11ff3295f64ced760e98085887bb42dbcf2302d18036909c8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:11:31.839956 containerd[1444]: time="2025-02-13T19:11:31.839818153Z" level=info msg="CreateContainer within sandbox \"a8cf65cdd9e404b11ff3295f64ced760e98085887bb42dbcf2302d18036909c8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b6b8d18b24a6c4ade69f7a9784fcfcd0f1272b01de7c0dab110420b38f4df42a\"" Feb 13 19:11:31.840551 containerd[1444]: time="2025-02-13T19:11:31.840522132Z" level=info msg="StartContainer for \"b6b8d18b24a6c4ade69f7a9784fcfcd0f1272b01de7c0dab110420b38f4df42a\"" Feb 13 19:11:31.873349 systemd[1]: Started cri-containerd-b6b8d18b24a6c4ade69f7a9784fcfcd0f1272b01de7c0dab110420b38f4df42a.scope - libcontainer container b6b8d18b24a6c4ade69f7a9784fcfcd0f1272b01de7c0dab110420b38f4df42a. Feb 13 19:11:31.897068 containerd[1444]: time="2025-02-13T19:11:31.897025281Z" level=info msg="StartContainer for \"b6b8d18b24a6c4ade69f7a9784fcfcd0f1272b01de7c0dab110420b38f4df42a\" returns successfully" Feb 13 19:11:31.964079 systemd[1]: cri-containerd-b6b8d18b24a6c4ade69f7a9784fcfcd0f1272b01de7c0dab110420b38f4df42a.scope: Deactivated successfully. 
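The recurring "Nameserver limits exceeded" events that bracket the sandbox start reflect the kubelet's resolv.conf check: the libc resolver only honours the first three nameserver entries, so everything past 1.1.1.1, 1.0.0.1 and 8.8.8.8 is dropped with a warning. A rough sketch of that validation, not the kubelet's actual dns.go:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // the libc resolver limit behind the warning

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}
```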
Feb 13 19:11:31.993011 containerd[1444]: time="2025-02-13T19:11:31.992952762Z" level=info msg="shim disconnected" id=b6b8d18b24a6c4ade69f7a9784fcfcd0f1272b01de7c0dab110420b38f4df42a namespace=k8s.io Feb 13 19:11:31.993011 containerd[1444]: time="2025-02-13T19:11:31.993005843Z" level=warning msg="cleaning up after shim disconnected" id=b6b8d18b24a6c4ade69f7a9784fcfcd0f1272b01de7c0dab110420b38f4df42a namespace=k8s.io Feb 13 19:11:31.993307 containerd[1444]: time="2025-02-13T19:11:31.993015084Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:11:32.608439 kubelet[2523]: E0213 19:11:32.608391 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:11:32.611086 containerd[1444]: time="2025-02-13T19:11:32.611046508Z" level=info msg="CreateContainer within sandbox \"a8cf65cdd9e404b11ff3295f64ced760e98085887bb42dbcf2302d18036909c8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:11:32.643048 containerd[1444]: time="2025-02-13T19:11:32.642920701Z" level=info msg="CreateContainer within sandbox \"a8cf65cdd9e404b11ff3295f64ced760e98085887bb42dbcf2302d18036909c8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7273c4578f6e821e434f023a4089e2424e91e6ef6ad060d2b4de948d5badde77\"" Feb 13 19:11:32.645830 containerd[1444]: time="2025-02-13T19:11:32.645084157Z" level=info msg="StartContainer for \"7273c4578f6e821e434f023a4089e2424e91e6ef6ad060d2b4de948d5badde77\"" Feb 13 19:11:32.690399 systemd[1]: Started cri-containerd-7273c4578f6e821e434f023a4089e2424e91e6ef6ad060d2b4de948d5badde77.scope - libcontainer container 7273c4578f6e821e434f023a4089e2424e91e6ef6ad060d2b4de948d5badde77. Feb 13 19:11:32.725753 systemd[1]: cri-containerd-7273c4578f6e821e434f023a4089e2424e91e6ef6ad060d2b4de948d5badde77.scope: Deactivated successfully. Feb 13 19:11:32.727253 containerd[1444]: time="2025-02-13T19:11:32.727210942Z" level=info msg="StartContainer for \"7273c4578f6e821e434f023a4089e2424e91e6ef6ad060d2b4de948d5badde77\" returns successfully" Feb 13 19:11:32.747702 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7273c4578f6e821e434f023a4089e2424e91e6ef6ad060d2b4de948d5badde77-rootfs.mount: Deactivated successfully. 
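"shim disconnected" followed by "cleaning up after shim disconnected" is the routine epilogue of a short-lived init container such as mount-cgroup: the shim exits with the task and containerd reaps it. The same lifecycle can be watched from outside via containerd's event stream; a sketch, assuming the stock Go client (the filter expression follows ctr's filter syntax):

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	// Watch task exit/delete events, the moments the "shim disconnected"
	// messages above correspond to.
	ch, errs := client.Subscribe(ctx, `topic~="/tasks/(exit|delete)"`)
	for {
		select {
		case env := <-ch:
			log.Printf("%s %s", env.Topic, env.Namespace)
		case err := <-errs:
			log.Fatal(err)
		}
	}
}
```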
Feb 13 19:11:32.757583 containerd[1444]: time="2025-02-13T19:11:32.757519774Z" level=info msg="shim disconnected" id=7273c4578f6e821e434f023a4089e2424e91e6ef6ad060d2b4de948d5badde77 namespace=k8s.io Feb 13 19:11:32.758131 containerd[1444]: time="2025-02-13T19:11:32.757965865Z" level=warning msg="cleaning up after shim disconnected" id=7273c4578f6e821e434f023a4089e2424e91e6ef6ad060d2b4de948d5badde77 namespace=k8s.io Feb 13 19:11:32.758131 containerd[1444]: time="2025-02-13T19:11:32.757990026Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:11:33.611524 kubelet[2523]: E0213 19:11:33.611462 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:11:33.612979 containerd[1444]: time="2025-02-13T19:11:33.612869685Z" level=info msg="CreateContainer within sandbox \"a8cf65cdd9e404b11ff3295f64ced760e98085887bb42dbcf2302d18036909c8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:11:33.631821 containerd[1444]: time="2025-02-13T19:11:33.631709927Z" level=info msg="CreateContainer within sandbox \"a8cf65cdd9e404b11ff3295f64ced760e98085887bb42dbcf2302d18036909c8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"978785cdbe0f40195d01abaf45745180408102115f2a183a3e96dbfebbcc5841\"" Feb 13 19:11:33.632192 containerd[1444]: time="2025-02-13T19:11:33.632165178Z" level=info msg="StartContainer for \"978785cdbe0f40195d01abaf45745180408102115f2a183a3e96dbfebbcc5841\"" Feb 13 19:11:33.654739 systemd[1]: run-containerd-runc-k8s.io-978785cdbe0f40195d01abaf45745180408102115f2a183a3e96dbfebbcc5841-runc.sNj1tu.mount: Deactivated successfully. Feb 13 19:11:33.666341 systemd[1]: Started cri-containerd-978785cdbe0f40195d01abaf45745180408102115f2a183a3e96dbfebbcc5841.scope - libcontainer container 978785cdbe0f40195d01abaf45745180408102115f2a183a3e96dbfebbcc5841. Feb 13 19:11:33.697402 containerd[1444]: time="2025-02-13T19:11:33.697320723Z" level=info msg="StartContainer for \"978785cdbe0f40195d01abaf45745180408102115f2a183a3e96dbfebbcc5841\" returns successfully" Feb 13 19:11:33.698122 systemd[1]: cri-containerd-978785cdbe0f40195d01abaf45745180408102115f2a183a3e96dbfebbcc5841.scope: Deactivated successfully. Feb 13 19:11:33.715929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-978785cdbe0f40195d01abaf45745180408102115f2a183a3e96dbfebbcc5841-rootfs.mount: Deactivated successfully. 
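The mount-bpf-fs step that runs next amounts to ensuring a BPF filesystem is mounted at /sys/fs/bpf. A minimal sketch of that operation with golang.org/x/sys/unix; root is required, and Cilium's real init script does considerably more probing and error handling:

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Equivalent of the mount-bpf-fs init step: mount bpffs at /sys/fs/bpf.
	// Error handling is simplified; an existing mount typically surfaces as EBUSY.
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil && err != unix.EBUSY {
		log.Fatal(err)
	}
}
```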
Feb 13 19:11:33.722311 containerd[1444]: time="2025-02-13T19:11:33.722228239Z" level=info msg="shim disconnected" id=978785cdbe0f40195d01abaf45745180408102115f2a183a3e96dbfebbcc5841 namespace=k8s.io Feb 13 19:11:33.722311 containerd[1444]: time="2025-02-13T19:11:33.722299081Z" level=warning msg="cleaning up after shim disconnected" id=978785cdbe0f40195d01abaf45745180408102115f2a183a3e96dbfebbcc5841 namespace=k8s.io Feb 13 19:11:33.722311 containerd[1444]: time="2025-02-13T19:11:33.722309761Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:11:33.732086 containerd[1444]: time="2025-02-13T19:11:33.732040050Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:11:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:11:34.615864 kubelet[2523]: E0213 19:11:34.615836 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:11:34.618192 containerd[1444]: time="2025-02-13T19:11:34.618158352Z" level=info msg="CreateContainer within sandbox \"a8cf65cdd9e404b11ff3295f64ced760e98085887bb42dbcf2302d18036909c8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:11:34.633092 containerd[1444]: time="2025-02-13T19:11:34.633021644Z" level=info msg="CreateContainer within sandbox \"a8cf65cdd9e404b11ff3295f64ced760e98085887bb42dbcf2302d18036909c8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ffd9e0966f21bb9bea32e2fc4c8c7bfefcb553c65a1f3851e639c41858496af8\"" Feb 13 19:11:34.633887 containerd[1444]: time="2025-02-13T19:11:34.633844424Z" level=info msg="StartContainer for \"ffd9e0966f21bb9bea32e2fc4c8c7bfefcb553c65a1f3851e639c41858496af8\"" Feb 13 19:11:34.669348 systemd[1]: Started cri-containerd-ffd9e0966f21bb9bea32e2fc4c8c7bfefcb553c65a1f3851e639c41858496af8.scope - libcontainer container ffd9e0966f21bb9bea32e2fc4c8c7bfefcb553c65a1f3851e639c41858496af8. Feb 13 19:11:34.689920 systemd[1]: cri-containerd-ffd9e0966f21bb9bea32e2fc4c8c7bfefcb553c65a1f3851e639c41858496af8.scope: Deactivated successfully. 
Feb 13 19:11:34.691504 containerd[1444]: time="2025-02-13T19:11:34.691460305Z" level=info msg="StartContainer for \"ffd9e0966f21bb9bea32e2fc4c8c7bfefcb553c65a1f3851e639c41858496af8\" returns successfully" Feb 13 19:11:34.714446 containerd[1444]: time="2025-02-13T19:11:34.714379398Z" level=info msg="shim disconnected" id=ffd9e0966f21bb9bea32e2fc4c8c7bfefcb553c65a1f3851e639c41858496af8 namespace=k8s.io Feb 13 19:11:34.714446 containerd[1444]: time="2025-02-13T19:11:34.714431559Z" level=warning msg="cleaning up after shim disconnected" id=ffd9e0966f21bb9bea32e2fc4c8c7bfefcb553c65a1f3851e639c41858496af8 namespace=k8s.io Feb 13 19:11:34.714446 containerd[1444]: time="2025-02-13T19:11:34.714440759Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:11:35.620176 kubelet[2523]: E0213 19:11:35.619988 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:11:35.623316 containerd[1444]: time="2025-02-13T19:11:35.623222350Z" level=info msg="CreateContainer within sandbox \"a8cf65cdd9e404b11ff3295f64ced760e98085887bb42dbcf2302d18036909c8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:11:35.628105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ffd9e0966f21bb9bea32e2fc4c8c7bfefcb553c65a1f3851e639c41858496af8-rootfs.mount: Deactivated successfully. Feb 13 19:11:35.642218 containerd[1444]: time="2025-02-13T19:11:35.642134773Z" level=info msg="CreateContainer within sandbox \"a8cf65cdd9e404b11ff3295f64ced760e98085887bb42dbcf2302d18036909c8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9c2b48e0def45c12be3b4e1799fe82c4b80fcfe5e8b669963b71d605ed84aa5e\"" Feb 13 19:11:35.643730 containerd[1444]: time="2025-02-13T19:11:35.642957433Z" level=info msg="StartContainer for \"9c2b48e0def45c12be3b4e1799fe82c4b80fcfe5e8b669963b71d605ed84aa5e\"" Feb 13 19:11:35.675309 systemd[1]: Started cri-containerd-9c2b48e0def45c12be3b4e1799fe82c4b80fcfe5e8b669963b71d605ed84aa5e.scope - libcontainer container 9c2b48e0def45c12be3b4e1799fe82c4b80fcfe5e8b669963b71d605ed84aa5e. 
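Each "Started cri-containerd-<id>.scope - libcontainer container <id>" line corresponds to a transient systemd scope created for the container's cgroup. A sketch of creating such a scope over D-Bus with go-systemd; the unit name, description, and PID are placeholders:

```go
package main

import (
	"context"
	"log"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()
	conn, err := dbus.NewSystemConnectionContext(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	props := []dbus.Property{
		dbus.PropDescription("libcontainer container example"), // placeholder description
		dbus.PropPids(12345),                                   // placeholder PID to adopt into the scope
	}
	done := make(chan string, 1)
	if _, err := conn.StartTransientUnitContext(ctx,
		"cri-containerd-example.scope", "replace", props, done); err != nil {
		log.Fatal(err)
	}
	log.Println("job result:", <-done)
}
```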
Feb 13 19:11:35.699878 containerd[1444]: time="2025-02-13T19:11:35.699839665Z" level=info msg="StartContainer for \"9c2b48e0def45c12be3b4e1799fe82c4b80fcfe5e8b669963b71d605ed84aa5e\" returns successfully" Feb 13 19:11:35.949183 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Feb 13 19:11:36.626946 kubelet[2523]: E0213 19:11:36.624991 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:11:37.349681 kubelet[2523]: E0213 19:11:37.349648 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:11:37.755119 kubelet[2523]: E0213 19:11:37.755010 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:11:38.350592 kubelet[2523]: E0213 19:11:38.350005 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:11:38.888570 systemd-networkd[1359]: lxc_health: Link UP Feb 13 19:11:38.905065 systemd-networkd[1359]: lxc_health: Gained carrier Feb 13 19:11:39.756065 kubelet[2523]: E0213 19:11:39.756021 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:11:39.777709 kubelet[2523]: I0213 19:11:39.777640 2523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cckm7" podStartSLOduration=8.777623102 podStartE2EDuration="8.777623102s" podCreationTimestamp="2025-02-13 19:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:11:36.640422352 +0000 UTC m=+80.379979439" watchObservedRunningTime="2025-02-13 19:11:39.777623102 +0000 UTC m=+83.517180149" Feb 13 19:11:40.254343 systemd-networkd[1359]: lxc_health: Gained IPv6LL Feb 13 19:11:40.631025 kubelet[2523]: E0213 19:11:40.630918 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:11:41.632896 kubelet[2523]: E0213 19:11:41.632850 2523 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:11:44.522192 sshd[4351]: Connection closed by 10.0.0.1 port 38198 Feb 13 19:11:44.522215 sshd-session[4345]: pam_unix(sshd:session): session closed for user core Feb 13 19:11:44.525514 systemd[1]: sshd@25-10.0.0.80:22-10.0.0.1:38198.service: Deactivated successfully. Feb 13 19:11:44.528940 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 19:11:44.530389 systemd-logind[1419]: Session 26 logged out. Waiting for processes to exit. Feb 13 19:11:44.531481 systemd-logind[1419]: Removed session 26.
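The lxc_health veth coming up ("Link UP", "Gained carrier", later an IPv6LL address) is the cilium-agent's health endpoint, and its return is what clears the earlier "cni plugin not initialized" network-readiness errors. For completeness, a sketch that watches the same carrier transitions over rtnetlink, using the vishvananda/netlink package (an assumption; systemd-networkd itself speaks rtnetlink directly):

```go
package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	updates := make(chan netlink.LinkUpdate)
	done := make(chan struct{})
	defer close(done)

	if err := netlink.LinkSubscribe(updates, done); err != nil {
		log.Fatal(err)
	}
	for u := range updates {
		if attrs := u.Link.Attrs(); attrs.Name == "lxc_health" {
			// OperState mirrors the "Link DOWN/UP", "Lost/Gained carrier" transitions.
			log.Printf("lxc_health: %s", attrs.OperState)
		}
	}
}
```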