Jan 30 13:02:46.970384 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 30 13:02:46.970408 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:30:22 -00 2025
Jan 30 13:02:46.970419 kernel: KASLR enabled
Jan 30 13:02:46.970424 kernel: efi: EFI v2.7 by EDK II
Jan 30 13:02:46.970430 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Jan 30 13:02:46.970436 kernel: random: crng init done
Jan 30 13:02:46.970443 kernel: secureboot: Secure boot disabled
Jan 30 13:02:46.970449 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:02:46.970455 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jan 30 13:02:46.970463 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 30 13:02:46.970469 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:02:46.970475 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:02:46.970481 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:02:46.970487 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:02:46.970494 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:02:46.970502 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:02:46.970508 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:02:46.970514 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:02:46.970521 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:02:46.970527 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 30 13:02:46.970533 kernel: NUMA: Failed to initialise from firmware
Jan 30 13:02:46.970539 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 30 13:02:46.970545 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 30 13:02:46.970552 kernel: Zone ranges:
Jan 30 13:02:46.970558 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 30 13:02:46.970566 kernel: DMA32 empty
Jan 30 13:02:46.970572 kernel: Normal empty
Jan 30 13:02:46.970578 kernel: Movable zone start for each node
Jan 30 13:02:46.970584 kernel: Early memory node ranges
Jan 30 13:02:46.970590 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Jan 30 13:02:46.970597 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Jan 30 13:02:46.970603 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Jan 30 13:02:46.970609 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 30 13:02:46.970615 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 30 13:02:46.970622 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 30 13:02:46.970628 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 30 13:02:46.970635 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 30 13:02:46.970642 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 30 13:02:46.970649 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 30 13:02:46.970656 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 30 13:02:46.970665 kernel: psci: probing for conduit method from ACPI.
Jan 30 13:02:46.970672 kernel: psci: PSCIv1.1 detected in firmware.
Jan 30 13:02:46.970678 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 30 13:02:46.970686 kernel: psci: Trusted OS migration not required
Jan 30 13:02:46.970693 kernel: psci: SMC Calling Convention v1.1
Jan 30 13:02:46.970700 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 30 13:02:46.970706 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 30 13:02:46.970714 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 30 13:02:46.970724 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 30 13:02:46.970732 kernel: Detected PIPT I-cache on CPU0
Jan 30 13:02:46.970739 kernel: CPU features: detected: GIC system register CPU interface
Jan 30 13:02:46.970746 kernel: CPU features: detected: Hardware dirty bit management
Jan 30 13:02:46.970753 kernel: CPU features: detected: Spectre-v4
Jan 30 13:02:46.970761 kernel: CPU features: detected: Spectre-BHB
Jan 30 13:02:46.970767 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 30 13:02:46.970774 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 30 13:02:46.970781 kernel: CPU features: detected: ARM erratum 1418040
Jan 30 13:02:46.970788 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 30 13:02:46.970794 kernel: alternatives: applying boot alternatives
Jan 30 13:02:46.970802 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 30 13:02:46.970809 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:02:46.970816 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:02:46.970822 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:02:46.970829 kernel: Fallback order for Node 0: 0
Jan 30 13:02:46.970837 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 30 13:02:46.970844 kernel: Policy zone: DMA
Jan 30 13:02:46.970850 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:02:46.970857 kernel: software IO TLB: area num 4.
Jan 30 13:02:46.970863 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 30 13:02:46.970870 kernel: Memory: 2385940K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 186348K reserved, 0K cma-reserved)
Jan 30 13:02:46.970877 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 30 13:02:46.970884 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:02:46.970891 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:02:46.970908 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 30 13:02:46.970916 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:02:46.970922 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:02:46.970931 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
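The "Kernel command line:" entry above carries Flatcar's dm-verity /usr setup (verity.usr, verity.usrhash) alongside the usual options. As an illustrative aside, not part of the log, here is a minimal Python sketch that splits such a command line into key/value pairs; a running system exposes the same string at /proc/cmdline:

def parse_cmdline(cmdline: str) -> dict[str, str]:
    # Split a kernel command line into {key: value}; bare flags map to "".
    # A repeated key keeps the last value, matching how most consumers read it.
    params: dict[str, str] = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")
        params[key] = value
    return params

with open("/proc/cmdline") as f:
    params = parse_cmdline(f.read())
# For the line above: params["root"] == "LABEL=ROOT",
# params["verity.usr"] == "PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132"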
Jan 30 13:02:46.970938 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 30 13:02:46.970945 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 30 13:02:46.970951 kernel: GICv3: 256 SPIs implemented
Jan 30 13:02:46.970958 kernel: GICv3: 0 Extended SPIs implemented
Jan 30 13:02:46.970964 kernel: Root IRQ handler: gic_handle_irq
Jan 30 13:02:46.970971 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 30 13:02:46.970978 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 30 13:02:46.970984 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 30 13:02:46.970991 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 30 13:02:46.970998 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 30 13:02:46.971006 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 30 13:02:46.971013 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 30 13:02:46.971019 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:02:46.971026 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:02:46.971033 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 30 13:02:46.971039 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 30 13:02:46.971046 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 30 13:02:46.971053 kernel: arm-pv: using stolen time PV
Jan 30 13:02:46.971060 kernel: Console: colour dummy device 80x25
Jan 30 13:02:46.971067 kernel: ACPI: Core revision 20230628
Jan 30 13:02:46.971074 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 30 13:02:46.971083 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:02:46.971089 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:02:46.971096 kernel: landlock: Up and running.
Jan 30 13:02:46.971103 kernel: SELinux: Initializing.
Jan 30 13:02:46.971110 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:02:46.971117 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:02:46.971123 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:02:46.971130 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:02:46.971137 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:02:46.971146 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:02:46.971152 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 30 13:02:46.971159 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 30 13:02:46.971166 kernel: Remapping and enabling EFI services.
Jan 30 13:02:46.971173 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:02:46.971180 kernel: Detected PIPT I-cache on CPU1
Jan 30 13:02:46.971187 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 30 13:02:46.971193 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 30 13:02:46.971200 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:02:46.971208 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 30 13:02:46.971215 kernel: Detected PIPT I-cache on CPU2
Jan 30 13:02:46.971227 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 30 13:02:46.971237 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 30 13:02:46.971245 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:02:46.971251 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 30 13:02:46.971258 kernel: Detected PIPT I-cache on CPU3
Jan 30 13:02:46.971266 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 30 13:02:46.971273 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 30 13:02:46.971281 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:02:46.971288 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 30 13:02:46.971296 kernel: smp: Brought up 1 node, 4 CPUs
Jan 30 13:02:46.971310 kernel: SMP: Total of 4 processors activated.
Jan 30 13:02:46.971317 kernel: CPU features: detected: 32-bit EL0 Support
Jan 30 13:02:46.971324 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 30 13:02:46.971332 kernel: CPU features: detected: Common not Private translations
Jan 30 13:02:46.971339 kernel: CPU features: detected: CRC32 instructions
Jan 30 13:02:46.971348 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 30 13:02:46.971355 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 30 13:02:46.971362 kernel: CPU features: detected: LSE atomic instructions
Jan 30 13:02:46.971370 kernel: CPU features: detected: Privileged Access Never
Jan 30 13:02:46.971377 kernel: CPU features: detected: RAS Extension Support
Jan 30 13:02:46.971384 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 30 13:02:46.971391 kernel: CPU: All CPU(s) started at EL1
Jan 30 13:02:46.971398 kernel: alternatives: applying system-wide alternatives
Jan 30 13:02:46.971405 kernel: devtmpfs: initialized
Jan 30 13:02:46.971412 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:02:46.971421 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 30 13:02:46.971428 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:02:46.971435 kernel: SMBIOS 3.0.0 present.
Jan 30 13:02:46.971443 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jan 30 13:02:46.971450 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:02:46.971457 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 30 13:02:46.971464 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 13:02:46.971471 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 13:02:46.971479 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:02:46.971487 kernel: audit: type=2000 audit(0.027:1): state=initialized audit_enabled=0 res=1
Jan 30 13:02:46.971494 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:02:46.971501 kernel: cpuidle: using governor menu
Jan 30 13:02:46.971508 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 30 13:02:46.971515 kernel: ASID allocator initialised with 32768 entries
Jan 30 13:02:46.971522 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:02:46.971529 kernel: Serial: AMBA PL011 UART driver
Jan 30 13:02:46.971536 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 30 13:02:46.971545 kernel: Modules: 0 pages in range for non-PLT usage
Jan 30 13:02:46.971552 kernel: Modules: 508880 pages in range for PLT usage
Jan 30 13:02:46.971559 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:02:46.971566 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:02:46.971574 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 30 13:02:46.971581 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 30 13:02:46.971588 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:02:46.971595 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:02:46.971602 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 30 13:02:46.971611 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 30 13:02:46.971618 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:02:46.971625 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:02:46.971632 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:02:46.971639 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:02:46.971646 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:02:46.971653 kernel: ACPI: Interpreter enabled
Jan 30 13:02:46.971660 kernel: ACPI: Using GIC for interrupt routing
Jan 30 13:02:46.971667 kernel: ACPI: MCFG table detected, 1 entries
Jan 30 13:02:46.971674 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 30 13:02:46.971683 kernel: printk: console [ttyAMA0] enabled
Jan 30 13:02:46.971690 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:02:46.971833 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:02:46.971990 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 30 13:02:46.972068 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 30 13:02:46.972136 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 30 13:02:46.972203 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 30 13:02:46.972216 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 30 13:02:46.972224 kernel: PCI host bridge to bus 0000:00
Jan 30 13:02:46.972325 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 30 13:02:46.972395 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 30 13:02:46.972458 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 30 13:02:46.972519 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:02:46.972604 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 30 13:02:46.972692 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 13:02:46.972764 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 30 13:02:46.972832 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 30 13:02:46.972920 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 13:02:46.972995 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 13:02:46.973070 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 30 13:02:46.973150 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 30 13:02:46.973215 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 30 13:02:46.973275 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 30 13:02:46.973342 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 30 13:02:46.973353 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 30 13:02:46.973360 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 30 13:02:46.973368 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 30 13:02:46.973375 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 30 13:02:46.973385 kernel: iommu: Default domain type: Translated
Jan 30 13:02:46.973392 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 30 13:02:46.973400 kernel: efivars: Registered efivars operations
Jan 30 13:02:46.973407 kernel: vgaarb: loaded
Jan 30 13:02:46.973415 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 30 13:02:46.973422 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:02:46.973429 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:02:46.973436 kernel: pnp: PnP ACPI init
Jan 30 13:02:46.973511 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 30 13:02:46.973523 kernel: pnp: PnP ACPI: found 1 devices
Jan 30 13:02:46.973531 kernel: NET: Registered PF_INET protocol family
Jan 30 13:02:46.973538 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:02:46.973546 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 13:02:46.973553 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:02:46.973560 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:02:46.973567 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 13:02:46.973575 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 13:02:46.973583 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:02:46.973591 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:02:46.973598 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:02:46.973605 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:02:46.973613 kernel: kvm [1]: HYP mode not available
Jan 30 13:02:46.973620 kernel: Initialise system trusted keyrings
Jan 30 13:02:46.973627 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 13:02:46.973635 kernel: Key type asymmetric registered
Jan 30 13:02:46.973642 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:02:46.973651 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 30 13:02:46.973659 kernel: io scheduler mq-deadline registered
Jan 30 13:02:46.973666 kernel: io scheduler kyber registered
Jan 30 13:02:46.973673 kernel: io scheduler bfq registered
Jan 30 13:02:46.973681 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 30 13:02:46.973688 kernel: ACPI: button: Power Button [PWRB]
Jan 30 13:02:46.973698 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 30 13:02:46.973776 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 30 13:02:46.973786 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:02:46.973795 kernel: thunder_xcv, ver 1.0
Jan 30 13:02:46.973802 kernel: thunder_bgx, ver 1.0
Jan 30 13:02:46.973810 kernel: nicpf, ver 1.0
Jan 30 13:02:46.973817 kernel: nicvf, ver 1.0
Jan 30 13:02:46.973897 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 30 13:02:46.973977 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T13:02:46 UTC (1738242166)
Jan 30 13:02:46.973988 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 13:02:46.973995 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 30 13:02:46.974002 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 30 13:02:46.974013 kernel: watchdog: Hard watchdog permanently disabled
Jan 30 13:02:46.974020 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:02:46.974027 kernel: Segment Routing with IPv6
Jan 30 13:02:46.974034 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:02:46.974041 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:02:46.974049 kernel: Key type dns_resolver registered
Jan 30 13:02:46.974056 kernel: registered taskstats version 1
Jan 30 13:02:46.974063 kernel: Loading compiled-in X.509 certificates
Jan 30 13:02:46.974071 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: c31663d2c680b3b306c17f44b5295280d3a2e28a'
Jan 30 13:02:46.974080 kernel: Key type .fscrypt registered
Jan 30 13:02:46.974090 kernel: Key type fscrypt-provisioning registered
Jan 30 13:02:46.974099 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:02:46.974110 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:02:46.974118 kernel: ima: No architecture policies found
Jan 30 13:02:46.974127 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 30 13:02:46.974134 kernel: clk: Disabling unused clocks
Jan 30 13:02:46.974142 kernel: Freeing unused kernel memory: 39936K
Jan 30 13:02:46.974153 kernel: Run /init as init process
Jan 30 13:02:46.974160 kernel: with arguments:
Jan 30 13:02:46.974167 kernel: /init
Jan 30 13:02:46.974174 kernel: with environment:
Jan 30 13:02:46.974181 kernel: HOME=/
Jan 30 13:02:46.974189 kernel: TERM=linux
Jan 30 13:02:46.974198 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:02:46.974212 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:02:46.974223 systemd[1]: Detected virtualization kvm.
Jan 30 13:02:46.974232 systemd[1]: Detected architecture arm64.
Jan 30 13:02:46.974239 systemd[1]: Running in initrd.
Jan 30 13:02:46.974247 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:02:46.974254 systemd[1]: Hostname set to <localhost>.
Jan 30 13:02:46.974262 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:02:46.974270 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:02:46.974278 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:02:46.974288 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:02:46.974302 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:02:46.974312 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:02:46.974320 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:02:46.974328 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:02:46.974338 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:02:46.974346 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:02:46.974355 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:02:46.974363 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:02:46.974371 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:02:46.974379 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:02:46.974387 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:02:46.974395 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:02:46.974403 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:02:46.974411 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:02:46.974419 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:02:46.974428 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:02:46.974436 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:02:46.974444 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:02:46.974452 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:02:46.974460 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:02:46.974467 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:02:46.974475 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:02:46.974483 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:02:46.974492 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:02:46.974500 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:02:46.974508 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:02:46.974516 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:02:46.974524 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:02:46.974532 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:02:46.974540 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:02:46.974575 systemd-journald[237]: Collecting audit messages is disabled.
Jan 30 13:02:46.974596 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:02:46.974607 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:02:46.974615 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:02:46.974624 systemd-journald[237]: Journal started
Jan 30 13:02:46.974648 systemd-journald[237]: Runtime Journal (/run/log/journal/c5546b93c68d40ddb02fd15249089e78) is 5.9M, max 47.3M, 41.4M free.
Jan 30 13:02:46.951630 systemd-modules-load[239]: Inserted module 'overlay'
Jan 30 13:02:46.978239 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:02:46.978261 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:02:46.979440 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:02:46.981636 kernel: Bridge firewalling registered
Jan 30 13:02:46.979608 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jan 30 13:02:46.980771 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:02:46.986181 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:02:46.989874 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:02:46.993256 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:02:46.999254 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:02:47.001341 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:02:47.003198 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:02:47.005370 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:02:47.016113 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:02:47.018490 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
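With systemd-journald up (above), every line in this transcript follows the same shape: a microsecond timestamp, a source with an optional PID, and a message. Purely as a sketch for working with a capture in this textual format (an assumption about this dump, not a journald API), one regex covers the kernel, systemd, and ignition lines alike:

import re

# "Jan 30 13:02:46.974648 systemd-journald[237]: Runtime Journal ..."
# "Jan 30 13:02:46.970384 kernel: Booting Linux on physical CPU ..."
LINE = re.compile(
    r"^(?P<ts>[A-Z][a-z]{2} +\d+ [\d:.]+) "
    r"(?P<source>[\w.-]+)(?:\[(?P<pid>\d+)\])?: "
    r"(?P<message>.*)$"
)

def parse_line(line: str):
    # Return {"ts", "source", "pid", "message"} or None for non-log lines.
    m = LINE.match(line)
    return m.groupdict() if m else None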
Jan 30 13:02:47.027221 dracut-cmdline[275]: dracut-dracut-053
Jan 30 13:02:47.029942 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 30 13:02:47.048454 systemd-resolved[277]: Positive Trust Anchors:
Jan 30 13:02:47.048474 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:02:47.048505 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:02:47.053444 systemd-resolved[277]: Defaulting to hostname 'linux'.
Jan 30 13:02:47.054663 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:02:47.057675 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:02:47.100976 kernel: SCSI subsystem initialized
Jan 30 13:02:47.105939 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:02:47.113948 kernel: iscsi: registered transport (tcp)
Jan 30 13:02:47.128218 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:02:47.128305 kernel: QLogic iSCSI HBA Driver
Jan 30 13:02:47.173381 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:02:47.185112 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:02:47.202545 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:02:47.202621 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:02:47.202633 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:02:47.249935 kernel: raid6: neonx8 gen() 14677 MB/s
Jan 30 13:02:47.266927 kernel: raid6: neonx4 gen() 15732 MB/s
Jan 30 13:02:47.283926 kernel: raid6: neonx2 gen() 13189 MB/s
Jan 30 13:02:47.300983 kernel: raid6: neonx1 gen() 10437 MB/s
Jan 30 13:02:47.317950 kernel: raid6: int64x8 gen() 6785 MB/s
Jan 30 13:02:47.334925 kernel: raid6: int64x4 gen() 7344 MB/s
Jan 30 13:02:47.351923 kernel: raid6: int64x2 gen() 6092 MB/s
Jan 30 13:02:47.368921 kernel: raid6: int64x1 gen() 5047 MB/s
Jan 30 13:02:47.368973 kernel: raid6: using algorithm neonx4 gen() 15732 MB/s
Jan 30 13:02:47.385924 kernel: raid6: .... xor() 12364 MB/s, rmw enabled
Jan 30 13:02:47.385980 kernel: raid6: using neon recovery algorithm
Jan 30 13:02:47.393028 kernel: xor: measuring software checksum speed
Jan 30 13:02:47.393067 kernel: 8regs : 21630 MB/sec
Jan 30 13:02:47.394096 kernel: 32regs : 21693 MB/sec
Jan 30 13:02:47.394117 kernel: arm64_neon : 27823 MB/sec
Jan 30 13:02:47.394127 kernel: xor: using function: arm64_neon (27823 MB/sec)
Jan 30 13:02:47.444132 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:02:47.456669 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:02:47.467136 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:02:47.480228 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Jan 30 13:02:47.483447 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:02:47.489143 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:02:47.501580 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Jan 30 13:02:47.530454 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:02:47.536090 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:02:47.580950 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:02:47.595037 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:02:47.604486 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:02:47.607078 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:02:47.608367 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:02:47.611385 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:02:47.623134 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:02:47.631220 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 30 13:02:47.637842 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 30 13:02:47.637965 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:02:47.637977 kernel: GPT:9289727 != 19775487
Jan 30 13:02:47.637986 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:02:47.637995 kernel: GPT:9289727 != 19775487
Jan 30 13:02:47.638011 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:02:47.638021 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:02:47.633328 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:02:47.633447 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:02:47.638004 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:02:47.638767 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:02:47.638910 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:02:47.640687 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:02:47.652179 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:02:47.655124 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:02:47.657822 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (511)
Jan 30 13:02:47.662918 kernel: BTRFS: device fsid 1e2e5fa7-c757-4d5d-af66-73afe98fbaae devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (518)
Jan 30 13:02:47.667216 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 13:02:47.668415 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:02:47.677169 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 13:02:47.686746 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
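The GPT warnings above (9289727 != 19775487) are the usual sign of a disk image written for a smaller disk and then attached to a larger device: the backup GPT header is no longer in the last sector, which disk-uuid.service repairs just below. As a hedged sketch of the same check the kernel performs, assuming the 512-byte logical sectors reported in the log:

import os
import struct

SECTOR = 512  # matches "512-byte logical blocks" above

def gpt_backup_at_end(path: str) -> bool:
    # Read the primary GPT header at LBA 1 and compare its AlternateLBA
    # field (offset 32, little-endian u64) against the device's actual
    # last sector.
    with open(path, "rb") as f:
        f.seek(SECTOR)
        header = f.read(92)
        if header[:8] != b"EFI PART":
            raise ValueError("no GPT header at LBA 1")
        alternate_lba = struct.unpack_from("<Q", header, 32)[0]
        f.seek(0, os.SEEK_END)
        last_lba = f.tell() // SECTOR - 1
    return alternate_lba == last_lba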
Jan 30 13:02:47.690396 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 13:02:47.691337 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 13:02:47.708077 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:02:47.710130 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:02:47.717339 disk-uuid[552]: Primary Header is updated.
Jan 30 13:02:47.717339 disk-uuid[552]: Secondary Entries is updated.
Jan 30 13:02:47.717339 disk-uuid[552]: Secondary Header is updated.
Jan 30 13:02:47.720508 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:02:47.733065 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:02:48.731923 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:02:48.732816 disk-uuid[555]: The operation has completed successfully.
Jan 30 13:02:48.758063 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:02:48.758186 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:02:48.779169 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:02:48.783847 sh[572]: Success
Jan 30 13:02:48.805939 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 30 13:02:48.842525 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:02:48.856389 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:02:48.860968 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:02:48.878150 kernel: BTRFS info (device dm-0): first mount of filesystem 1e2e5fa7-c757-4d5d-af66-73afe98fbaae
Jan 30 13:02:48.878206 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:02:48.878217 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:02:48.879027 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:02:48.880190 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:02:48.884479 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:02:48.886100 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:02:48.898104 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:02:48.899827 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:02:48.910209 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:02:48.910276 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:02:48.910289 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:02:48.913949 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:02:48.922700 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:02:48.924932 kernel: BTRFS info (device vda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:02:48.932124 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:02:48.941103 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:02:49.011686 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:02:49.031146 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:02:49.057105 systemd-networkd[759]: lo: Link UP
Jan 30 13:02:49.057119 systemd-networkd[759]: lo: Gained carrier
Jan 30 13:02:49.058154 systemd-networkd[759]: Enumeration completed
Jan 30 13:02:49.058706 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:02:49.058709 systemd-networkd[759]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:02:49.062183 ignition[661]: Ignition 2.20.0
Jan 30 13:02:49.059921 systemd-networkd[759]: eth0: Link UP
Jan 30 13:02:49.062190 ignition[661]: Stage: fetch-offline
Jan 30 13:02:49.059924 systemd-networkd[759]: eth0: Gained carrier
Jan 30 13:02:49.062238 ignition[661]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:02:49.059933 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:02:49.062247 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:02:49.061296 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:02:49.062410 ignition[661]: parsed url from cmdline: ""
Jan 30 13:02:49.062570 systemd[1]: Reached target network.target - Network.
Jan 30 13:02:49.062413 ignition[661]: no config URL provided
Jan 30 13:02:49.062418 ignition[661]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:02:49.062425 ignition[661]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:02:49.062454 ignition[661]: op(1): [started] loading QEMU firmware config module
Jan 30 13:02:49.062459 ignition[661]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 30 13:02:49.085996 systemd-networkd[759]: eth0: DHCPv4 address 10.0.0.100/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 13:02:49.086752 ignition[661]: op(1): [finished] loading QEMU firmware config module
Jan 30 13:02:49.127516 ignition[661]: parsing config with SHA512: 859259a5ff83eb823507c71889ab7e6c5e9c12b48fa440dae54b6810f61573bac5a5a6922ea4ca73fd6bc30cd8b89879953ca1d133fe347bf4edb6d1e0c2edb1
Jan 30 13:02:49.134814 unknown[661]: fetched base config from "system"
Jan 30 13:02:49.134826 unknown[661]: fetched user config from "qemu"
Jan 30 13:02:49.135296 ignition[661]: fetch-offline: fetch-offline passed
Jan 30 13:02:49.135387 ignition[661]: Ignition finished successfully
Jan 30 13:02:49.138961 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:02:49.140230 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 30 13:02:49.154199 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:02:49.167272 ignition[770]: Ignition 2.20.0
Jan 30 13:02:49.167284 ignition[770]: Stage: kargs
Jan 30 13:02:49.167504 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:02:49.167515 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:02:49.168568 ignition[770]: kargs: kargs passed
Jan 30 13:02:49.168618 ignition[770]: Ignition finished successfully
Jan 30 13:02:49.171386 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
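The fetch-offline stage above found nothing at /usr/lib/ignition/user.ign and instead pulled the user config over QEMU's fw_cfg interface (hence the qemu_fw_cfg modprobe), verifying it against a SHA512. The config itself never appears in the log; the following is a hypothetical, minimal Ignition-style config in the same spirit, where the spec version, key material, and the fw_cfg entry name are all assumptions rather than anything the log confirms:

import json

config = {
    "ignition": {"version": "3.4.0"},  # assumed spec version
    "passwd": {
        "users": [
            # placeholder key; the log only shows keys being written for "core"
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA...placeholder"]}
        ]
    },
}
print(json.dumps(config, indent=2))
# Such a file is typically handed to the VM via QEMU fw_cfg, e.g. under the
# opt/org.flatcar-linux/config entry name Flatcar reads (assumption).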
Jan 30 13:02:49.185197 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:02:49.197258 ignition[779]: Ignition 2.20.0
Jan 30 13:02:49.197270 ignition[779]: Stage: disks
Jan 30 13:02:49.197509 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:02:49.197520 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:02:49.198566 ignition[779]: disks: disks passed
Jan 30 13:02:49.198617 ignition[779]: Ignition finished successfully
Jan 30 13:02:49.201652 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:02:49.204144 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:02:49.205028 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:02:49.206649 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:02:49.208371 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:02:49.209984 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:02:49.218350 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:02:49.231751 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 13:02:49.235793 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:02:49.248063 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:02:49.303432 kernel: EXT4-fs (vda9): mounted filesystem 88903c49-366d-43ff-90b1-141790b6e85c r/w with ordered data mode. Quota mode: none.
Jan 30 13:02:49.304401 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:02:49.305572 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:02:49.318043 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:02:49.319889 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:02:49.320923 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 13:02:49.320969 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:02:49.320993 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:02:49.328244 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (798)
Jan 30 13:02:49.328268 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:02:49.330443 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:02:49.330490 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:02:49.330819 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:02:49.333653 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:02:49.335762 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:02:49.338414 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:02:49.392314 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:02:49.400397 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:02:49.405228 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:02:49.409251 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:02:49.500742 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:02:49.515026 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:02:49.517759 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:02:49.521929 kernel: BTRFS info (device vda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:02:49.543678 ignition[911]: INFO : Ignition 2.20.0
Jan 30 13:02:49.543678 ignition[911]: INFO : Stage: mount
Jan 30 13:02:49.543678 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:02:49.543678 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:02:49.548333 ignition[911]: INFO : mount: mount passed
Jan 30 13:02:49.548333 ignition[911]: INFO : Ignition finished successfully
Jan 30 13:02:49.545970 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:02:49.555036 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:02:49.555996 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:02:49.877135 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:02:49.893124 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:02:49.907957 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (925)
Jan 30 13:02:49.910427 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:02:49.910540 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:02:49.910552 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:02:49.917950 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:02:49.917605 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:02:49.947343 ignition[942]: INFO : Ignition 2.20.0
Jan 30 13:02:49.947343 ignition[942]: INFO : Stage: files
Jan 30 13:02:49.949221 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:02:49.949221 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:02:49.949221 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:02:49.953777 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 13:02:49.953777 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:02:49.953777 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:02:49.953777 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 13:02:49.953777 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:02:49.952689 unknown[942]: wrote ssh authorized keys file for user: core
Jan 30 13:02:49.963936 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 13:02:49.963936 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 30 13:02:50.036237 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 13:02:50.440551 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 13:02:50.440551 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 13:02:50.444743 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 30 13:02:50.599355 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 30 13:02:50.661143 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 13:02:50.661143 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 13:02:50.664805 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:02:50.664805 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:02:50.664805 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:02:50.664805 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:02:50.664805 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:02:50.664805 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:02:50.664805 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
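Each remote file above is fetched with "GET ...: attempt #1", i.e. Ignition retries downloads until they succeed. Its actual retry policy is internal to Ignition; the following is only an illustrative sketch of fetch-with-retry and exponential backoff in the same spirit:

import time
import urllib.request

def fetch_with_retries(url: str, attempts: int = 5, backoff: float = 1.0) -> bytes:
    # Try the URL several times, doubling the delay after each failure,
    # loosely mirroring the "attempt #N" lines in the log above.
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=30) as resp:
                return resp.read()
        except OSError:
            if attempt == attempts:
                raise
            time.sleep(backoff)
            backoff *= 2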
Jan 30 13:02:50.664805 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:02:50.664805 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:02:50.664805 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 13:02:50.664805 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 13:02:50.664805 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 13:02:50.664805 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 30 13:02:50.899112 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 30 13:02:51.031504 systemd-networkd[759]: eth0: Gained IPv6LL
Jan 30 13:02:51.121572 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 13:02:51.121572 ignition[942]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 30 13:02:51.124488 ignition[942]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:02:51.124488 ignition[942]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:02:51.124488 ignition[942]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 30 13:02:51.124488 ignition[942]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 30 13:02:51.124488 ignition[942]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 13:02:51.124488 ignition[942]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 13:02:51.124488 ignition[942]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 30 13:02:51.124488 ignition[942]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jan 30 13:02:51.145414 ignition[942]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 13:02:51.149952 ignition[942]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 13:02:51.151138 ignition[942]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 30 13:02:51.151138 ignition[942]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 13:02:51.151138 ignition[942]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 13:02:51.151138 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:02:51.151138 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:02:51.151138 ignition[942]: INFO : files: files passed
Jan 30 13:02:51.151138 ignition[942]: INFO : Ignition finished successfully
Jan 30 13:02:51.152735 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 13:02:51.163343 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 13:02:51.165253 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 13:02:51.167940 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 13:02:51.168054 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 13:02:51.174438 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 30 13:02:51.177338 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:02:51.177338 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:02:51.180520 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:02:51.180205 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:02:51.181922 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 13:02:51.194200 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 13:02:51.214203 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 13:02:51.214328 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 13:02:51.215999 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 13:02:51.217442 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 13:02:51.218762 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 13:02:51.219571 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 13:02:51.236431 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:02:51.247128 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 13:02:51.255935 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:02:51.257028 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:02:51.258681 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 13:02:51.260131 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 13:02:51.260263 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:02:51.262109 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 13:02:51.263791 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 13:02:51.265005 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 13:02:51.266477 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:02:51.268001 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 13:02:51.269557 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 13:02:51.270953 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:02:51.272439 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 13:02:51.273865 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 13:02:51.275164 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 13:02:51.276295 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 13:02:51.276437 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:02:51.278159 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:02:51.279573 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:02:51.280984 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 13:02:51.282030 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:02:51.283225 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 13:02:51.283353 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:02:51.285438 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 13:02:51.285557 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:02:51.287056 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 13:02:51.288316 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 13:02:51.289507 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:02:51.291094 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 13:02:51.292401 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 13:02:51.294035 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 13:02:51.294138 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:02:51.295286 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 13:02:51.295377 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:02:51.296581 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 13:02:51.296693 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:02:51.297930 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 13:02:51.298022 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 13:02:51.309102 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 13:02:51.310546 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 13:02:51.311221 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 13:02:51.311352 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:02:51.312810 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 13:02:51.312972 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:02:51.319022 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 13:02:51.319119 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 13:02:51.322602 ignition[997]: INFO : Ignition 2.20.0
Jan 30 13:02:51.322602 ignition[997]: INFO : Stage: umount
Jan 30 13:02:51.323858 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:02:51.323858 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:02:51.326451 ignition[997]: INFO : umount: umount passed
Jan 30 13:02:51.326451 ignition[997]: INFO : Ignition finished successfully
Jan 30 13:02:51.325884 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 13:02:51.326042 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 13:02:51.329583 systemd[1]: Stopped target network.target - Network.
Jan 30 13:02:51.330432 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 13:02:51.330511 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 13:02:51.331888 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 13:02:51.331964 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 13:02:51.333723 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 13:02:51.333768 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 13:02:51.335199 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 13:02:51.335244 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 13:02:51.339130 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 13:02:51.340457 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 13:02:51.342723 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 13:02:51.349956 systemd-networkd[759]: eth0: DHCPv6 lease lost
Jan 30 13:02:51.353077 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 13:02:51.353206 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 13:02:51.355583 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 13:02:51.355725 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 13:02:51.357969 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 13:02:51.358036 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:02:51.367051 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 13:02:51.367815 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 13:02:51.367872 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:02:51.369362 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 13:02:51.369403 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:02:51.370746 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 13:02:51.370792 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:02:51.372443 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 13:02:51.372488 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:02:51.374191 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:02:51.390004 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 13:02:51.390129 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 13:02:51.402885 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 13:02:51.403034 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:02:51.405079 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 13:02:51.405123 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:02:51.406835 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 13:02:51.406879 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:02:51.408452 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 13:02:51.408502 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:02:51.410706 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 13:02:51.410758 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:02:51.413065 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:02:51.413110 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:02:51.426092 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 13:02:51.427174 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 13:02:51.427247 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:02:51.429163 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 30 13:02:51.429207 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:02:51.432374 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 13:02:51.432466 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:02:51.434312 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:02:51.434359 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:02:51.436667 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 13:02:51.436756 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 13:02:51.438363 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 13:02:51.438441 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 13:02:51.440706 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 13:02:51.442411 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 13:02:51.442481 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 13:02:51.454122 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 13:02:51.463131 systemd[1]: Switching root.
Jan 30 13:02:51.504998 systemd-journald[237]: Journal stopped
Jan 30 13:02:52.290313 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Jan 30 13:02:52.292279 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 13:02:52.292296 kernel: SELinux: policy capability open_perms=1
Jan 30 13:02:52.292312 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 13:02:52.292322 kernel: SELinux: policy capability always_check_network=0
Jan 30 13:02:52.292331 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 13:02:52.292341 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 13:02:52.292349 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 13:02:52.292359 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 13:02:52.292370 kernel: audit: type=1403 audit(1738242171.670:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 13:02:52.292385 systemd[1]: Successfully loaded SELinux policy in 34.035ms.
Jan 30 13:02:52.292405 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.357ms.
Jan 30 13:02:52.292416 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:02:52.292427 systemd[1]: Detected virtualization kvm.
Jan 30 13:02:52.292437 systemd[1]: Detected architecture arm64.
Jan 30 13:02:52.292446 systemd[1]: Detected first boot.
Jan 30 13:02:52.292456 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:02:52.292466 zram_generator::config[1043]: No configuration found.
Jan 30 13:02:52.292480 systemd[1]: Populated /etc with preset unit settings.
Jan 30 13:02:52.292490 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 13:02:52.292500 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 13:02:52.292510 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 13:02:52.292522 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 13:02:52.292533 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 13:02:52.292543 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 13:02:52.292553 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 13:02:52.292565 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 13:02:52.292576 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 13:02:52.292586 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 13:02:52.292596 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 13:02:52.292606 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:02:52.292616 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:02:52.292627 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 13:02:52.292637 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 13:02:52.292648 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 13:02:52.292659 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:02:52.292670 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 30 13:02:52.292680 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:02:52.292690 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 13:02:52.292701 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 13:02:52.292711 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:02:52.292721 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 13:02:52.292734 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:02:52.292745 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:02:52.292757 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:02:52.292767 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:02:52.292777 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 13:02:52.292788 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 13:02:52.292798 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:02:52.292809 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:02:52.292819 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:02:52.292829 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 13:02:52.292844 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 13:02:52.292854 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 13:02:52.292863 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 13:02:52.292873 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 13:02:52.292884 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 13:02:52.292894 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 13:02:52.292916 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 13:02:52.292928 systemd[1]: Reached target machines.target - Containers.
Jan 30 13:02:52.292941 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 13:02:52.292952 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:02:52.292962 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:02:52.292973 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 13:02:52.292983 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:02:52.292993 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:02:52.293003 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:02:52.293014 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 13:02:52.293025 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:02:52.293037 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 13:02:52.293049 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 13:02:52.293060 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 13:02:52.293070 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 13:02:52.293080 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 13:02:52.293090 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:02:52.293100 kernel: fuse: init (API version 7.39)
Jan 30 13:02:52.293110 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:02:52.293120 kernel: ACPI: bus type drm_connector registered
Jan 30 13:02:52.293131 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 13:02:52.293141 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 13:02:52.293152 kernel: loop: module loaded
Jan 30 13:02:52.293161 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:02:52.293171 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 13:02:52.293181 systemd[1]: Stopped verity-setup.service.
Jan 30 13:02:52.293192 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 13:02:52.293202 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 13:02:52.293212 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 13:02:52.293252 systemd-journald[1107]: Collecting audit messages is disabled.
Jan 30 13:02:52.293275 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 13:02:52.293288 systemd-journald[1107]: Journal started
Jan 30 13:02:52.293324 systemd-journald[1107]: Runtime Journal (/run/log/journal/c5546b93c68d40ddb02fd15249089e78) is 5.9M, max 47.3M, 41.4M free.
Jan 30 13:02:52.070380 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 13:02:52.095459 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 30 13:02:52.095857 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 13:02:52.300935 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 13:02:52.302605 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:02:52.304654 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 13:02:52.306438 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:02:52.307719 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 13:02:52.307862 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 13:02:52.309244 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 13:02:52.310414 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:02:52.310544 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:02:52.311855 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:02:52.312040 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:02:52.313233 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:02:52.313392 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:02:52.314727 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 13:02:52.314858 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 13:02:52.316032 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:02:52.316170 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:02:52.317406 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:02:52.318598 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 13:02:52.319804 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 13:02:52.333387 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 13:02:52.341042 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 13:02:52.343073 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 13:02:52.344036 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 13:02:52.344067 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:02:52.346031 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 13:02:52.348102 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 13:02:52.350195 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 13:02:52.351107 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:02:52.355119 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 13:02:52.360128 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 13:02:52.363621 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:02:52.364992 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 13:02:52.366263 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:02:52.369289 systemd-journald[1107]: Time spent on flushing to /var/log/journal/c5546b93c68d40ddb02fd15249089e78 is 15.498ms for 860 entries.
Jan 30 13:02:52.369289 systemd-journald[1107]: System Journal (/var/log/journal/c5546b93c68d40ddb02fd15249089e78) is 8.0M, max 195.6M, 187.6M free.
Jan 30 13:02:52.409395 systemd-journald[1107]: Received client request to flush runtime journal.
Jan 30 13:02:52.409458 kernel: loop0: detected capacity change from 0 to 194096
Jan 30 13:02:52.370108 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:02:52.373171 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 13:02:52.378693 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:02:52.381218 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:02:52.382442 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 13:02:52.383431 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 13:02:52.384738 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 13:02:52.396786 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 13:02:52.401324 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 13:02:52.406159 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 13:02:52.410479 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 13:02:52.416936 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 13:02:52.423932 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 13:02:52.428518 udevadm[1164]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 30 13:02:52.439970 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 13:02:52.440673 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:02:52.441938 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 13:02:52.444067 systemd-tmpfiles[1156]: ACLs are not supported, ignoring.
Jan 30 13:02:52.444092 systemd-tmpfiles[1156]: ACLs are not supported, ignoring.
Jan 30 13:02:52.452478 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:02:52.459999 kernel: loop1: detected capacity change from 0 to 113552
Jan 30 13:02:52.460204 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 13:02:52.490979 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 13:02:52.500118 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:02:52.506013 kernel: loop2: detected capacity change from 0 to 116784
Jan 30 13:02:52.513314 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Jan 30 13:02:52.513335 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Jan 30 13:02:52.518583 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:02:52.556935 kernel: loop3: detected capacity change from 0 to 194096
Jan 30 13:02:52.564942 kernel: loop4: detected capacity change from 0 to 113552
Jan 30 13:02:52.570926 kernel: loop5: detected capacity change from 0 to 116784
Jan 30 13:02:52.576245 (sd-merge)[1186]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 30 13:02:52.576653 (sd-merge)[1186]: Merged extensions into '/usr'.
Jan 30 13:02:52.581025 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 13:02:52.581043 systemd[1]: Reloading...
Jan 30 13:02:52.644918 zram_generator::config[1215]: No configuration found.
Jan 30 13:02:52.700647 ldconfig[1150]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 13:02:52.749688 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:02:52.787309 systemd[1]: Reloading finished in 205 ms.
Jan 30 13:02:52.815505 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 13:02:52.816759 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 13:02:52.838284 systemd[1]: Starting ensure-sysext.service...
Jan 30 13:02:52.840271 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:02:52.852469 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)...
Jan 30 13:02:52.852486 systemd[1]: Reloading...
Jan 30 13:02:52.867320 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 13:02:52.867525 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 13:02:52.868163 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 13:02:52.868374 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Jan 30 13:02:52.868417 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Jan 30 13:02:52.871781 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:02:52.871931 systemd-tmpfiles[1247]: Skipping /boot
Jan 30 13:02:52.880433 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:02:52.880555 systemd-tmpfiles[1247]: Skipping /boot
Jan 30 13:02:52.902921 zram_generator::config[1277]: No configuration found.
Jan 30 13:02:52.986294 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:02:53.022797 systemd[1]: Reloading finished in 170 ms.
Jan 30 13:02:53.037491 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 13:02:53.047318 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:02:53.056098 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 30 13:02:53.058385 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 13:02:53.060670 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 13:02:53.075489 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:02:53.078269 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:02:53.081357 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 13:02:53.084699 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:02:53.089197 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:02:53.092860 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:02:53.096202 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:02:53.097254 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:02:53.099527 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 13:02:53.101103 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 13:02:53.108825 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 13:02:53.110210 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:02:53.110372 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:02:53.121061 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:02:53.122982 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:02:53.123911 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:02:53.124519 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:02:53.124677 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:02:53.128876 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 13:02:53.132512 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 13:02:53.134317 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:02:53.134445 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:02:53.135822 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:02:53.135991 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:02:53.137314 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 13:02:53.146885 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:02:53.146973 systemd-udevd[1317]: Using default interface naming scheme 'v255'.
Jan 30 13:02:53.162676 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:02:53.167220 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:02:53.171774 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:02:53.175342 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:02:53.176623 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:02:53.176774 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 13:02:53.178566 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:02:53.180219 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:02:53.180433 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:02:53.181849 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:02:53.182088 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:02:53.186583 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:02:53.186794 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:02:53.192289 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 13:02:53.194604 systemd[1]: Finished ensure-sysext.service.
Jan 30 13:02:53.197271 augenrules[1378]: No rules
Jan 30 13:02:53.210399 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 13:02:53.210593 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 30 13:02:53.220627 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:02:53.220811 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:02:53.231176 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:02:53.232618 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:02:53.232710 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:02:53.233923 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1356)
Jan 30 13:02:53.236328 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 30 13:02:53.242513 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 30 13:02:53.292582 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 13:02:53.301210 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 13:02:53.303147 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 30 13:02:53.304399 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 13:02:53.318844 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 13:02:53.319554 systemd-networkd[1392]: lo: Link UP
Jan 30 13:02:53.319563 systemd-networkd[1392]: lo: Gained carrier
Jan 30 13:02:53.323174 systemd-networkd[1392]: Enumeration completed
Jan 30 13:02:53.323332 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:02:53.323835 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:02:53.323847 systemd-networkd[1392]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:02:53.325016 systemd-networkd[1392]: eth0: Link UP
Jan 30 13:02:53.325027 systemd-networkd[1392]: eth0: Gained carrier
Jan 30 13:02:53.325042 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:02:53.330175 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 13:02:53.331025 systemd-resolved[1313]: Positive Trust Anchors:
Jan 30 13:02:53.331052 systemd-resolved[1313]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:02:53.331086 systemd-resolved[1313]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:02:53.340021 systemd-resolved[1313]: Defaulting to hostname 'linux'.
Jan 30 13:02:53.341970 systemd-networkd[1392]: eth0: DHCPv4 address 10.0.0.100/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 13:02:53.342738 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection.
Jan 30 13:02:53.802855 systemd-timesyncd[1394]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 30 13:02:53.802918 systemd-timesyncd[1394]: Initial clock synchronization to Thu 2025-01-30 13:02:53.802735 UTC.
Jan 30 13:02:53.804082 systemd-resolved[1313]: Clock change detected. Flushing caches.
Jan 30 13:02:53.804126 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:02:53.805415 systemd[1]: Reached target network.target - Network.
Jan 30 13:02:53.806105 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:02:53.828693 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:02:53.846899 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 13:02:53.854652 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 13:02:53.871812 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:02:53.878342 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:02:53.908248 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 13:02:53.909747 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:02:53.910714 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:02:53.911765 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 13:02:53.912847 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 13:02:53.914141 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 13:02:53.915240 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 13:02:53.916413 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 13:02:53.917669 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 13:02:53.917711 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:02:53.918484 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:02:53.921900 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 30 13:02:53.924690 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 30 13:02:53.938342 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 30 13:02:53.941927 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 13:02:53.943852 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 30 13:02:53.944864 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:02:53.945641 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:02:53.946484 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 30 13:02:53.946521 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 30 13:02:53.947692 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 30 13:02:53.949744 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 30 13:02:53.952518 lvm[1415]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:02:53.954152 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 30 13:02:53.959276 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 30 13:02:53.960277 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 30 13:02:53.963597 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 30 13:02:53.968705 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 30 13:02:53.980624 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 30 13:02:53.985807 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 30 13:02:53.994134 extend-filesystems[1419]: Found loop3
Jan 30 13:02:53.994134 extend-filesystems[1419]: Found loop4
Jan 30 13:02:53.994134 extend-filesystems[1419]: Found loop5
Jan 30 13:02:53.994134 extend-filesystems[1419]: Found vda
Jan 30 13:02:53.994134 extend-filesystems[1419]: Found vda1
Jan 30 13:02:53.994134 extend-filesystems[1419]: Found vda2
Jan 30 13:02:53.994134 extend-filesystems[1419]: Found vda3
Jan 30 13:02:53.994134 extend-filesystems[1419]: Found usr
Jan 30 13:02:53.994134 extend-filesystems[1419]: Found vda4
Jan 30 13:02:53.994134 extend-filesystems[1419]: Found vda6
Jan 30 13:02:53.994134 extend-filesystems[1419]: Found vda7
Jan 30 13:02:53.994134 extend-filesystems[1419]: Found vda9
Jan 30 13:02:53.994134 extend-filesystems[1419]: Checking size of /dev/vda9
Jan 30 13:02:53.994758 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 30 13:02:54.007650 jq[1418]: false
Jan 30 13:02:53.998237 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 30 13:02:53.999005 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 30 13:02:54.005117 systemd[1]: Starting update-engine.service - Update Engine...
Jan 30 13:02:54.008644 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 30 13:02:54.012429 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 13:02:54.016732 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 30 13:02:54.016912 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 30 13:02:54.017812 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 30 13:02:54.017961 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 30 13:02:54.023490 jq[1435]: true
Jan 30 13:02:54.039446 dbus-daemon[1417]: [system] SELinux support is enabled
Jan 30 13:02:54.042950 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 30 13:02:54.049948 systemd[1]: motdgen.service: Deactivated successfully.
Jan 30 13:02:54.050183 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 30 13:02:54.061736 jq[1444]: true Jan 30 13:02:54.065699 extend-filesystems[1419]: Resized partition /dev/vda9 Jan 30 13:02:54.077336 extend-filesystems[1450]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:02:54.078405 (ntainerd)[1446]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:02:54.081207 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:02:54.081275 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:02:54.084448 tar[1439]: linux-arm64/helm Jan 30 13:02:54.087077 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:02:54.087114 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:02:54.097523 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1361) Jan 30 13:02:54.103393 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 30 13:02:54.115485 update_engine[1431]: I20250130 13:02:54.115269 1431 main.cc:92] Flatcar Update Engine starting Jan 30 13:02:54.116486 systemd-logind[1426]: Watching system buttons on /dev/input/event0 (Power Button) Jan 30 13:02:54.118071 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:02:54.119140 update_engine[1431]: I20250130 13:02:54.118523 1431 update_check_scheduler.cc:74] Next update check in 4m28s Jan 30 13:02:54.120578 systemd-logind[1426]: New seat seat0. Jan 30 13:02:54.138748 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:02:54.139762 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:02:54.180405 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 13:02:54.208055 extend-filesystems[1450]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:02:54.208055 extend-filesystems[1450]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 13:02:54.208055 extend-filesystems[1450]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 13:02:54.211298 extend-filesystems[1419]: Resized filesystem in /dev/vda9 Jan 30 13:02:54.210172 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:02:54.210360 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:02:54.219400 bash[1470]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:02:54.220470 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:02:54.222481 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 13:02:54.231590 locksmithd[1471]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:02:54.331316 containerd[1446]: time="2025-01-30T13:02:54.331226080Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 30 13:02:54.358916 containerd[1446]: time="2025-01-30T13:02:54.358850880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 13:02:54.360392 containerd[1446]: time="2025-01-30T13:02:54.360318640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:02:54.360392 containerd[1446]: time="2025-01-30T13:02:54.360361680Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:02:54.360392 containerd[1446]: time="2025-01-30T13:02:54.360400040Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:02:54.360658 containerd[1446]: time="2025-01-30T13:02:54.360623880Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:02:54.360658 containerd[1446]: time="2025-01-30T13:02:54.360654080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:02:54.360729 containerd[1446]: time="2025-01-30T13:02:54.360713960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:02:54.360758 containerd[1446]: time="2025-01-30T13:02:54.360731800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:02:54.360944 containerd[1446]: time="2025-01-30T13:02:54.360921280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:02:54.360966 containerd[1446]: time="2025-01-30T13:02:54.360945120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:02:54.360966 containerd[1446]: time="2025-01-30T13:02:54.360959520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:02:54.360999 containerd[1446]: time="2025-01-30T13:02:54.360969520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:02:54.361084 containerd[1446]: time="2025-01-30T13:02:54.361068680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:02:54.361308 containerd[1446]: time="2025-01-30T13:02:54.361283760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:02:54.361423 containerd[1446]: time="2025-01-30T13:02:54.361406560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:02:54.361453 containerd[1446]: time="2025-01-30T13:02:54.361425400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 30 13:02:54.361524 containerd[1446]: time="2025-01-30T13:02:54.361508720Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:02:54.361567 containerd[1446]: time="2025-01-30T13:02:54.361556320Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:02:54.372910 containerd[1446]: time="2025-01-30T13:02:54.372868240Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:02:54.373063 containerd[1446]: time="2025-01-30T13:02:54.372975640Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:02:54.373063 containerd[1446]: time="2025-01-30T13:02:54.373017760Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:02:54.373063 containerd[1446]: time="2025-01-30T13:02:54.373035480Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:02:54.373063 containerd[1446]: time="2025-01-30T13:02:54.373053680Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:02:54.373274 containerd[1446]: time="2025-01-30T13:02:54.373253440Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:02:54.373634 containerd[1446]: time="2025-01-30T13:02:54.373612720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:02:54.373798 containerd[1446]: time="2025-01-30T13:02:54.373777400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:02:54.373828 containerd[1446]: time="2025-01-30T13:02:54.373802640Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:02:54.373828 containerd[1446]: time="2025-01-30T13:02:54.373819080Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:02:54.373875 containerd[1446]: time="2025-01-30T13:02:54.373844680Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:02:54.373875 containerd[1446]: time="2025-01-30T13:02:54.373859240Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:02:54.373875 containerd[1446]: time="2025-01-30T13:02:54.373870720Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:02:54.373932 containerd[1446]: time="2025-01-30T13:02:54.373884080Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:02:54.373932 containerd[1446]: time="2025-01-30T13:02:54.373912080Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:02:54.373932 containerd[1446]: time="2025-01-30T13:02:54.373926240Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:02:54.373984 containerd[1446]: time="2025-01-30T13:02:54.373938760Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 30 13:02:54.373984 containerd[1446]: time="2025-01-30T13:02:54.373951720Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:02:54.373984 containerd[1446]: time="2025-01-30T13:02:54.373972080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:02:54.374032 containerd[1446]: time="2025-01-30T13:02:54.373995240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:02:54.374032 containerd[1446]: time="2025-01-30T13:02:54.374008600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:02:54.374032 containerd[1446]: time="2025-01-30T13:02:54.374020720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:02:54.374086 containerd[1446]: time="2025-01-30T13:02:54.374033320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:02:54.374086 containerd[1446]: time="2025-01-30T13:02:54.374046320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:02:54.374086 containerd[1446]: time="2025-01-30T13:02:54.374068520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:02:54.374086 containerd[1446]: time="2025-01-30T13:02:54.374082120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:02:54.374152 containerd[1446]: time="2025-01-30T13:02:54.374094960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:02:54.374152 containerd[1446]: time="2025-01-30T13:02:54.374110880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:02:54.374152 containerd[1446]: time="2025-01-30T13:02:54.374127760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:02:54.374152 containerd[1446]: time="2025-01-30T13:02:54.374149240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:02:54.374215 containerd[1446]: time="2025-01-30T13:02:54.374162680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:02:54.374215 containerd[1446]: time="2025-01-30T13:02:54.374178080Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:02:54.374215 containerd[1446]: time="2025-01-30T13:02:54.374198840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:02:54.374265 containerd[1446]: time="2025-01-30T13:02:54.374220360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:02:54.374265 containerd[1446]: time="2025-01-30T13:02:54.374236200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:02:54.374533 containerd[1446]: time="2025-01-30T13:02:54.374511680Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 30 13:02:54.374652 containerd[1446]: time="2025-01-30T13:02:54.374611120Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:02:54.374652 containerd[1446]: time="2025-01-30T13:02:54.374630080Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:02:54.374710 containerd[1446]: time="2025-01-30T13:02:54.374657040Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:02:54.374710 containerd[1446]: time="2025-01-30T13:02:54.374667360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:02:54.374710 containerd[1446]: time="2025-01-30T13:02:54.374686480Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:02:54.374710 containerd[1446]: time="2025-01-30T13:02:54.374704280Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:02:54.374798 containerd[1446]: time="2025-01-30T13:02:54.374715080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 13:02:54.375191 containerd[1446]: time="2025-01-30T13:02:54.375110520Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:02:54.375191 containerd[1446]: time="2025-01-30T13:02:54.375166560Z" level=info msg="Connect containerd service" Jan 30 13:02:54.375350 containerd[1446]: time="2025-01-30T13:02:54.375205120Z" level=info msg="using legacy CRI server" Jan 30 13:02:54.375350 containerd[1446]: time="2025-01-30T13:02:54.375213040Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:02:54.375495 containerd[1446]: time="2025-01-30T13:02:54.375474960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:02:54.376452 containerd[1446]: time="2025-01-30T13:02:54.376420840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:02:54.376658 containerd[1446]: time="2025-01-30T13:02:54.376628080Z" level=info msg="Start subscribing containerd event" Jan 30 13:02:54.376688 containerd[1446]: time="2025-01-30T13:02:54.376675200Z" level=info msg="Start recovering state" Jan 30 13:02:54.376760 containerd[1446]: time="2025-01-30T13:02:54.376744960Z" level=info msg="Start event monitor" Jan 30 13:02:54.376789 containerd[1446]: time="2025-01-30T13:02:54.376763000Z" level=info msg="Start snapshots syncer" Jan 30 13:02:54.376789 containerd[1446]: time="2025-01-30T13:02:54.376781120Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:02:54.376833 containerd[1446]: time="2025-01-30T13:02:54.376789640Z" level=info msg="Start streaming server" Jan 30 13:02:54.379051 containerd[1446]: time="2025-01-30T13:02:54.379021600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:02:54.379191 containerd[1446]: time="2025-01-30T13:02:54.379174400Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:02:54.379730 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:02:54.381180 containerd[1446]: time="2025-01-30T13:02:54.381148200Z" level=info msg="containerd successfully booted in 0.050907s" Jan 30 13:02:54.480439 tar[1439]: linux-arm64/LICENSE Jan 30 13:02:54.480439 tar[1439]: linux-arm64/README.md Jan 30 13:02:54.492005 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:02:54.756761 sshd_keygen[1437]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:02:54.777098 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:02:54.782715 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:02:54.790090 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:02:54.790278 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:02:54.793199 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:02:54.806434 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:02:54.811358 systemd[1]: Started getty@tty1.service - Getty on tty1. 
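The plugin-loading walk above ends with containerd serving on /run/containerd/containerd.sock and reporting its boot time. All of these records are logfmt-style (time=, level=, msg=, type= fields), so they are easy to post-process; below is a minimal parser sketch, assuming only the quoting rules visible in the log itself.

```python
import re

# Minimal logfmt-ish parser for the containerd lines above: pulls out the
# time=, level= and msg= fields. Quoted values may contain escaped quotes.
FIELD = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_containerd_line(line: str) -> dict:
    fields = {}
    for key, raw in FIELD.findall(line):
        fields[key] = raw[1:-1].replace('\\"', '"') if raw.startswith('"') else raw
    return fields

sample = 'time="2025-01-30T13:02:54.381148200Z" level=info msg="containerd successfully booted in 0.050907s"'
print(parse_containerd_line(sample)["msg"])
```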
Jan 30 13:02:54.813494 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 30 13:02:54.814765 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:02:55.644588 systemd-networkd[1392]: eth0: Gained IPv6LL Jan 30 13:02:55.647994 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:02:55.649904 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:02:55.663866 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:02:55.666342 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:02:55.668464 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:02:55.689829 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:02:55.691406 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 13:02:55.693881 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:02:55.695198 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:02:56.240049 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:02:56.241484 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:02:56.244305 (kubelet)[1529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:02:56.246453 systemd[1]: Startup finished in 685ms (kernel) + 4.944s (initrd) + 4.154s (userspace) = 9.785s. Jan 30 13:02:56.257946 agetty[1505]: failed to open credentials directory Jan 30 13:02:56.258037 agetty[1506]: failed to open credentials directory Jan 30 13:02:56.859707 kubelet[1529]: E0130 13:02:56.859632 1529 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:02:56.864088 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:02:56.864635 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:02:59.933518 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:02:59.935363 systemd[1]: Started sshd@0-10.0.0.100:22-10.0.0.1:48502.service - OpenSSH per-connection server daemon (10.0.0.1:48502). Jan 30 13:03:00.058020 sshd[1543]: Accepted publickey for core from 10.0.0.1 port 48502 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:03:00.060548 sshd-session[1543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:00.080589 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:03:00.091897 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:03:00.093936 systemd-logind[1426]: New session 1 of user core. Jan 30 13:03:00.101564 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:03:00.104508 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:03:00.118359 (systemd)[1547]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:03:00.197519 systemd[1547]: Queued start job for default target default.target. 
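The kubelet failure above is expected on a first boot: /var/lib/kubelet/config.yaml is written by kubeadm init/join, and until it exists the unit exits with status 1 and systemd schedules a restart. A trivial pre-flight sketch capturing the same condition (not part of the boot flow itself):

```python
from pathlib import Path

# The kubelet error above is just a missing-file error: kubeadm writes
# /var/lib/kubelet/config.yaml during `kubeadm init`/`kubeadm join`, and the
# unit keeps restarting until it appears.
CONFIG = Path("/var/lib/kubelet/config.yaml")

if CONFIG.is_file():
    print(f"{CONFIG} present ({CONFIG.stat().st_size} bytes); kubelet can start")
else:
    print(f"{CONFIG} missing; run kubeadm init/join (or provision the file) first")
```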
Jan 30 13:03:00.209450 systemd[1547]: Created slice app.slice - User Application Slice. Jan 30 13:03:00.209505 systemd[1547]: Reached target paths.target - Paths. Jan 30 13:03:00.209517 systemd[1547]: Reached target timers.target - Timers. Jan 30 13:03:00.210907 systemd[1547]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:03:00.221886 systemd[1547]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:03:00.222003 systemd[1547]: Reached target sockets.target - Sockets. Jan 30 13:03:00.222016 systemd[1547]: Reached target basic.target - Basic System. Jan 30 13:03:00.222049 systemd[1547]: Reached target default.target - Main User Target. Jan 30 13:03:00.222077 systemd[1547]: Startup finished in 96ms. Jan 30 13:03:00.222281 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:03:00.223642 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:03:00.283150 systemd[1]: Started sshd@1-10.0.0.100:22-10.0.0.1:48506.service - OpenSSH per-connection server daemon (10.0.0.1:48506). Jan 30 13:03:00.326599 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 48506 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:03:00.327839 sshd-session[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:00.332678 systemd-logind[1426]: New session 2 of user core. Jan 30 13:03:00.340658 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:03:00.393115 sshd[1560]: Connection closed by 10.0.0.1 port 48506 Jan 30 13:03:00.393490 sshd-session[1558]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:00.409896 systemd[1]: sshd@1-10.0.0.100:22-10.0.0.1:48506.service: Deactivated successfully. Jan 30 13:03:00.411553 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:03:00.413179 systemd-logind[1426]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:03:00.414426 systemd[1]: Started sshd@2-10.0.0.100:22-10.0.0.1:48516.service - OpenSSH per-connection server daemon (10.0.0.1:48516). Jan 30 13:03:00.415202 systemd-logind[1426]: Removed session 2. Jan 30 13:03:00.458877 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 48516 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:03:00.460150 sshd-session[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:00.472420 systemd-logind[1426]: New session 3 of user core. Jan 30 13:03:00.481585 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:03:00.538442 sshd[1567]: Connection closed by 10.0.0.1 port 48516 Jan 30 13:03:00.539154 sshd-session[1565]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:00.558415 systemd[1]: sshd@2-10.0.0.100:22-10.0.0.1:48516.service: Deactivated successfully. Jan 30 13:03:00.560399 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:03:00.562280 systemd-logind[1426]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:03:00.576778 systemd[1]: Started sshd@3-10.0.0.100:22-10.0.0.1:48522.service - OpenSSH per-connection server daemon (10.0.0.1:48522). Jan 30 13:03:00.580154 systemd-logind[1426]: Removed session 3. 
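The next several entries are short SSH sessions opening and closing in quick succession. A small sketch that pairs each "Accepted publickey" record with its matching "Connection closed" record by source host and port, assuming the journal text is available as a list of lines:

```python
import re

# Pair sshd "Accepted publickey ... port N" lines with the later
# "Connection closed ... port N" lines to summarize session churn.
accepted = re.compile(r"Accepted publickey for (\S+) from (\S+) port (\d+)")
closed = re.compile(r"Connection closed by (\S+) port (\d+)")

def summarize(journal_lines):
    open_ports, sessions = {}, []
    for line in journal_lines:
        if m := accepted.search(line):
            user, host, port = m.groups()
            open_ports[(host, port)] = user
        elif m := closed.search(line):
            host, port = m.groups()
            user = open_ports.pop((host, port), "?")
            sessions.append((user, host, port))
    return sessions

print(summarize([
    "sshd[1558]: Accepted publickey for core from 10.0.0.1 port 48506 ssh2: ...",
    "sshd[1560]: Connection closed by 10.0.0.1 port 48506",
]))
```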
Jan 30 13:03:00.618361 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 48522 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:03:00.620077 sshd-session[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:00.624816 systemd-logind[1426]: New session 4 of user core. Jan 30 13:03:00.633578 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:03:00.689277 sshd[1574]: Connection closed by 10.0.0.1 port 48522 Jan 30 13:03:00.689192 sshd-session[1572]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:00.704202 systemd[1]: sshd@3-10.0.0.100:22-10.0.0.1:48522.service: Deactivated successfully. Jan 30 13:03:00.710327 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:03:00.713264 systemd-logind[1426]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:03:00.714706 systemd-logind[1426]: Removed session 4. Jan 30 13:03:00.730791 systemd[1]: Started sshd@4-10.0.0.100:22-10.0.0.1:48524.service - OpenSSH per-connection server daemon (10.0.0.1:48524). Jan 30 13:03:00.778835 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 48524 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:03:00.780321 sshd-session[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:00.785865 systemd-logind[1426]: New session 5 of user core. Jan 30 13:03:00.795574 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:03:00.870661 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:03:00.871260 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:03:00.892358 sudo[1582]: pam_unix(sudo:session): session closed for user root Jan 30 13:03:00.895275 sshd[1581]: Connection closed by 10.0.0.1 port 48524 Jan 30 13:03:00.895729 sshd-session[1579]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:00.901882 systemd[1]: sshd@4-10.0.0.100:22-10.0.0.1:48524.service: Deactivated successfully. Jan 30 13:03:00.905053 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:03:00.906746 systemd-logind[1426]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:03:00.918354 systemd[1]: Started sshd@5-10.0.0.100:22-10.0.0.1:48536.service - OpenSSH per-connection server daemon (10.0.0.1:48536). Jan 30 13:03:00.920148 systemd-logind[1426]: Removed session 5. Jan 30 13:03:00.966296 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 48536 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:03:00.968241 sshd-session[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:00.973563 systemd-logind[1426]: New session 6 of user core. Jan 30 13:03:00.991613 systemd[1]: Started session-6.scope - Session 6 of User core. 
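sudo logs each privileged command with the invoking user, working directory, target user, and command, as in the setenforce line above. A sketch extracting that audit trail, using the exact field layout of these entries:

```python
import re

# Extract the audit trail from the sudo lines above
# (user, working directory, target user, command).
SUDO = re.compile(r"(\w+) : PWD=(\S+) ; USER=(\w+) ; COMMAND=(.+)$")

line = "core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1"
if m := SUDO.search(line):
    user, pwd, target, cmd = m.groups()
    print(f"{user} ran {cmd!r} as {target} from {pwd}")
```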
Jan 30 13:03:01.047887 sudo[1591]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:03:01.048193 sudo[1591]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:03:01.051635 sudo[1591]: pam_unix(sudo:session): session closed for user root Jan 30 13:03:01.057051 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 30 13:03:01.057332 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:03:01.077797 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 13:03:01.109733 augenrules[1613]: No rules Jan 30 13:03:01.111413 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:03:01.111657 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 13:03:01.113310 sudo[1590]: pam_unix(sudo:session): session closed for user root Jan 30 13:03:01.116348 sshd[1589]: Connection closed by 10.0.0.1 port 48536 Jan 30 13:03:01.117527 sshd-session[1587]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:01.128292 systemd[1]: sshd@5-10.0.0.100:22-10.0.0.1:48536.service: Deactivated successfully. Jan 30 13:03:01.130855 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:03:01.132943 systemd-logind[1426]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:03:01.142758 systemd[1]: Started sshd@6-10.0.0.100:22-10.0.0.1:48550.service - OpenSSH per-connection server daemon (10.0.0.1:48550). Jan 30 13:03:01.148759 systemd-logind[1426]: Removed session 6. Jan 30 13:03:01.184503 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 48550 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:03:01.186299 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:01.191037 systemd-logind[1426]: New session 7 of user core. Jan 30 13:03:01.208592 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:03:01.262057 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:03:01.262360 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:03:01.696660 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:03:01.696918 (dockerd)[1646]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:03:02.175311 dockerd[1646]: time="2025-01-30T13:03:02.175166960Z" level=info msg="Starting up" Jan 30 13:03:02.442059 dockerd[1646]: time="2025-01-30T13:03:02.441925240Z" level=info msg="Loading containers: start." Jan 30 13:03:02.609417 kernel: Initializing XFRM netlink socket Jan 30 13:03:02.694613 systemd-networkd[1392]: docker0: Link UP Jan 30 13:03:02.737586 dockerd[1646]: time="2025-01-30T13:03:02.737534840Z" level=info msg="Loading containers: done." Jan 30 13:03:02.754447 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck616884263-merged.mount: Deactivated successfully. 
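dockerd brackets its startup with timestamped "Starting up" and "Daemon has completed initialization" records (the latter appears just below). Since the timestamps are RFC 3339, the daemon's initialization time falls out directly; a sketch using the two values from this journal, with fractional seconds trimmed to microseconds for datetime.fromisoformat:

```python
from datetime import datetime

# Diff the RFC 3339 timestamps on dockerd's "Starting up" and
# "Daemon has completed initialization" journal lines.
t0 = datetime.fromisoformat("2025-01-30T13:03:02.175166960"[:26])  # trim ns -> µs
t1 = datetime.fromisoformat("2025-01-30T13:03:02.773311400"[:26])
print(f"dockerd initialized in {(t1 - t0).total_seconds():.3f}s")
```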
Jan 30 13:03:02.772897 dockerd[1646]: time="2025-01-30T13:03:02.772828480Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:03:02.773103 dockerd[1646]: time="2025-01-30T13:03:02.772946200Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 30 13:03:02.773348 dockerd[1646]: time="2025-01-30T13:03:02.773311400Z" level=info msg="Daemon has completed initialization" Jan 30 13:03:02.823176 dockerd[1646]: time="2025-01-30T13:03:02.823113520Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:03:02.823719 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:03:04.037408 containerd[1446]: time="2025-01-30T13:03:04.037343400Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 13:03:04.840062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2297523867.mount: Deactivated successfully. Jan 30 13:03:05.828809 containerd[1446]: time="2025-01-30T13:03:05.828704280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:05.829472 containerd[1446]: time="2025-01-30T13:03:05.829417080Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29864937" Jan 30 13:03:05.830460 containerd[1446]: time="2025-01-30T13:03:05.830423760Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:05.834734 containerd[1446]: time="2025-01-30T13:03:05.834686640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:05.835853 containerd[1446]: time="2025-01-30T13:03:05.835815640Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 1.79841092s" Jan 30 13:03:05.835904 containerd[1446]: time="2025-01-30T13:03:05.835853720Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\"" Jan 30 13:03:05.856836 containerd[1446]: time="2025-01-30T13:03:05.856783720Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 13:03:07.103089 containerd[1446]: time="2025-01-30T13:03:07.103017520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:07.104048 containerd[1446]: time="2025-01-30T13:03:07.103780920Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901563" Jan 30 13:03:07.104779 containerd[1446]: time="2025-01-30T13:03:07.104696280Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:07.107609 containerd[1446]: time="2025-01-30T13:03:07.107550720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:07.108799 containerd[1446]: time="2025-01-30T13:03:07.108643400Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 1.25181536s" Jan 30 13:03:07.108799 containerd[1446]: time="2025-01-30T13:03:07.108682400Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\"" Jan 30 13:03:07.114323 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:03:07.119721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:03:07.133220 containerd[1446]: time="2025-01-30T13:03:07.133169680Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 13:03:07.223093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:03:07.227560 (kubelet)[1933]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:03:07.317197 kubelet[1933]: E0130 13:03:07.317106 1933 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:03:07.320652 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:03:07.320816 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
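Each pull above reports both the bytes read and the elapsed time, so effective registry throughput falls out directly; a sketch using the kube-controller-manager numbers from these lines:

```python
# Effective pull throughput for the kube-controller-manager image,
# using the "bytes read" and duration reported in the containerd lines above.
bytes_read = 26_901_563   # from "stop pulling image ... bytes read=26901563"
duration_s = 1.25181536   # from "... in 1.25181536s"
print(f"{bytes_read / duration_s / 1e6:.1f} MB/s")
```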
Jan 30 13:03:08.292511 containerd[1446]: time="2025-01-30T13:03:08.292463240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:08.294393 containerd[1446]: time="2025-01-30T13:03:08.294315280Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164340" Jan 30 13:03:08.295516 containerd[1446]: time="2025-01-30T13:03:08.295481160Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:08.299200 containerd[1446]: time="2025-01-30T13:03:08.299155280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:08.300008 containerd[1446]: time="2025-01-30T13:03:08.299856160Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.16649324s" Jan 30 13:03:08.300008 containerd[1446]: time="2025-01-30T13:03:08.299891080Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\"" Jan 30 13:03:08.320124 containerd[1446]: time="2025-01-30T13:03:08.320077160Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 13:03:09.299350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2075401282.mount: Deactivated successfully. 
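The tmpmount unit names in these entries are systemd-escaped paths: "-" separates path components and a literal dash is encoded as \x2d (the encoding systemd-escape --path produces). A sketch decoding them back into filesystem paths:

```python
import re

# Decode systemd-escaped mount unit names like
# "var-lib-containerd-tmpmounts-containerd\x2dmount2075401282.mount"
# back into filesystem paths: "-" separates components, "\x2d" is a dash.
def unit_to_path(unit: str) -> str:
    stem = unit.removesuffix(".mount")
    parts = stem.split("-")
    decoded = [re.sub(r"\\x([0-9a-f]{2})",
                      lambda m: chr(int(m.group(1), 16)), p) for p in parts]
    return "/" + "/".join(decoded)

print(unit_to_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount2075401282.mount"))
# -> /var/lib/containerd/tmpmounts/containerd-mount2075401282
```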
Jan 30 13:03:09.599768 containerd[1446]: time="2025-01-30T13:03:09.599612960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:09.601276 containerd[1446]: time="2025-01-30T13:03:09.601221120Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662714" Jan 30 13:03:09.603092 containerd[1446]: time="2025-01-30T13:03:09.603045640Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:09.606088 containerd[1446]: time="2025-01-30T13:03:09.606035280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:09.606627 containerd[1446]: time="2025-01-30T13:03:09.606592080Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.28647348s" Jan 30 13:03:09.606669 containerd[1446]: time="2025-01-30T13:03:09.606628680Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 30 13:03:09.627689 containerd[1446]: time="2025-01-30T13:03:09.627644480Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:03:10.234855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2478589673.mount: Deactivated successfully. 
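containerd reports pull durations in Go's duration notation, mixing units ("1.28647348s" above, "552.20648ms" further down). A sketch normalizing those strings to float seconds for comparison:

```python
import re

# Normalize Go-style durations ("552.20648ms", "1.16649324s", "2.1940638s")
# into float seconds. Unit order in the alternation matters: "ms" before "m"/"s".
UNITS = {"ns": 1e-9, "us": 1e-6, "ms": 1e-3, "s": 1.0, "m": 60.0, "h": 3600.0}
PART = re.compile(r"(\d+(?:\.\d+)?)(ns|us|ms|s|m|h)")

def parse_go_duration(text: str) -> float:
    return sum(float(v) * UNITS[u] for v, u in PART.findall(text))

for d in ("1.16649324s", "552.20648ms", "2.1940638s"):
    print(d, "->", parse_go_duration(d))
```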
Jan 30 13:03:11.191912 containerd[1446]: time="2025-01-30T13:03:11.191783720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:11.194279 containerd[1446]: time="2025-01-30T13:03:11.194217520Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 30 13:03:11.197424 containerd[1446]: time="2025-01-30T13:03:11.196270520Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:11.198938 containerd[1446]: time="2025-01-30T13:03:11.198887880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:11.200601 containerd[1446]: time="2025-01-30T13:03:11.200549040Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.57286232s" Jan 30 13:03:11.200601 containerd[1446]: time="2025-01-30T13:03:11.200597360Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 30 13:03:11.221558 containerd[1446]: time="2025-01-30T13:03:11.221508480Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 13:03:11.757819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3268077528.mount: Deactivated successfully. 
Jan 30 13:03:11.768274 containerd[1446]: time="2025-01-30T13:03:11.768213800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:11.769156 containerd[1446]: time="2025-01-30T13:03:11.769099720Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jan 30 13:03:11.769852 containerd[1446]: time="2025-01-30T13:03:11.769811080Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:11.773234 containerd[1446]: time="2025-01-30T13:03:11.772947200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:11.773803 containerd[1446]: time="2025-01-30T13:03:11.773761760Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 552.20648ms" Jan 30 13:03:11.773803 containerd[1446]: time="2025-01-30T13:03:11.773803040Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 30 13:03:11.794473 containerd[1446]: time="2025-01-30T13:03:11.794311920Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 13:03:12.405188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3881293831.mount: Deactivated successfully. Jan 30 13:03:13.979059 containerd[1446]: time="2025-01-30T13:03:13.978976240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:13.979971 containerd[1446]: time="2025-01-30T13:03:13.979920960Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Jan 30 13:03:13.981643 containerd[1446]: time="2025-01-30T13:03:13.981611720Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:13.987415 containerd[1446]: time="2025-01-30T13:03:13.986852520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:13.988466 containerd[1446]: time="2025-01-30T13:03:13.988418120Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.1940638s" Jan 30 13:03:13.988466 containerd[1446]: time="2025-01-30T13:03:13.988457160Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 30 13:03:17.571095 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
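kubelet.service is now on its second scheduled restart (failures at 13:02:56, 13:03:07, 13:03:17, roughly ten seconds apart), consistent with Restart=always plus a RestartSec of about 10s; the exact value is an assumption here and should be read from the unit file. A sketch projecting the next attempts under that assumption:

```python
from datetime import datetime, timedelta

# Project the next few kubelet restart attempts, assuming the ~10s cadence
# observed above. RestartSec=10 is an assumption; check the unit file.
last = datetime(2025, 1, 30, 13, 3, 17)
restart_sec = timedelta(seconds=10)
for n in range(3, 6):
    last += restart_sec
    print(f"restart #{n} expected around {last.time()}")
```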
Jan 30 13:03:17.584624 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:03:17.685895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:03:17.690990 (kubelet)[2153]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:03:17.731805 kubelet[2153]: E0130 13:03:17.731730 2153 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:03:17.734446 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:03:17.734597 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:03:19.493209 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:03:19.504674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:03:19.523687 systemd[1]: Reloading requested from client PID 2169 ('systemctl') (unit session-7.scope)... Jan 30 13:03:19.523704 systemd[1]: Reloading... Jan 30 13:03:19.599413 zram_generator::config[2214]: No configuration found. Jan 30 13:03:19.715560 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:03:19.770044 systemd[1]: Reloading finished in 246 ms. Jan 30 13:03:19.821525 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:03:19.821590 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:03:19.822463 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:03:19.825144 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:03:19.927568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:03:19.932674 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:03:19.977408 kubelet[2254]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:03:19.977408 kubelet[2254]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:03:19.977408 kubelet[2254]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
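Each of the flag deprecation warnings below points at the same remedy: move the setting into the file passed via --config. A hedged sketch of the equivalent KubeletConfiguration stanza, reusing the containerd endpoint and Flexvolume plugin directory that appear elsewhere in this journal; treat the values as illustrative, not as this host's actual config:

```python
# Equivalent KubeletConfiguration for the two migratable flags warned about
# below (--container-runtime-endpoint, --volume-plugin-dir). Values are taken
# from paths seen in this journal, but the stanza itself is a sketch.
kubelet_config = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
"""
print(kubelet_config)
```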
Jan 30 13:03:19.977770 kubelet[2254]: I0130 13:03:19.977472 2254 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:03:21.033164 kubelet[2254]: I0130 13:03:21.033123 2254 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:03:21.033164 kubelet[2254]: I0130 13:03:21.033157 2254 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:03:21.033546 kubelet[2254]: I0130 13:03:21.033417 2254 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:03:21.080295 kubelet[2254]: I0130 13:03:21.080245 2254 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:03:21.081055 kubelet[2254]: E0130 13:03:21.081034 2254 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.100:6443: connect: connection refused Jan 30 13:03:21.094868 kubelet[2254]: I0130 13:03:21.094839 2254 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:03:21.098871 kubelet[2254]: I0130 13:03:21.098796 2254 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:03:21.099066 kubelet[2254]: I0130 13:03:21.098868 2254 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:03:21.099323 kubelet[2254]: I0130 13:03:21.099298 2254 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:03:21.099323 kubelet[2254]: I0130 13:03:21.099311 2254 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:03:21.099990 kubelet[2254]: I0130 13:03:21.099960 2254 state_mem.go:36] "Initialized new in-memory state store" Jan 30 
13:03:21.102117 kubelet[2254]: I0130 13:03:21.102036 2254 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:03:21.102117 kubelet[2254]: I0130 13:03:21.102066 2254 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:03:21.102422 kubelet[2254]: I0130 13:03:21.102409 2254 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:03:21.102580 kubelet[2254]: I0130 13:03:21.102519 2254 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:03:21.102909 kubelet[2254]: W0130 13:03:21.102807 2254 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 30 13:03:21.102909 kubelet[2254]: E0130 13:03:21.102884 2254 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 30 13:03:21.103878 kubelet[2254]: W0130 13:03:21.103817 2254 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.100:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 30 13:03:21.103878 kubelet[2254]: E0130 13:03:21.103872 2254 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.100:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 30 13:03:21.104769 kubelet[2254]: I0130 13:03:21.104551 2254 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 13:03:21.104967 kubelet[2254]: I0130 13:03:21.104947 2254 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:03:21.105088 kubelet[2254]: W0130 13:03:21.105076 2254 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
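Every reflector error above is the same underlying condition: nothing is listening on 10.0.0.100:6443 yet, because the API server itself is about to be started as a static pod by this kubelet. A sketch of the reachability probe those errors amount to:

```python
import socket

# The repeated "dial tcp 10.0.0.100:6443: connect: connection refused" errors
# just mean the API server is not listening yet. A quick TCP probe:
def apiserver_up(host="10.0.0.100", port=6443, timeout=2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("apiserver reachable:", apiserver_up())
```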
Jan 30 13:03:21.106196 kubelet[2254]: I0130 13:03:21.106097 2254 server.go:1264] "Started kubelet" Jan 30 13:03:21.108153 kubelet[2254]: I0130 13:03:21.107381 2254 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:03:21.108153 kubelet[2254]: I0130 13:03:21.107410 2254 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:03:21.108153 kubelet[2254]: I0130 13:03:21.107680 2254 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:03:21.108153 kubelet[2254]: I0130 13:03:21.107715 2254 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:03:21.108829 kubelet[2254]: I0130 13:03:21.108808 2254 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:03:21.118566 kubelet[2254]: I0130 13:03:21.118192 2254 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:03:21.118566 kubelet[2254]: I0130 13:03:21.118288 2254 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:03:21.119279 kubelet[2254]: E0130 13:03:21.119234 2254 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="200ms" Jan 30 13:03:21.120054 kubelet[2254]: I0130 13:03:21.120027 2254 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:03:21.120054 kubelet[2254]: W0130 13:03:21.120018 2254 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 30 13:03:21.120192 kubelet[2254]: E0130 13:03:21.120076 2254 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 30 13:03:21.120246 kubelet[2254]: I0130 13:03:21.120194 2254 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:03:21.120377 kubelet[2254]: I0130 13:03:21.120288 2254 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:03:21.120884 kubelet[2254]: E0130 13:03:21.120863 2254 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:03:21.121741 kubelet[2254]: I0130 13:03:21.121724 2254 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:03:21.133129 kubelet[2254]: E0130 13:03:21.125298 2254 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.100:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.100:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f7a12828538d8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:03:21.10606972 +0000 UTC m=+1.170088081,LastTimestamp:2025-01-30 13:03:21.10606972 +0000 UTC m=+1.170088081,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 13:03:21.135835 kubelet[2254]: I0130 13:03:21.135800 2254 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:03:21.135835 kubelet[2254]: I0130 13:03:21.135819 2254 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:03:21.135835 kubelet[2254]: I0130 13:03:21.135836 2254 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:03:21.137065 kubelet[2254]: I0130 13:03:21.137020 2254 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:03:21.138337 kubelet[2254]: I0130 13:03:21.138309 2254 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:03:21.138725 kubelet[2254]: I0130 13:03:21.138710 2254 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:03:21.138762 kubelet[2254]: I0130 13:03:21.138738 2254 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:03:21.138825 kubelet[2254]: E0130 13:03:21.138791 2254 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:03:21.200850 kubelet[2254]: W0130 13:03:21.200754 2254 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 30 13:03:21.200850 kubelet[2254]: E0130 13:03:21.200824 2254 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 30 13:03:21.212140 kubelet[2254]: I0130 13:03:21.212102 2254 policy_none.go:49] "None policy: Start" Jan 30 13:03:21.213079 kubelet[2254]: I0130 13:03:21.213049 2254 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:03:21.213079 kubelet[2254]: I0130 13:03:21.213080 2254 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:03:21.219803 kubelet[2254]: I0130 13:03:21.219757 2254 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:03:21.220179 kubelet[2254]: E0130 13:03:21.220139 2254 kubelet_node_status.go:96] "Unable to register node with API server" err="Post 
\"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Jan 30 13:03:21.237650 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:03:21.238966 kubelet[2254]: E0130 13:03:21.238929 2254 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:03:21.248706 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:03:21.251455 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 13:03:21.262424 kubelet[2254]: I0130 13:03:21.262271 2254 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:03:21.262859 kubelet[2254]: I0130 13:03:21.262583 2254 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:03:21.262859 kubelet[2254]: I0130 13:03:21.262722 2254 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:03:21.263954 kubelet[2254]: E0130 13:03:21.263934 2254 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 30 13:03:21.320290 kubelet[2254]: E0130 13:03:21.320168 2254 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="400ms" Jan 30 13:03:21.421820 kubelet[2254]: I0130 13:03:21.421788 2254 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:03:21.422175 kubelet[2254]: E0130 13:03:21.422152 2254 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Jan 30 13:03:21.439363 kubelet[2254]: I0130 13:03:21.439321 2254 topology_manager.go:215] "Topology Admit Handler" podUID="8a149a2d54ba89a2973e9df992ec02f3" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 30 13:03:21.440741 kubelet[2254]: I0130 13:03:21.440625 2254 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 30 13:03:21.442145 kubelet[2254]: I0130 13:03:21.442106 2254 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 30 13:03:21.449964 systemd[1]: Created slice kubepods-burstable-pod8a149a2d54ba89a2973e9df992ec02f3.slice - libcontainer container kubepods-burstable-pod8a149a2d54ba89a2973e9df992ec02f3.slice. Jan 30 13:03:21.471667 systemd[1]: Created slice kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice - libcontainer container kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice. Jan 30 13:03:21.492276 systemd[1]: Created slice kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice - libcontainer container kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice. 
Jan 30 13:03:21.622330 kubelet[2254]: I0130 13:03:21.622220 2254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a149a2d54ba89a2973e9df992ec02f3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8a149a2d54ba89a2973e9df992ec02f3\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:03:21.622633 kubelet[2254]: I0130 13:03:21.622498 2254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:03:21.622824 kubelet[2254]: I0130 13:03:21.622729 2254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:03:21.622925 kubelet[2254]: I0130 13:03:21.622908 2254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:03:21.623095 kubelet[2254]: I0130 13:03:21.623007 2254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a149a2d54ba89a2973e9df992ec02f3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8a149a2d54ba89a2973e9df992ec02f3\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:03:21.623095 kubelet[2254]: I0130 13:03:21.623033 2254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a149a2d54ba89a2973e9df992ec02f3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8a149a2d54ba89a2973e9df992ec02f3\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:03:21.623095 kubelet[2254]: I0130 13:03:21.623054 2254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:03:21.623095 kubelet[2254]: I0130 13:03:21.623070 2254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:03:21.623231 kubelet[2254]: I0130 13:03:21.623210 2254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " 
pod="kube-system/kube-scheduler-localhost" Jan 30 13:03:21.721001 kubelet[2254]: E0130 13:03:21.720949 2254 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="800ms" Jan 30 13:03:21.769278 kubelet[2254]: E0130 13:03:21.769249 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:21.770032 containerd[1446]: time="2025-01-30T13:03:21.769996040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8a149a2d54ba89a2973e9df992ec02f3,Namespace:kube-system,Attempt:0,}" Jan 30 13:03:21.791217 kubelet[2254]: E0130 13:03:21.791156 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:21.791942 containerd[1446]: time="2025-01-30T13:03:21.791722880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}" Jan 30 13:03:21.795285 kubelet[2254]: E0130 13:03:21.795261 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:21.795861 containerd[1446]: time="2025-01-30T13:03:21.795827760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}" Jan 30 13:03:21.824213 kubelet[2254]: I0130 13:03:21.824172 2254 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:03:21.824514 kubelet[2254]: E0130 13:03:21.824483 2254 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Jan 30 13:03:22.209526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3789089223.mount: Deactivated successfully. 
Jan 30 13:03:22.215140 containerd[1446]: time="2025-01-30T13:03:22.215080840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:03:22.217466 containerd[1446]: time="2025-01-30T13:03:22.217381200Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 30 13:03:22.218782 containerd[1446]: time="2025-01-30T13:03:22.218716880Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:03:22.221542 containerd[1446]: time="2025-01-30T13:03:22.221491920Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:03:22.222304 containerd[1446]: time="2025-01-30T13:03:22.222092240Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:03:22.223699 containerd[1446]: time="2025-01-30T13:03:22.223660360Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:03:22.224429 containerd[1446]: time="2025-01-30T13:03:22.224391120Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:03:22.227157 containerd[1446]: time="2025-01-30T13:03:22.227105600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:03:22.228301 containerd[1446]: time="2025-01-30T13:03:22.228266040Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 458.18676ms" Jan 30 13:03:22.231229 containerd[1446]: time="2025-01-30T13:03:22.231181200Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 439.36052ms" Jan 30 13:03:22.235677 containerd[1446]: time="2025-01-30T13:03:22.235633840Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 439.7282ms" Jan 30 13:03:22.238277 kubelet[2254]: W0130 13:03:22.238188 2254 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.100:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 30 13:03:22.238277 kubelet[2254]: E0130 13:03:22.238257 
2254 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.100:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 30 13:03:22.272962 kubelet[2254]: W0130 13:03:22.272827 2254 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 30 13:03:22.272962 kubelet[2254]: E0130 13:03:22.272907 2254 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 30 13:03:22.364395 containerd[1446]: time="2025-01-30T13:03:22.363825560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:03:22.364395 containerd[1446]: time="2025-01-30T13:03:22.363911960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:03:22.364395 containerd[1446]: time="2025-01-30T13:03:22.363928360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:22.364395 containerd[1446]: time="2025-01-30T13:03:22.364021400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:22.365142 containerd[1446]: time="2025-01-30T13:03:22.365013400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:03:22.365142 containerd[1446]: time="2025-01-30T13:03:22.365076760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:03:22.365142 containerd[1446]: time="2025-01-30T13:03:22.365101840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:22.365283 containerd[1446]: time="2025-01-30T13:03:22.365183560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:22.365921 containerd[1446]: time="2025-01-30T13:03:22.365664640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:03:22.365921 containerd[1446]: time="2025-01-30T13:03:22.365712640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:03:22.365921 containerd[1446]: time="2025-01-30T13:03:22.365724680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:22.365921 containerd[1446]: time="2025-01-30T13:03:22.365811200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:22.395644 systemd[1]: Started cri-containerd-4d6d16fcada75abf781ae0ac0fceff59b6bcf4816a436191acd00d70e9e65221.scope - libcontainer container 4d6d16fcada75abf781ae0ac0fceff59b6bcf4816a436191acd00d70e9e65221. Jan 30 13:03:22.397080 systemd[1]: Started cri-containerd-bb22f59335333f000a888dbf8115310bf9da692d539f785af1cf6c093a748a60.scope - libcontainer container bb22f59335333f000a888dbf8115310bf9da692d539f785af1cf6c093a748a60. Jan 30 13:03:22.404234 systemd[1]: Started cri-containerd-53bd9219014a2db7aa2532d86f7e45ec6e8408ad6f27ec721c74d896da4b37f6.scope - libcontainer container 53bd9219014a2db7aa2532d86f7e45ec6e8408ad6f27ec721c74d896da4b37f6. Jan 30 13:03:22.430005 containerd[1446]: time="2025-01-30T13:03:22.429956120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d6d16fcada75abf781ae0ac0fceff59b6bcf4816a436191acd00d70e9e65221\"" Jan 30 13:03:22.433790 kubelet[2254]: E0130 13:03:22.433743 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:22.437757 containerd[1446]: time="2025-01-30T13:03:22.437640520Z" level=info msg="CreateContainer within sandbox \"4d6d16fcada75abf781ae0ac0fceff59b6bcf4816a436191acd00d70e9e65221\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:03:22.440513 containerd[1446]: time="2025-01-30T13:03:22.440390760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8a149a2d54ba89a2973e9df992ec02f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb22f59335333f000a888dbf8115310bf9da692d539f785af1cf6c093a748a60\"" Jan 30 13:03:22.442603 kubelet[2254]: E0130 13:03:22.442567 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:22.444999 containerd[1446]: time="2025-01-30T13:03:22.444887480Z" level=info msg="CreateContainer within sandbox \"bb22f59335333f000a888dbf8115310bf9da692d539f785af1cf6c093a748a60\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:03:22.449080 containerd[1446]: time="2025-01-30T13:03:22.449040400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"53bd9219014a2db7aa2532d86f7e45ec6e8408ad6f27ec721c74d896da4b37f6\"" Jan 30 13:03:22.449828 kubelet[2254]: E0130 13:03:22.449800 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:22.451731 containerd[1446]: time="2025-01-30T13:03:22.451704560Z" level=info msg="CreateContainer within sandbox \"53bd9219014a2db7aa2532d86f7e45ec6e8408ad6f27ec721c74d896da4b37f6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:03:22.456101 containerd[1446]: time="2025-01-30T13:03:22.456060160Z" level=info msg="CreateContainer within sandbox \"4d6d16fcada75abf781ae0ac0fceff59b6bcf4816a436191acd00d70e9e65221\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bb43d554bd5b01b4e4c56a30ec6378e39f720df78723ed54133324f3874a857a\"" Jan 30 
13:03:22.456735 containerd[1446]: time="2025-01-30T13:03:22.456701360Z" level=info msg="StartContainer for \"bb43d554bd5b01b4e4c56a30ec6378e39f720df78723ed54133324f3874a857a\"" Jan 30 13:03:22.463081 containerd[1446]: time="2025-01-30T13:03:22.462979600Z" level=info msg="CreateContainer within sandbox \"bb22f59335333f000a888dbf8115310bf9da692d539f785af1cf6c093a748a60\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"76f2d53a7b34f7ccafd77cc09f1a265c7cdf00ec5d0b0b8bfecb256a7793debf\"" Jan 30 13:03:22.463856 containerd[1446]: time="2025-01-30T13:03:22.463829280Z" level=info msg="StartContainer for \"76f2d53a7b34f7ccafd77cc09f1a265c7cdf00ec5d0b0b8bfecb256a7793debf\"" Jan 30 13:03:22.471008 containerd[1446]: time="2025-01-30T13:03:22.470876000Z" level=info msg="CreateContainer within sandbox \"53bd9219014a2db7aa2532d86f7e45ec6e8408ad6f27ec721c74d896da4b37f6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"85b135ddb7c86a19df9d39eb50229ea35ea911837827944b7e95c530c5d5c3f6\"" Jan 30 13:03:22.471631 containerd[1446]: time="2025-01-30T13:03:22.471601360Z" level=info msg="StartContainer for \"85b135ddb7c86a19df9d39eb50229ea35ea911837827944b7e95c530c5d5c3f6\"" Jan 30 13:03:22.483526 systemd[1]: Started cri-containerd-bb43d554bd5b01b4e4c56a30ec6378e39f720df78723ed54133324f3874a857a.scope - libcontainer container bb43d554bd5b01b4e4c56a30ec6378e39f720df78723ed54133324f3874a857a. Jan 30 13:03:22.489752 systemd[1]: Started cri-containerd-76f2d53a7b34f7ccafd77cc09f1a265c7cdf00ec5d0b0b8bfecb256a7793debf.scope - libcontainer container 76f2d53a7b34f7ccafd77cc09f1a265c7cdf00ec5d0b0b8bfecb256a7793debf. Jan 30 13:03:22.495036 systemd[1]: Started cri-containerd-85b135ddb7c86a19df9d39eb50229ea35ea911837827944b7e95c530c5d5c3f6.scope - libcontainer container 85b135ddb7c86a19df9d39eb50229ea35ea911837827944b7e95c530c5d5c3f6. 
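
Each pod above follows the same CRI sequence against containerd: RunPodSandbox starts the pause container and returns a sandbox ID, CreateContainer registers the real container inside that sandbox, and StartContainer launches it, at which point systemd tracks it as a cri-containerd-<id>.scope unit. A stripped-down client for that sequence, reusing the kube-scheduler metadata from the log; the configs are deliberately minimal, and a real call would also need image, command, and mount configuration.

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-scheduler-localhost",
			Namespace: "kube-system",
			Uid:       "4b186e12ac9f083392bb0d1970b49be4",
		},
	}
	// "RunPodSandbox ... returns sandbox id" in the log above.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// "CreateContainer within sandbox ... returns container id".
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        &runtimeapi.ContainerConfig{Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler"}},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// "StartContainer for ... returns successfully".
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
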
Jan 30 13:03:22.522732 kubelet[2254]: E0130 13:03:22.521513 2254 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="1.6s" Jan 30 13:03:22.525664 containerd[1446]: time="2025-01-30T13:03:22.525506360Z" level=info msg="StartContainer for \"bb43d554bd5b01b4e4c56a30ec6378e39f720df78723ed54133324f3874a857a\" returns successfully" Jan 30 13:03:22.557816 containerd[1446]: time="2025-01-30T13:03:22.555498160Z" level=info msg="StartContainer for \"76f2d53a7b34f7ccafd77cc09f1a265c7cdf00ec5d0b0b8bfecb256a7793debf\" returns successfully" Jan 30 13:03:22.557816 containerd[1446]: time="2025-01-30T13:03:22.555593880Z" level=info msg="StartContainer for \"85b135ddb7c86a19df9d39eb50229ea35ea911837827944b7e95c530c5d5c3f6\" returns successfully" Jan 30 13:03:22.600184 kubelet[2254]: W0130 13:03:22.600053 2254 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 30 13:03:22.600184 kubelet[2254]: E0130 13:03:22.600120 2254 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 30 13:03:22.625603 kubelet[2254]: I0130 13:03:22.625563 2254 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:03:22.626141 kubelet[2254]: E0130 13:03:22.626013 2254 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Jan 30 13:03:22.707829 kubelet[2254]: W0130 13:03:22.707737 2254 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 30 13:03:22.707829 kubelet[2254]: E0130 13:03:22.707820 2254 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Jan 30 13:03:23.148095 kubelet[2254]: E0130 13:03:23.148060 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:23.153021 kubelet[2254]: E0130 13:03:23.152997 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:23.155045 kubelet[2254]: E0130 13:03:23.155027 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:24.156139 kubelet[2254]: E0130 13:03:24.156090 2254 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 30 13:03:24.157429 
kubelet[2254]: E0130 13:03:24.157394 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:24.227681 kubelet[2254]: I0130 13:03:24.227592 2254 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:03:24.244278 kubelet[2254]: I0130 13:03:24.244231 2254 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 30 13:03:24.262332 kubelet[2254]: E0130 13:03:24.262293 2254 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:03:24.362590 kubelet[2254]: E0130 13:03:24.362520 2254 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:03:24.463107 kubelet[2254]: E0130 13:03:24.462966 2254 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:03:24.563754 kubelet[2254]: E0130 13:03:24.563710 2254 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:03:24.664668 kubelet[2254]: E0130 13:03:24.664528 2254 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:03:24.765539 kubelet[2254]: E0130 13:03:24.765427 2254 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:03:24.866051 kubelet[2254]: E0130 13:03:24.866009 2254 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:03:24.966757 kubelet[2254]: E0130 13:03:24.966712 2254 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:03:25.067970 kubelet[2254]: E0130 13:03:25.067833 2254 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:03:25.171327 kubelet[2254]: E0130 13:03:25.169998 2254 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:03:25.270579 kubelet[2254]: E0130 13:03:25.270455 2254 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:03:25.371016 kubelet[2254]: E0130 13:03:25.370888 2254 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:03:26.105211 kubelet[2254]: I0130 13:03:26.105162 2254 apiserver.go:52] "Watching apiserver" Jan 30 13:03:26.118718 kubelet[2254]: I0130 13:03:26.118663 2254 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:03:26.398947 systemd[1]: Reloading requested from client PID 2533 ('systemctl') (unit session-7.scope)... Jan 30 13:03:26.398966 systemd[1]: Reloading... Jan 30 13:03:26.478401 zram_generator::config[2572]: No configuration found. Jan 30 13:03:26.571570 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:03:26.638442 systemd[1]: Reloading finished in 239 ms. 
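
The reflector.go:547 warnings scattered through the entries above are client-go informers inside the kubelet: each one LISTs its resource (Service, Node, CSIDriver, RuntimeClass), then WATCHes it, and retries with backoff while the API server still refuses connections. The same machinery, reduced to a single informer; the kubeconfig path is again an assumption.

package main

import (
	"log"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig") // assumed
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	svcInformer := factory.Core().V1().Services().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	// Blocks until the initial LIST succeeds; while the API server is down,
	// this is the phase that logs "failed to list *v1.Service".
	if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
		log.Fatal("cache never synced")
	}
	log.Println("services synced")
}
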
Jan 30 13:03:26.673437 kubelet[2254]: I0130 13:03:26.673240 2254 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:03:26.673876 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:03:26.691255 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:03:26.691681 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:03:26.691890 systemd[1]: kubelet.service: Consumed 1.577s CPU time, 118.6M memory peak, 0B memory swap peak. Jan 30 13:03:26.701900 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:03:26.808952 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:03:26.814683 (kubelet)[2614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:03:26.859719 kubelet[2614]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:03:26.860822 kubelet[2614]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:03:26.860822 kubelet[2614]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:03:26.860822 kubelet[2614]: I0130 13:03:26.859823 2614 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:03:26.867494 kubelet[2614]: I0130 13:03:26.867447 2614 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:03:26.867494 kubelet[2614]: I0130 13:03:26.867475 2614 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:03:26.867719 kubelet[2614]: I0130 13:03:26.867693 2614 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:03:26.869195 kubelet[2614]: I0130 13:03:26.869157 2614 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:03:26.870522 kubelet[2614]: I0130 13:03:26.870476 2614 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:03:26.879644 kubelet[2614]: I0130 13:03:26.879612 2614 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:03:26.882396 kubelet[2614]: I0130 13:03:26.880138 2614 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:03:26.882396 kubelet[2614]: I0130 13:03:26.880178 2614 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:03:26.882396 kubelet[2614]: I0130 13:03:26.880357 2614 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:03:26.882396 kubelet[2614]: I0130 13:03:26.880391 2614 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:03:26.882396 kubelet[2614]: I0130 13:03:26.880428 2614 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:03:26.882636 kubelet[2614]: I0130 13:03:26.880535 2614 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:03:26.882636 kubelet[2614]: I0130 13:03:26.880549 2614 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:03:26.882636 kubelet[2614]: I0130 13:03:26.880576 2614 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:03:26.882636 kubelet[2614]: I0130 13:03:26.880589 2614 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:03:26.882636 kubelet[2614]: I0130 13:03:26.881506 2614 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 13:03:26.882636 kubelet[2614]: I0130 13:03:26.881663 2614 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:03:26.882636 kubelet[2614]: I0130 13:03:26.882070 2614 server.go:1264] "Started kubelet" Jan 30 13:03:26.883696 kubelet[2614]: I0130 13:03:26.883671 2614 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:03:26.886244 kubelet[2614]: I0130 13:03:26.886192 2614 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:03:26.886814 kubelet[2614]: I0130 13:03:26.886753 2614 ratelimit.go:55] "Setting rate limiting 
for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:03:26.887071 kubelet[2614]: I0130 13:03:26.887053 2614 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:03:26.887616 kubelet[2614]: I0130 13:03:26.887598 2614 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:03:26.887857 kubelet[2614]: I0130 13:03:26.887836 2614 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:03:26.888057 kubelet[2614]: I0130 13:03:26.888034 2614 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:03:26.888602 kubelet[2614]: I0130 13:03:26.888571 2614 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:03:26.900423 kubelet[2614]: E0130 13:03:26.900392 2614 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:03:26.900700 kubelet[2614]: I0130 13:03:26.900675 2614 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:03:26.900805 kubelet[2614]: I0130 13:03:26.900771 2614 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:03:26.907389 kubelet[2614]: I0130 13:03:26.907333 2614 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:03:26.908473 kubelet[2614]: I0130 13:03:26.907609 2614 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:03:26.911721 kubelet[2614]: I0130 13:03:26.911679 2614 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:03:26.911813 kubelet[2614]: I0130 13:03:26.911733 2614 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:03:26.911813 kubelet[2614]: I0130 13:03:26.911753 2614 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:03:26.911866 kubelet[2614]: E0130 13:03:26.911809 2614 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:03:26.951725 kubelet[2614]: I0130 13:03:26.951630 2614 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:03:26.951885 kubelet[2614]: I0130 13:03:26.951868 2614 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:03:26.951961 kubelet[2614]: I0130 13:03:26.951952 2614 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:03:26.952222 kubelet[2614]: I0130 13:03:26.952206 2614 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:03:26.952324 kubelet[2614]: I0130 13:03:26.952299 2614 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:03:26.952412 kubelet[2614]: I0130 13:03:26.952403 2614 policy_none.go:49] "None policy: Start" Jan 30 13:03:26.953284 kubelet[2614]: I0130 13:03:26.953268 2614 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:03:26.953399 kubelet[2614]: I0130 13:03:26.953388 2614 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:03:26.953619 kubelet[2614]: I0130 13:03:26.953607 2614 state_mem.go:75] "Updated machine memory state" Jan 30 13:03:26.958587 kubelet[2614]: I0130 13:03:26.958533 2614 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:03:26.958795 
kubelet[2614]: I0130 13:03:26.958750 2614 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:03:26.958872 kubelet[2614]: I0130 13:03:26.958857 2614 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:03:26.991157 kubelet[2614]: I0130 13:03:26.991120 2614 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:03:26.999730 kubelet[2614]: I0130 13:03:26.999692 2614 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 30 13:03:26.999874 kubelet[2614]: I0130 13:03:26.999793 2614 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 30 13:03:27.011919 kubelet[2614]: I0130 13:03:27.011879 2614 topology_manager.go:215] "Topology Admit Handler" podUID="8a149a2d54ba89a2973e9df992ec02f3" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 30 13:03:27.012051 kubelet[2614]: I0130 13:03:27.011983 2614 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 30 13:03:27.012051 kubelet[2614]: I0130 13:03:27.012019 2614 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 30 13:03:27.089247 kubelet[2614]: I0130 13:03:27.089211 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:03:27.089247 kubelet[2614]: I0130 13:03:27.089249 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:03:27.089436 kubelet[2614]: I0130 13:03:27.089273 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:03:27.089436 kubelet[2614]: I0130 13:03:27.089318 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:03:27.089436 kubelet[2614]: I0130 13:03:27.089378 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a149a2d54ba89a2973e9df992ec02f3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8a149a2d54ba89a2973e9df992ec02f3\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:03:27.089436 kubelet[2614]: I0130 13:03:27.089400 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/8a149a2d54ba89a2973e9df992ec02f3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8a149a2d54ba89a2973e9df992ec02f3\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:03:27.089436 kubelet[2614]: I0130 13:03:27.089422 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a149a2d54ba89a2973e9df992ec02f3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8a149a2d54ba89a2973e9df992ec02f3\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:03:27.089547 kubelet[2614]: I0130 13:03:27.089441 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:03:27.089547 kubelet[2614]: I0130 13:03:27.089464 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:03:27.345617 kubelet[2614]: E0130 13:03:27.345413 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:27.345617 kubelet[2614]: E0130 13:03:27.345456 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:27.347110 kubelet[2614]: E0130 13:03:27.347081 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:27.409943 sudo[2651]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 13:03:27.410214 sudo[2651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 13:03:27.845017 sudo[2651]: pam_unix(sudo:session): session closed for user root Jan 30 13:03:27.882179 kubelet[2614]: I0130 13:03:27.881925 2614 apiserver.go:52] "Watching apiserver" Jan 30 13:03:27.888298 kubelet[2614]: I0130 13:03:27.888258 2614 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:03:27.934736 kubelet[2614]: E0130 13:03:27.933990 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:27.942378 kubelet[2614]: E0130 13:03:27.942291 2614 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 30 13:03:27.944033 kubelet[2614]: E0130 13:03:27.942970 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:27.946691 kubelet[2614]: E0130 13:03:27.946215 2614 kubelet.go:1928] "Failed creating a mirror pod for" err="pods 
\"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 30 13:03:27.946691 kubelet[2614]: E0130 13:03:27.946628 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:27.974361 kubelet[2614]: I0130 13:03:27.974304 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.974289575 podStartE2EDuration="974.289575ms" podCreationTimestamp="2025-01-30 13:03:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:03:27.974017901 +0000 UTC m=+1.155906422" watchObservedRunningTime="2025-01-30 13:03:27.974289575 +0000 UTC m=+1.156178056" Jan 30 13:03:27.992587 kubelet[2614]: I0130 13:03:27.992245 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.99222684 podStartE2EDuration="992.22684ms" podCreationTimestamp="2025-01-30 13:03:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:03:27.992062123 +0000 UTC m=+1.173950644" watchObservedRunningTime="2025-01-30 13:03:27.99222684 +0000 UTC m=+1.174115321" Jan 30 13:03:28.940219 kubelet[2614]: E0130 13:03:28.938679 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:28.940219 kubelet[2614]: E0130 13:03:28.938817 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:30.085284 sudo[1625]: pam_unix(sudo:session): session closed for user root Jan 30 13:03:30.087382 sshd[1624]: Connection closed by 10.0.0.1 port 48550 Jan 30 13:03:30.087908 sshd-session[1621]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:30.090820 systemd[1]: sshd@6-10.0.0.100:22-10.0.0.1:48550.service: Deactivated successfully. Jan 30 13:03:30.092503 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:03:30.092732 systemd[1]: session-7.scope: Consumed 8.666s CPU time, 190.7M memory peak, 0B memory swap peak. Jan 30 13:03:30.094343 systemd-logind[1426]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:03:30.095271 systemd-logind[1426]: Removed session 7. 
Jan 30 13:03:33.807045 kubelet[2614]: E0130 13:03:33.806996 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:33.828185 kubelet[2614]: I0130 13:03:33.828119 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.828104691 podStartE2EDuration="6.828104691s" podCreationTimestamp="2025-01-30 13:03:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:03:28.00431055 +0000 UTC m=+1.186199071" watchObservedRunningTime="2025-01-30 13:03:33.828104691 +0000 UTC m=+7.009993212" Jan 30 13:03:33.945120 kubelet[2614]: E0130 13:03:33.945087 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:34.022730 kubelet[2614]: E0130 13:03:34.021886 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:34.946204 kubelet[2614]: E0130 13:03:34.946121 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:37.444273 kubelet[2614]: E0130 13:03:37.442641 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:39.470957 update_engine[1431]: I20250130 13:03:39.470876 1431 update_attempter.cc:509] Updating boot flags... Jan 30 13:03:39.558479 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2701) Jan 30 13:03:39.612439 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2699) Jan 30 13:03:39.666409 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2699) Jan 30 13:03:43.024278 kubelet[2614]: I0130 13:03:43.024243 2614 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:03:43.030463 containerd[1446]: time="2025-01-30T13:03:43.030361606Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:03:43.030940 kubelet[2614]: I0130 13:03:43.030769 2614 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:03:43.933745 kubelet[2614]: I0130 13:03:43.933477 2614 topology_manager.go:215] "Topology Admit Handler" podUID="d4b2ba3a-02d3-41ab-adc1-33724022b138" podNamespace="kube-system" podName="kube-proxy-m52k4" Jan 30 13:03:43.956934 kubelet[2614]: I0130 13:03:43.956888 2614 topology_manager.go:215] "Topology Admit Handler" podUID="42faffb4-debe-4df5-9510-5009476b4235" podNamespace="kube-system" podName="cilium-bstts" Jan 30 13:03:43.970487 systemd[1]: Created slice kubepods-besteffort-podd4b2ba3a_02d3_41ab_adc1_33724022b138.slice - libcontainer container kubepods-besteffort-podd4b2ba3a_02d3_41ab_adc1_33724022b138.slice. 
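
"Updating runtime config through cri with podcidr" is the kubelet handing the node's pod CIDR to the container runtime over CRI; containerd answers with the "No cni config template is specified" line because nothing consumes the CIDR until a CNI plugin (here Cilium, whose pods are admitted immediately after) drops its config. The update itself is one RPC:

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	_, err = rt.UpdateRuntimeConfig(context.Background(), &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"}, // CIDR from the log
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
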
Jan 30 13:03:43.991605 kubelet[2614]: I0130 13:03:43.991352 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-etc-cni-netd\") pod \"cilium-bstts\" (UID: \"42faffb4-debe-4df5-9510-5009476b4235\") " pod="kube-system/cilium-bstts" Jan 30 13:03:43.991605 kubelet[2614]: I0130 13:03:43.991430 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-lib-modules\") pod \"cilium-bstts\" (UID: \"42faffb4-debe-4df5-9510-5009476b4235\") " pod="kube-system/cilium-bstts" Jan 30 13:03:43.991605 kubelet[2614]: I0130 13:03:43.991513 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-xtables-lock\") pod \"cilium-bstts\" (UID: \"42faffb4-debe-4df5-9510-5009476b4235\") " pod="kube-system/cilium-bstts" Jan 30 13:03:43.992063 kubelet[2614]: I0130 13:03:43.991926 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d4b2ba3a-02d3-41ab-adc1-33724022b138-kube-proxy\") pod \"kube-proxy-m52k4\" (UID: \"d4b2ba3a-02d3-41ab-adc1-33724022b138\") " pod="kube-system/kube-proxy-m52k4" Jan 30 13:03:43.992623 kubelet[2614]: I0130 13:03:43.992427 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-bpf-maps\") pod \"cilium-bstts\" (UID: \"42faffb4-debe-4df5-9510-5009476b4235\") " pod="kube-system/cilium-bstts" Jan 30 13:03:43.992623 kubelet[2614]: I0130 13:03:43.992509 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4b2ba3a-02d3-41ab-adc1-33724022b138-lib-modules\") pod \"kube-proxy-m52k4\" (UID: \"d4b2ba3a-02d3-41ab-adc1-33724022b138\") " pod="kube-system/kube-proxy-m52k4" Jan 30 13:03:43.992623 kubelet[2614]: I0130 13:03:43.992531 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xztqs\" (UniqueName: \"kubernetes.io/projected/d4b2ba3a-02d3-41ab-adc1-33724022b138-kube-api-access-xztqs\") pod \"kube-proxy-m52k4\" (UID: \"d4b2ba3a-02d3-41ab-adc1-33724022b138\") " pod="kube-system/kube-proxy-m52k4" Jan 30 13:03:43.992623 kubelet[2614]: I0130 13:03:43.992561 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-cilium-run\") pod \"cilium-bstts\" (UID: \"42faffb4-debe-4df5-9510-5009476b4235\") " pod="kube-system/cilium-bstts" Jan 30 13:03:43.992623 kubelet[2614]: I0130 13:03:43.992579 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-cni-path\") pod \"cilium-bstts\" (UID: \"42faffb4-debe-4df5-9510-5009476b4235\") " pod="kube-system/cilium-bstts" Jan 30 13:03:43.992623 kubelet[2614]: I0130 13:03:43.992603 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-host-proc-sys-net\") pod \"cilium-bstts\" (UID: \"42faffb4-debe-4df5-9510-5009476b4235\") " pod="kube-system/cilium-bstts" Jan 30 13:03:43.993048 kubelet[2614]: I0130 13:03:43.992623 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmpf5\" (UniqueName: \"kubernetes.io/projected/42faffb4-debe-4df5-9510-5009476b4235-kube-api-access-jmpf5\") pod \"cilium-bstts\" (UID: \"42faffb4-debe-4df5-9510-5009476b4235\") " pod="kube-system/cilium-bstts" Jan 30 13:03:43.993048 kubelet[2614]: I0130 13:03:43.992640 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-hostproc\") pod \"cilium-bstts\" (UID: \"42faffb4-debe-4df5-9510-5009476b4235\") " pod="kube-system/cilium-bstts" Jan 30 13:03:43.993048 kubelet[2614]: I0130 13:03:43.992666 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/42faffb4-debe-4df5-9510-5009476b4235-clustermesh-secrets\") pod \"cilium-bstts\" (UID: \"42faffb4-debe-4df5-9510-5009476b4235\") " pod="kube-system/cilium-bstts" Jan 30 13:03:43.993048 kubelet[2614]: I0130 13:03:43.992683 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-host-proc-sys-kernel\") pod \"cilium-bstts\" (UID: \"42faffb4-debe-4df5-9510-5009476b4235\") " pod="kube-system/cilium-bstts" Jan 30 13:03:43.993048 kubelet[2614]: I0130 13:03:43.992698 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4b2ba3a-02d3-41ab-adc1-33724022b138-xtables-lock\") pod \"kube-proxy-m52k4\" (UID: \"d4b2ba3a-02d3-41ab-adc1-33724022b138\") " pod="kube-system/kube-proxy-m52k4" Jan 30 13:03:43.993357 kubelet[2614]: I0130 13:03:43.992715 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-cilium-cgroup\") pod \"cilium-bstts\" (UID: \"42faffb4-debe-4df5-9510-5009476b4235\") " pod="kube-system/cilium-bstts" Jan 30 13:03:43.993357 kubelet[2614]: I0130 13:03:43.992730 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42faffb4-debe-4df5-9510-5009476b4235-cilium-config-path\") pod \"cilium-bstts\" (UID: \"42faffb4-debe-4df5-9510-5009476b4235\") " pod="kube-system/cilium-bstts" Jan 30 13:03:43.993357 kubelet[2614]: I0130 13:03:43.992746 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/42faffb4-debe-4df5-9510-5009476b4235-hubble-tls\") pod \"cilium-bstts\" (UID: \"42faffb4-debe-4df5-9510-5009476b4235\") " pod="kube-system/cilium-bstts" Jan 30 13:03:44.000090 systemd[1]: Created slice kubepods-burstable-pod42faffb4_debe_4df5_9510_5009476b4235.slice - libcontainer container kubepods-burstable-pod42faffb4_debe_4df5_9510_5009476b4235.slice. 
Jan 30 13:03:44.002217 kubelet[2614]: I0130 13:03:44.002156 2614 topology_manager.go:215] "Topology Admit Handler" podUID="3e222a86-ac88-4096-ac1e-0eed30cf8a29" podNamespace="kube-system" podName="cilium-operator-599987898-gf5lr" Jan 30 13:03:44.013278 systemd[1]: Created slice kubepods-besteffort-pod3e222a86_ac88_4096_ac1e_0eed30cf8a29.slice - libcontainer container kubepods-besteffort-pod3e222a86_ac88_4096_ac1e_0eed30cf8a29.slice. Jan 30 13:03:44.093352 kubelet[2614]: I0130 13:03:44.093232 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e222a86-ac88-4096-ac1e-0eed30cf8a29-cilium-config-path\") pod \"cilium-operator-599987898-gf5lr\" (UID: \"3e222a86-ac88-4096-ac1e-0eed30cf8a29\") " pod="kube-system/cilium-operator-599987898-gf5lr" Jan 30 13:03:44.096036 kubelet[2614]: I0130 13:03:44.096008 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5b7m\" (UniqueName: \"kubernetes.io/projected/3e222a86-ac88-4096-ac1e-0eed30cf8a29-kube-api-access-l5b7m\") pod \"cilium-operator-599987898-gf5lr\" (UID: \"3e222a86-ac88-4096-ac1e-0eed30cf8a29\") " pod="kube-system/cilium-operator-599987898-gf5lr" Jan 30 13:03:44.291661 kubelet[2614]: E0130 13:03:44.291352 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:44.296545 containerd[1446]: time="2025-01-30T13:03:44.296498738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m52k4,Uid:d4b2ba3a-02d3-41ab-adc1-33724022b138,Namespace:kube-system,Attempt:0,}" Jan 30 13:03:44.305903 kubelet[2614]: E0130 13:03:44.305855 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:44.307843 containerd[1446]: time="2025-01-30T13:03:44.307278903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bstts,Uid:42faffb4-debe-4df5-9510-5009476b4235,Namespace:kube-system,Attempt:0,}" Jan 30 13:03:44.319517 kubelet[2614]: E0130 13:03:44.319482 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:44.320247 containerd[1446]: time="2025-01-30T13:03:44.320009174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:03:44.320247 containerd[1446]: time="2025-01-30T13:03:44.320078573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:03:44.320247 containerd[1446]: time="2025-01-30T13:03:44.320096533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:44.320247 containerd[1446]: time="2025-01-30T13:03:44.320222532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:44.321585 containerd[1446]: time="2025-01-30T13:03:44.321535443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-gf5lr,Uid:3e222a86-ac88-4096-ac1e-0eed30cf8a29,Namespace:kube-system,Attempt:0,}" Jan 30 13:03:44.330321 containerd[1446]: time="2025-01-30T13:03:44.330185183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:03:44.330321 containerd[1446]: time="2025-01-30T13:03:44.330287222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:03:44.330321 containerd[1446]: time="2025-01-30T13:03:44.330298542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:44.330863 containerd[1446]: time="2025-01-30T13:03:44.330801578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:44.343584 systemd[1]: Started cri-containerd-45edae0fb572ccfe9d237e6669fad16ef3b732a7c6afc44ed599e7fb5aa06360.scope - libcontainer container 45edae0fb572ccfe9d237e6669fad16ef3b732a7c6afc44ed599e7fb5aa06360. Jan 30 13:03:44.346736 systemd[1]: Started cri-containerd-a1e7a0a4f407cbf21d2565a6e9e750262b0d107a61e4ee0d8357dd5198048c7b.scope - libcontainer container a1e7a0a4f407cbf21d2565a6e9e750262b0d107a61e4ee0d8357dd5198048c7b. Jan 30 13:03:44.374957 containerd[1446]: time="2025-01-30T13:03:44.374816871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bstts,Uid:42faffb4-debe-4df5-9510-5009476b4235,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1e7a0a4f407cbf21d2565a6e9e750262b0d107a61e4ee0d8357dd5198048c7b\"" Jan 30 13:03:44.381901 kubelet[2614]: E0130 13:03:44.381872 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:44.382361 containerd[1446]: time="2025-01-30T13:03:44.379107641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m52k4,Uid:d4b2ba3a-02d3-41ab-adc1-33724022b138,Namespace:kube-system,Attempt:0,} returns sandbox id \"45edae0fb572ccfe9d237e6669fad16ef3b732a7c6afc44ed599e7fb5aa06360\"" Jan 30 13:03:44.383665 kubelet[2614]: E0130 13:03:44.383645 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:44.383951 containerd[1446]: time="2025-01-30T13:03:44.383824768Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 13:03:44.387756 containerd[1446]: time="2025-01-30T13:03:44.387477742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:03:44.387756 containerd[1446]: time="2025-01-30T13:03:44.387543182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:03:44.387756 containerd[1446]: time="2025-01-30T13:03:44.387563421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:44.387756 containerd[1446]: time="2025-01-30T13:03:44.387653021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:44.391860 containerd[1446]: time="2025-01-30T13:03:44.391739232Z" level=info msg="CreateContainer within sandbox \"45edae0fb572ccfe9d237e6669fad16ef3b732a7c6afc44ed599e7fb5aa06360\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:03:44.405564 systemd[1]: Started cri-containerd-4b07723a00c1454ed3aa5cbf1dca2b5aa3827a151ec882880c0e28190342a0ce.scope - libcontainer container 4b07723a00c1454ed3aa5cbf1dca2b5aa3827a151ec882880c0e28190342a0ce. Jan 30 13:03:44.408882 containerd[1446]: time="2025-01-30T13:03:44.408774993Z" level=info msg="CreateContainer within sandbox \"45edae0fb572ccfe9d237e6669fad16ef3b732a7c6afc44ed599e7fb5aa06360\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a6640e4763b82fe8a5fa478bc0fcd68b56e8dcc3476c499352adbcc2ac01849a\"" Jan 30 13:03:44.411670 containerd[1446]: time="2025-01-30T13:03:44.411587293Z" level=info msg="StartContainer for \"a6640e4763b82fe8a5fa478bc0fcd68b56e8dcc3476c499352adbcc2ac01849a\"" Jan 30 13:03:44.442926 systemd[1]: Started cri-containerd-a6640e4763b82fe8a5fa478bc0fcd68b56e8dcc3476c499352adbcc2ac01849a.scope - libcontainer container a6640e4763b82fe8a5fa478bc0fcd68b56e8dcc3476c499352adbcc2ac01849a. Jan 30 13:03:44.443260 containerd[1446]: time="2025-01-30T13:03:44.443110593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-gf5lr,Uid:3e222a86-ac88-4096-ac1e-0eed30cf8a29,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b07723a00c1454ed3aa5cbf1dca2b5aa3827a151ec882880c0e28190342a0ce\"" Jan 30 13:03:44.444444 kubelet[2614]: E0130 13:03:44.444347 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:44.480889 containerd[1446]: time="2025-01-30T13:03:44.480843689Z" level=info msg="StartContainer for \"a6640e4763b82fe8a5fa478bc0fcd68b56e8dcc3476c499352adbcc2ac01849a\" returns successfully" Jan 30 13:03:44.976170 kubelet[2614]: E0130 13:03:44.976132 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:49.963552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2099260671.mount: Deactivated successfully. 
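The kube-proxy bring-up recorded above is the standard CRI call sequence: RunPodSandbox returns a sandbox id (45edae0f...), CreateContainer is issued within that sandbox and returns a container id (a6640e47...), and StartContainer runs it. Below is a minimal sketch of the same three calls made directly against containerd's CRI socket. The socket path is the usual containerd default, the image reference is a placeholder (the log never names the kube-proxy image), and error handling is trimmed; this illustrates the protocol, it is not kubelet's code.

    package main

    import (
    	"context"
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Default containerd socket path; an assumption, not stated in the log.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()
    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx := context.Background()

    	// 1. RunPodSandbox -- mirrors the RunPodSandbox entries for kube-proxy-m52k4.
    	sandboxCfg := &runtimeapi.PodSandboxConfig{
    		Metadata: &runtimeapi.PodSandboxMetadata{
    			Name:      "kube-proxy-m52k4",
    			Uid:       "d4b2ba3a-02d3-41ab-adc1-33724022b138",
    			Namespace: "kube-system",
    		},
    	}
    	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
    	if err != nil {
    		log.Fatal(err)
    	}

    	// 2. CreateContainer within the returned sandbox id.
    	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
    		PodSandboxId:  sb.PodSandboxId,
    		SandboxConfig: sandboxCfg,
    		Config: &runtimeapi.ContainerConfig{
    			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
    			// Placeholder image reference; the log does not name this image.
    			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.30.0"},
    		},
    	})
    	if err != nil {
    		log.Fatal(err)
    	}

    	// 3. StartContainer -- corresponds to "StartContainer ... returns successfully".
    	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
    		ContainerId: cc.ContainerId,
    	}); err != nil {
    		log.Fatal(err)
    	}
    }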
Jan 30 13:03:51.259864 containerd[1446]: time="2025-01-30T13:03:51.259808935Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:51.260820 containerd[1446]: time="2025-01-30T13:03:51.260529252Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 30 13:03:51.261492 containerd[1446]: time="2025-01-30T13:03:51.261436288Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:51.263408 containerd[1446]: time="2025-01-30T13:03:51.263346120Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.879481953s" Jan 30 13:03:51.263472 containerd[1446]: time="2025-01-30T13:03:51.263410839Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 30 13:03:51.266791 containerd[1446]: time="2025-01-30T13:03:51.266405986Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 13:03:51.271672 containerd[1446]: time="2025-01-30T13:03:51.271614643Z" level=info msg="CreateContainer within sandbox \"a1e7a0a4f407cbf21d2565a6e9e750262b0d107a61e4ee0d8357dd5198048c7b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:03:51.296116 containerd[1446]: time="2025-01-30T13:03:51.296068254Z" level=info msg="CreateContainer within sandbox \"a1e7a0a4f407cbf21d2565a6e9e750262b0d107a61e4ee0d8357dd5198048c7b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"771810d437116d8ff467b425c1aca007a12449d9af4c865a2999a091a1dad1f7\"" Jan 30 13:03:51.296953 containerd[1446]: time="2025-01-30T13:03:51.296826291Z" level=info msg="StartContainer for \"771810d437116d8ff467b425c1aca007a12449d9af4c865a2999a091a1dad1f7\"" Jan 30 13:03:51.332600 systemd[1]: Started cri-containerd-771810d437116d8ff467b425c1aca007a12449d9af4c865a2999a091a1dad1f7.scope - libcontainer container 771810d437116d8ff467b425c1aca007a12449d9af4c865a2999a091a1dad1f7. Jan 30 13:03:51.359158 containerd[1446]: time="2025-01-30T13:03:51.359113573Z" level=info msg="StartContainer for \"771810d437116d8ff467b425c1aca007a12449d9af4c865a2999a091a1dad1f7\" returns successfully" Jan 30 13:03:51.402816 systemd[1]: cri-containerd-771810d437116d8ff467b425c1aca007a12449d9af4c865a2999a091a1dad1f7.scope: Deactivated successfully. 
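For scale: the cilium image pull above reports 157,646,710 bytes read in 6.879481953s, roughly 22.9 MB/s of effective registry throughput (bytes read counts compressed layer data fetched during the pull window, so this is an approximation, not exact link speed). The arithmetic, using the two figures from the log:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const bytesRead = 157646710 // from the "stop pulling image" event above
    	d, _ := time.ParseDuration("6.879481953s") // from the "Pulled image ... in" event
    	fmt.Printf("effective pull throughput: %.1f MB/s\n",
    		float64(bytesRead)/d.Seconds()/1e6) // prints ~22.9 MB/s
    }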
Jan 30 13:03:51.552149 containerd[1446]: time="2025-01-30T13:03:51.547926973Z" level=info msg="shim disconnected" id=771810d437116d8ff467b425c1aca007a12449d9af4c865a2999a091a1dad1f7 namespace=k8s.io Jan 30 13:03:51.552149 containerd[1446]: time="2025-01-30T13:03:51.552148474Z" level=warning msg="cleaning up after shim disconnected" id=771810d437116d8ff467b425c1aca007a12449d9af4c865a2999a091a1dad1f7 namespace=k8s.io Jan 30 13:03:51.552395 containerd[1446]: time="2025-01-30T13:03:51.552167594Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:03:52.012565 kubelet[2614]: E0130 13:03:52.011943 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:52.015413 containerd[1446]: time="2025-01-30T13:03:52.015355136Z" level=info msg="CreateContainer within sandbox \"a1e7a0a4f407cbf21d2565a6e9e750262b0d107a61e4ee0d8357dd5198048c7b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:03:52.074157 kubelet[2614]: I0130 13:03:52.074066 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m52k4" podStartSLOduration=9.074043411 podStartE2EDuration="9.074043411s" podCreationTimestamp="2025-01-30 13:03:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:03:44.987054229 +0000 UTC m=+18.168942750" watchObservedRunningTime="2025-01-30 13:03:52.074043411 +0000 UTC m=+25.255931932" Jan 30 13:03:52.084593 containerd[1446]: time="2025-01-30T13:03:52.084528327Z" level=info msg="CreateContainer within sandbox \"a1e7a0a4f407cbf21d2565a6e9e750262b0d107a61e4ee0d8357dd5198048c7b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d9f0ffbd29a6e6bf0527be34298ed756d4a0dd70c117687387e4a38721abff22\"" Jan 30 13:03:52.085476 containerd[1446]: time="2025-01-30T13:03:52.085067605Z" level=info msg="StartContainer for \"d9f0ffbd29a6e6bf0527be34298ed756d4a0dd70c117687387e4a38721abff22\"" Jan 30 13:03:52.113590 systemd[1]: Started cri-containerd-d9f0ffbd29a6e6bf0527be34298ed756d4a0dd70c117687387e4a38721abff22.scope - libcontainer container d9f0ffbd29a6e6bf0527be34298ed756d4a0dd70c117687387e4a38721abff22. Jan 30 13:03:52.144918 containerd[1446]: time="2025-01-30T13:03:52.144862395Z" level=info msg="StartContainer for \"d9f0ffbd29a6e6bf0527be34298ed756d4a0dd70c117687387e4a38721abff22\" returns successfully" Jan 30 13:03:52.170239 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:03:52.170466 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:03:52.170546 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:03:52.177765 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:03:52.177967 systemd[1]: cri-containerd-d9f0ffbd29a6e6bf0527be34298ed756d4a0dd70c117687387e4a38721abff22.scope: Deactivated successfully. Jan 30 13:03:52.248195 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
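The dns.go:153 "Nameserver limits exceeded" errors that recur throughout this log are kubelet's resolv.conf handling at work: the glibc resolver consults at most three nameservers, so kubelet truncates the host's list to the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and raises this event on every pod sync. A minimal Go sketch of that truncation behavior; illustrative only, not kubelet source:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // maxNameservers mirrors the glibc resolver limit that kubelet enforces;
    // entries beyond the first three are dropped and a "Nameserver limits
    // exceeded" event is logged, as seen repeatedly in this journal.
    const maxNameservers = 3

    // truncateNameservers keeps only the first maxNameservers entries.
    // Hypothetical helper, named here for illustration.
    func truncateNameservers(ns []string) (kept []string, omitted bool) {
    	if len(ns) <= maxNameservers {
    		return ns, false
    	}
    	return ns[:maxNameservers], true
    }

    func main() {
    	// A host resolv.conf with four upstreams; the fourth is omitted,
    	// reproducing the applied line from the log. The fourth entry is
    	// an invented example value.
    	ns := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
    	kept, omitted := truncateNameservers(ns)
    	fmt.Printf("applied nameserver line: %s (omitted=%v)\n",
    		strings.Join(kept, " "), omitted)
    }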
Jan 30 13:03:52.271224 containerd[1446]: time="2025-01-30T13:03:52.271165828Z" level=info msg="shim disconnected" id=d9f0ffbd29a6e6bf0527be34298ed756d4a0dd70c117687387e4a38721abff22 namespace=k8s.io Jan 30 13:03:52.271224 containerd[1446]: time="2025-01-30T13:03:52.271223988Z" level=warning msg="cleaning up after shim disconnected" id=d9f0ffbd29a6e6bf0527be34298ed756d4a0dd70c117687387e4a38721abff22 namespace=k8s.io Jan 30 13:03:52.273362 containerd[1446]: time="2025-01-30T13:03:52.271233708Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:03:52.283938 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-771810d437116d8ff467b425c1aca007a12449d9af4c865a2999a091a1dad1f7-rootfs.mount: Deactivated successfully. Jan 30 13:03:52.526453 containerd[1446]: time="2025-01-30T13:03:52.526322123Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:52.527412 containerd[1446]: time="2025-01-30T13:03:52.527310479Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 30 13:03:52.528064 containerd[1446]: time="2025-01-30T13:03:52.527913757Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:03:52.529940 containerd[1446]: time="2025-01-30T13:03:52.529807709Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.263362483s" Jan 30 13:03:52.529940 containerd[1446]: time="2025-01-30T13:03:52.529851429Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 30 13:03:52.532222 containerd[1446]: time="2025-01-30T13:03:52.532088379Z" level=info msg="CreateContainer within sandbox \"4b07723a00c1454ed3aa5cbf1dca2b5aa3827a151ec882880c0e28190342a0ce\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 13:03:52.545291 containerd[1446]: time="2025-01-30T13:03:52.545243004Z" level=info msg="CreateContainer within sandbox \"4b07723a00c1454ed3aa5cbf1dca2b5aa3827a151ec882880c0e28190342a0ce\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ab8c304b9cf848539498284e4ad782196a11653606f3a50279fb2fbe918908c4\"" Jan 30 13:03:52.546116 containerd[1446]: time="2025-01-30T13:03:52.546081441Z" level=info msg="StartContainer for \"ab8c304b9cf848539498284e4ad782196a11653606f3a50279fb2fbe918908c4\"" Jan 30 13:03:52.577576 systemd[1]: Started cri-containerd-ab8c304b9cf848539498284e4ad782196a11653606f3a50279fb2fbe918908c4.scope - libcontainer container ab8c304b9cf848539498284e4ad782196a11653606f3a50279fb2fbe918908c4. 
Jan 30 13:03:52.600880 containerd[1446]: time="2025-01-30T13:03:52.600836692Z" level=info msg="StartContainer for \"ab8c304b9cf848539498284e4ad782196a11653606f3a50279fb2fbe918908c4\" returns successfully" Jan 30 13:03:53.021821 kubelet[2614]: E0130 13:03:53.020303 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:53.022247 kubelet[2614]: E0130 13:03:53.021966 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:53.024605 containerd[1446]: time="2025-01-30T13:03:53.024562970Z" level=info msg="CreateContainer within sandbox \"a1e7a0a4f407cbf21d2565a6e9e750262b0d107a61e4ee0d8357dd5198048c7b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:03:53.037543 kubelet[2614]: I0130 13:03:53.037198 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-gf5lr" podStartSLOduration=1.9540473779999998 podStartE2EDuration="10.037177281s" podCreationTimestamp="2025-01-30 13:03:43 +0000 UTC" firstStartedPulling="2025-01-30 13:03:44.447400683 +0000 UTC m=+17.629289204" lastFinishedPulling="2025-01-30 13:03:52.530530586 +0000 UTC m=+25.712419107" observedRunningTime="2025-01-30 13:03:53.037159401 +0000 UTC m=+26.219047922" watchObservedRunningTime="2025-01-30 13:03:53.037177281 +0000 UTC m=+26.219065802" Jan 30 13:03:53.059020 containerd[1446]: time="2025-01-30T13:03:53.058965156Z" level=info msg="CreateContainer within sandbox \"a1e7a0a4f407cbf21d2565a6e9e750262b0d107a61e4ee0d8357dd5198048c7b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a25b925dc71fd17157fade0c99efa7e4b07dea23c7da51e23ec5a9f3cce1cbc3\"" Jan 30 13:03:53.060341 containerd[1446]: time="2025-01-30T13:03:53.060294550Z" level=info msg="StartContainer for \"a25b925dc71fd17157fade0c99efa7e4b07dea23c7da51e23ec5a9f3cce1cbc3\"" Jan 30 13:03:53.117659 systemd[1]: Started cri-containerd-a25b925dc71fd17157fade0c99efa7e4b07dea23c7da51e23ec5a9f3cce1cbc3.scope - libcontainer container a25b925dc71fd17157fade0c99efa7e4b07dea23c7da51e23ec5a9f3cce1cbc3. Jan 30 13:03:53.172630 systemd[1]: cri-containerd-a25b925dc71fd17157fade0c99efa7e4b07dea23c7da51e23ec5a9f3cce1cbc3.scope: Deactivated successfully. Jan 30 13:03:53.257789 containerd[1446]: time="2025-01-30T13:03:53.257738298Z" level=info msg="StartContainer for \"a25b925dc71fd17157fade0c99efa7e4b07dea23c7da51e23ec5a9f3cce1cbc3\" returns successfully" Jan 30 13:03:53.297521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a25b925dc71fd17157fade0c99efa7e4b07dea23c7da51e23ec5a9f3cce1cbc3-rootfs.mount: Deactivated successfully. 
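The pod_startup_latency_tracker entry above reports two figures for cilium-operator-599987898-gf5lr: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, while podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling), since pull time is excluded from the startup SLO. Both numbers reproduce exactly from the timestamps in the entry itself; the trailing digits in 1.9540473779999998 are just float64 formatting of the subtraction. A worked check:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Layout matching Go's default time.Time formatting used in the log.
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	parse := func(s string) time.Time {
    		t, err := time.Parse(layout, s)
    		if err != nil {
    			panic(err)
    		}
    		return t
    	}

    	created := parse("2025-01-30 13:03:43 +0000 UTC")
    	running := parse("2025-01-30 13:03:53.037177281 +0000 UTC")
    	pullStart := parse("2025-01-30 13:03:44.447400683 +0000 UTC")
    	pullEnd := parse("2025-01-30 13:03:52.530530586 +0000 UTC")

    	e2e := running.Sub(created)         // 10.037177281s, matching the log
    	slo := e2e - pullEnd.Sub(pullStart) // 1.954047378s, matching the log
    	fmt.Println("E2E:", e2e, "SLO:", slo)
    }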
Jan 30 13:03:53.301237 containerd[1446]: time="2025-01-30T13:03:53.301147848Z" level=info msg="shim disconnected" id=a25b925dc71fd17157fade0c99efa7e4b07dea23c7da51e23ec5a9f3cce1cbc3 namespace=k8s.io Jan 30 13:03:53.301237 containerd[1446]: time="2025-01-30T13:03:53.301221688Z" level=warning msg="cleaning up after shim disconnected" id=a25b925dc71fd17157fade0c99efa7e4b07dea23c7da51e23ec5a9f3cce1cbc3 namespace=k8s.io Jan 30 13:03:53.301237 containerd[1446]: time="2025-01-30T13:03:53.301230408Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:03:54.035949 kubelet[2614]: E0130 13:03:54.035896 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:54.038110 kubelet[2614]: E0130 13:03:54.035905 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:54.042140 containerd[1446]: time="2025-01-30T13:03:54.042094799Z" level=info msg="CreateContainer within sandbox \"a1e7a0a4f407cbf21d2565a6e9e750262b0d107a61e4ee0d8357dd5198048c7b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:03:54.091583 containerd[1446]: time="2025-01-30T13:03:54.091528257Z" level=info msg="CreateContainer within sandbox \"a1e7a0a4f407cbf21d2565a6e9e750262b0d107a61e4ee0d8357dd5198048c7b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"46146e487f7ccca4aee3467d7604b826b147dc90a1ae238103adbce2448214ca\"" Jan 30 13:03:54.092651 containerd[1446]: time="2025-01-30T13:03:54.092617773Z" level=info msg="StartContainer for \"46146e487f7ccca4aee3467d7604b826b147dc90a1ae238103adbce2448214ca\"" Jan 30 13:03:54.136414 systemd[1]: Started cri-containerd-46146e487f7ccca4aee3467d7604b826b147dc90a1ae238103adbce2448214ca.scope - libcontainer container 46146e487f7ccca4aee3467d7604b826b147dc90a1ae238103adbce2448214ca. Jan 30 13:03:54.172617 systemd[1]: cri-containerd-46146e487f7ccca4aee3467d7604b826b147dc90a1ae238103adbce2448214ca.scope: Deactivated successfully. Jan 30 13:03:54.176195 containerd[1446]: time="2025-01-30T13:03:54.176154347Z" level=info msg="StartContainer for \"46146e487f7ccca4aee3467d7604b826b147dc90a1ae238103adbce2448214ca\" returns successfully" Jan 30 13:03:54.204053 containerd[1446]: time="2025-01-30T13:03:54.203967125Z" level=info msg="shim disconnected" id=46146e487f7ccca4aee3467d7604b826b147dc90a1ae238103adbce2448214ca namespace=k8s.io Jan 30 13:03:54.204053 containerd[1446]: time="2025-01-30T13:03:54.204025285Z" level=warning msg="cleaning up after shim disconnected" id=46146e487f7ccca4aee3467d7604b826b147dc90a1ae238103adbce2448214ca namespace=k8s.io Jan 30 13:03:54.204053 containerd[1446]: time="2025-01-30T13:03:54.204036725Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:03:54.283784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46146e487f7ccca4aee3467d7604b826b147dc90a1ae238103adbce2448214ca-rootfs.mount: Deactivated successfully. 
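Each Cilium init step above (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) follows the same lifecycle: the container runs once, its systemd scope is deactivated, and containerd logs the "shim disconnected" / "cleaning up after shim disconnected" / "cleaning up dead shim" triple as the per-container shim exits and is reaped. The equivalent wait-then-delete sequence via containerd's Go client, a sketch assuming the default socket and the k8s.io namespace seen in the log:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    )

    func main() {
    	// Default socket path assumed; "k8s.io" is the namespace in the log.
    	client, err := containerd.New("/run/containerd/containerd.sock",
    		containerd.WithDefaultNamespace("k8s.io"))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	ctx := context.Background()

    	// Init-container id taken from the log (clean-cilium-state).
    	c, err := client.LoadContainer(ctx,
    		"46146e487f7ccca4aee3467d7604b826b147dc90a1ae238103adbce2448214ca")
    	if err != nil {
    		log.Fatal(err)
    	}
    	task, err := c.Task(ctx, nil)
    	if err != nil {
    		log.Fatal(err)
    	}
    	statusC, err := task.Wait(ctx) // blocks until the one-shot init step exits
    	if err != nil {
    		log.Fatal(err)
    	}
    	status := <-statusC
    	code, _, _ := status.Result()
    	fmt.Println("exit code:", code)

    	// Deleting the task is what tears the shim down ("cleaning up dead shim").
    	if _, err := task.Delete(ctx); err != nil {
    		log.Fatal(err)
    	}
    }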
Jan 30 13:03:55.041851 kubelet[2614]: E0130 13:03:55.041611 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:55.046449 containerd[1446]: time="2025-01-30T13:03:55.046404485Z" level=info msg="CreateContainer within sandbox \"a1e7a0a4f407cbf21d2565a6e9e750262b0d107a61e4ee0d8357dd5198048c7b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:03:55.055771 systemd[1]: Started sshd@7-10.0.0.100:22-10.0.0.1:43888.service - OpenSSH per-connection server daemon (10.0.0.1:43888). Jan 30 13:03:55.071933 containerd[1446]: time="2025-01-30T13:03:55.071886558Z" level=info msg="CreateContainer within sandbox \"a1e7a0a4f407cbf21d2565a6e9e750262b0d107a61e4ee0d8357dd5198048c7b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"adf4f8fdf38801edc21194b46a8a2d51cdadf7d01761cdcac905d936cd69b39d\"" Jan 30 13:03:55.073974 containerd[1446]: time="2025-01-30T13:03:55.072459436Z" level=info msg="StartContainer for \"adf4f8fdf38801edc21194b46a8a2d51cdadf7d01761cdcac905d936cd69b39d\"" Jan 30 13:03:55.100581 systemd[1]: Started cri-containerd-adf4f8fdf38801edc21194b46a8a2d51cdadf7d01761cdcac905d936cd69b39d.scope - libcontainer container adf4f8fdf38801edc21194b46a8a2d51cdadf7d01761cdcac905d936cd69b39d. Jan 30 13:03:55.130920 containerd[1446]: time="2025-01-30T13:03:55.130873435Z" level=info msg="StartContainer for \"adf4f8fdf38801edc21194b46a8a2d51cdadf7d01761cdcac905d936cd69b39d\" returns successfully" Jan 30 13:03:55.131764 sshd[3315]: Accepted publickey for core from 10.0.0.1 port 43888 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:03:55.133224 sshd-session[3315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:55.139627 systemd-logind[1426]: New session 8 of user core. Jan 30 13:03:55.148567 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:03:55.247687 kubelet[2614]: I0130 13:03:55.246564 2614 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 13:03:55.296868 kubelet[2614]: I0130 13:03:55.292151 2614 topology_manager.go:215] "Topology Admit Handler" podUID="c4ab6b27-5631-43cc-983b-d658210322e8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9sr89" Jan 30 13:03:55.296868 kubelet[2614]: I0130 13:03:55.293275 2614 topology_manager.go:215] "Topology Admit Handler" podUID="0552e283-6285-4240-8dc2-8749fb1ffe38" podNamespace="kube-system" podName="coredns-7db6d8ff4d-mlnzp" Jan 30 13:03:55.316635 systemd[1]: Created slice kubepods-burstable-podc4ab6b27_5631_43cc_983b_d658210322e8.slice - libcontainer container kubepods-burstable-podc4ab6b27_5631_43cc_983b_d658210322e8.slice. Jan 30 13:03:55.324907 systemd[1]: Created slice kubepods-burstable-pod0552e283_6285_4240_8dc2_8749fb1ffe38.slice - libcontainer container kubepods-burstable-pod0552e283_6285_4240_8dc2_8749fb1ffe38.slice. Jan 30 13:03:55.354059 sshd[3351]: Connection closed by 10.0.0.1 port 43888 Jan 30 13:03:55.354448 sshd-session[3315]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:55.357404 systemd-logind[1426]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:03:55.357574 systemd[1]: sshd@7-10.0.0.100:22-10.0.0.1:43888.service: Deactivated successfully. Jan 30 13:03:55.359514 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:03:55.361138 systemd-logind[1426]: Removed session 8. 
Jan 30 13:03:55.397690 kubelet[2614]: I0130 13:03:55.397650 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxdmx\" (UniqueName: \"kubernetes.io/projected/c4ab6b27-5631-43cc-983b-d658210322e8-kube-api-access-nxdmx\") pod \"coredns-7db6d8ff4d-9sr89\" (UID: \"c4ab6b27-5631-43cc-983b-d658210322e8\") " pod="kube-system/coredns-7db6d8ff4d-9sr89" Jan 30 13:03:55.397690 kubelet[2614]: I0130 13:03:55.397693 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0552e283-6285-4240-8dc2-8749fb1ffe38-config-volume\") pod \"coredns-7db6d8ff4d-mlnzp\" (UID: \"0552e283-6285-4240-8dc2-8749fb1ffe38\") " pod="kube-system/coredns-7db6d8ff4d-mlnzp" Jan 30 13:03:55.397690 kubelet[2614]: I0130 13:03:55.397725 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxld2\" (UniqueName: \"kubernetes.io/projected/0552e283-6285-4240-8dc2-8749fb1ffe38-kube-api-access-wxld2\") pod \"coredns-7db6d8ff4d-mlnzp\" (UID: \"0552e283-6285-4240-8dc2-8749fb1ffe38\") " pod="kube-system/coredns-7db6d8ff4d-mlnzp" Jan 30 13:03:55.397690 kubelet[2614]: I0130 13:03:55.397744 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4ab6b27-5631-43cc-983b-d658210322e8-config-volume\") pod \"coredns-7db6d8ff4d-9sr89\" (UID: \"c4ab6b27-5631-43cc-983b-d658210322e8\") " pod="kube-system/coredns-7db6d8ff4d-9sr89" Jan 30 13:03:55.621183 kubelet[2614]: E0130 13:03:55.621066 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:55.622208 containerd[1446]: time="2025-01-30T13:03:55.621897946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9sr89,Uid:c4ab6b27-5631-43cc-983b-d658210322e8,Namespace:kube-system,Attempt:0,}" Jan 30 13:03:55.629380 kubelet[2614]: E0130 13:03:55.629344 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:55.630561 containerd[1446]: time="2025-01-30T13:03:55.630344037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mlnzp,Uid:0552e283-6285-4240-8dc2-8749fb1ffe38,Namespace:kube-system,Attempt:0,}" Jan 30 13:03:56.051459 kubelet[2614]: E0130 13:03:56.051423 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:56.067558 kubelet[2614]: I0130 13:03:56.067477 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bstts" podStartSLOduration=6.184653492 podStartE2EDuration="13.067460748s" podCreationTimestamp="2025-01-30 13:03:43 +0000 UTC" firstStartedPulling="2025-01-30 13:03:44.383398851 +0000 UTC m=+17.565287372" lastFinishedPulling="2025-01-30 13:03:51.266206147 +0000 UTC m=+24.448094628" observedRunningTime="2025-01-30 13:03:56.065938193 +0000 UTC m=+29.247826714" watchObservedRunningTime="2025-01-30 13:03:56.067460748 +0000 UTC m=+29.249349269" Jan 30 13:03:57.060748 kubelet[2614]: E0130 13:03:57.060706 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:57.317447 systemd-networkd[1392]: cilium_host: Link UP Jan 30 13:03:57.317831 systemd-networkd[1392]: cilium_net: Link UP Jan 30 13:03:57.317834 systemd-networkd[1392]: cilium_net: Gained carrier Jan 30 13:03:57.318002 systemd-networkd[1392]: cilium_host: Gained carrier Jan 30 13:03:57.419511 systemd-networkd[1392]: cilium_vxlan: Link UP Jan 30 13:03:57.419519 systemd-networkd[1392]: cilium_vxlan: Gained carrier Jan 30 13:03:57.692572 systemd-networkd[1392]: cilium_host: Gained IPv6LL Jan 30 13:03:57.802441 kernel: NET: Registered PF_ALG protocol family Jan 30 13:03:58.045623 systemd-networkd[1392]: cilium_net: Gained IPv6LL Jan 30 13:03:58.055652 kubelet[2614]: E0130 13:03:58.055469 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:58.533668 systemd-networkd[1392]: lxc_health: Link UP Jan 30 13:03:58.540602 systemd-networkd[1392]: lxc_health: Gained carrier Jan 30 13:03:58.726266 systemd-networkd[1392]: lxc8b8f7a9f1ecf: Link UP Jan 30 13:03:58.739395 kernel: eth0: renamed from tmp08cc3 Jan 30 13:03:58.757839 kernel: eth0: renamed from tmp3e6b0 Jan 30 13:03:58.757457 systemd-networkd[1392]: lxc8b8f7a9f1ecf: Gained carrier Jan 30 13:03:58.757789 systemd-networkd[1392]: lxc7f9a801c461d: Link UP Jan 30 13:03:58.768849 systemd-networkd[1392]: lxc7f9a801c461d: Gained carrier Jan 30 13:03:59.455538 systemd-networkd[1392]: cilium_vxlan: Gained IPv6LL Jan 30 13:03:59.580824 systemd-networkd[1392]: lxc_health: Gained IPv6LL Jan 30 13:04:00.312097 kubelet[2614]: E0130 13:04:00.312054 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:00.348568 systemd-networkd[1392]: lxc7f9a801c461d: Gained IPv6LL Jan 30 13:04:00.370310 systemd[1]: Started sshd@8-10.0.0.100:22-10.0.0.1:43946.service - OpenSSH per-connection server daemon (10.0.0.1:43946). Jan 30 13:04:00.432827 sshd[3853]: Accepted publickey for core from 10.0.0.1 port 43946 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:04:00.433353 sshd-session[3853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:04:00.438735 systemd-logind[1426]: New session 9 of user core. Jan 30 13:04:00.446548 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:04:00.587267 sshd[3855]: Connection closed by 10.0.0.1 port 43946 Jan 30 13:04:00.586323 sshd-session[3853]: pam_unix(sshd:session): session closed for user core Jan 30 13:04:00.589146 systemd[1]: sshd@8-10.0.0.100:22-10.0.0.1:43946.service: Deactivated successfully. Jan 30 13:04:00.593263 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:04:00.595013 systemd-logind[1426]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:04:00.596307 systemd-logind[1426]: Removed session 9. 
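The networkd events above are Cilium's datapath coming up: cilium_host and cilium_net are a veth pair for the host side of the datapath, cilium_vxlan carries the overlay, lxc_health backs the agent's health endpoint, and each pod endpoint gets an lxc* veth whose peer is renamed to eth0 inside the pod's netns (the kernel "eth0: renamed from tmp08cc3" / "tmp3e6b0" lines; note the temporary names match the leading characters of the two coredns sandbox ids created shortly after). A small diagnostic sketch that lists these links and their operational state using the vishvananda/netlink package (Linux-only, run on the node itself):

    package main

    import (
    	"fmt"
    	"log"
    	"strings"

    	"github.com/vishvananda/netlink"
    )

    func main() {
    	links, err := netlink.LinkList()
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, l := range links {
    		a := l.Attrs()
    		// Match the interfaces named in the log: cilium_host, cilium_net,
    		// cilium_vxlan, lxc_health and per-endpoint lxc* devices.
    		if strings.HasPrefix(a.Name, "cilium_") || strings.HasPrefix(a.Name, "lxc") {
    			fmt.Printf("%-16s %-10s mtu=%d\n", a.Name, a.OperState, a.MTU)
    		}
    	}
    }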
Jan 30 13:04:00.733581 systemd-networkd[1392]: lxc8b8f7a9f1ecf: Gained IPv6LL Jan 30 13:04:01.074120 kubelet[2614]: E0130 13:04:01.074076 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:02.075077 kubelet[2614]: E0130 13:04:02.075007 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:02.485330 containerd[1446]: time="2025-01-30T13:04:02.479847233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:04:02.485330 containerd[1446]: time="2025-01-30T13:04:02.479898793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:04:02.485330 containerd[1446]: time="2025-01-30T13:04:02.479909673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:04:02.485330 containerd[1446]: time="2025-01-30T13:04:02.479971233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:04:02.491119 containerd[1446]: time="2025-01-30T13:04:02.490197010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:04:02.491119 containerd[1446]: time="2025-01-30T13:04:02.490263890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:04:02.491119 containerd[1446]: time="2025-01-30T13:04:02.490279090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:04:02.491580 containerd[1446]: time="2025-01-30T13:04:02.490441930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:04:02.502592 systemd[1]: Started cri-containerd-08cc3f403e6684d5eb7e5e8dd736ff7456b66795b9091d0bef1feb7691872a3d.scope - libcontainer container 08cc3f403e6684d5eb7e5e8dd736ff7456b66795b9091d0bef1feb7691872a3d. Jan 30 13:04:02.509868 systemd[1]: Started cri-containerd-3e6b0c970d3beaa0f10ef7ed9836c7fe9b7b95fec2959041dbbca9ae41b015f7.scope - libcontainer container 3e6b0c970d3beaa0f10ef7ed9836c7fe9b7b95fec2959041dbbca9ae41b015f7. 
Jan 30 13:04:02.516740 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:04:02.522014 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:04:02.537020 containerd[1446]: time="2025-01-30T13:04:02.536974468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mlnzp,Uid:0552e283-6285-4240-8dc2-8749fb1ffe38,Namespace:kube-system,Attempt:0,} returns sandbox id \"08cc3f403e6684d5eb7e5e8dd736ff7456b66795b9091d0bef1feb7691872a3d\"" Jan 30 13:04:02.538812 kubelet[2614]: E0130 13:04:02.538378 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:02.542584 containerd[1446]: time="2025-01-30T13:04:02.542310896Z" level=info msg="CreateContainer within sandbox \"08cc3f403e6684d5eb7e5e8dd736ff7456b66795b9091d0bef1feb7691872a3d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:04:02.544428 containerd[1446]: time="2025-01-30T13:04:02.544394932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9sr89,Uid:c4ab6b27-5631-43cc-983b-d658210322e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e6b0c970d3beaa0f10ef7ed9836c7fe9b7b95fec2959041dbbca9ae41b015f7\"" Jan 30 13:04:02.545206 kubelet[2614]: E0130 13:04:02.545143 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:02.547214 containerd[1446]: time="2025-01-30T13:04:02.547114726Z" level=info msg="CreateContainer within sandbox \"3e6b0c970d3beaa0f10ef7ed9836c7fe9b7b95fec2959041dbbca9ae41b015f7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:04:02.642613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount461389548.mount: Deactivated successfully. Jan 30 13:04:02.645804 containerd[1446]: time="2025-01-30T13:04:02.645523230Z" level=info msg="CreateContainer within sandbox \"3e6b0c970d3beaa0f10ef7ed9836c7fe9b7b95fec2959041dbbca9ae41b015f7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3d13ed252183d293bc1fa5d8a83d561bf417b6335548521df06ba5f3084f6a40\"" Jan 30 13:04:02.647138 containerd[1446]: time="2025-01-30T13:04:02.646957947Z" level=info msg="StartContainer for \"3d13ed252183d293bc1fa5d8a83d561bf417b6335548521df06ba5f3084f6a40\"" Jan 30 13:04:02.658414 containerd[1446]: time="2025-01-30T13:04:02.657889203Z" level=info msg="CreateContainer within sandbox \"08cc3f403e6684d5eb7e5e8dd736ff7456b66795b9091d0bef1feb7691872a3d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0ee005fe541f626230fe305bec54dbfba07ea9790ad0d9b41221171c45806fff\"" Jan 30 13:04:02.658598 containerd[1446]: time="2025-01-30T13:04:02.658568042Z" level=info msg="StartContainer for \"0ee005fe541f626230fe305bec54dbfba07ea9790ad0d9b41221171c45806fff\"" Jan 30 13:04:02.674585 systemd[1]: Started cri-containerd-3d13ed252183d293bc1fa5d8a83d561bf417b6335548521df06ba5f3084f6a40.scope - libcontainer container 3d13ed252183d293bc1fa5d8a83d561bf417b6335548521df06ba5f3084f6a40. Jan 30 13:04:02.695705 systemd[1]: Started cri-containerd-0ee005fe541f626230fe305bec54dbfba07ea9790ad0d9b41221171c45806fff.scope - libcontainer container 0ee005fe541f626230fe305bec54dbfba07ea9790ad0d9b41221171c45806fff. 
Jan 30 13:04:02.719833 containerd[1446]: time="2025-01-30T13:04:02.719779628Z" level=info msg="StartContainer for \"3d13ed252183d293bc1fa5d8a83d561bf417b6335548521df06ba5f3084f6a40\" returns successfully" Jan 30 13:04:02.729875 containerd[1446]: time="2025-01-30T13:04:02.729832246Z" level=info msg="StartContainer for \"0ee005fe541f626230fe305bec54dbfba07ea9790ad0d9b41221171c45806fff\" returns successfully" Jan 30 13:04:03.077961 kubelet[2614]: E0130 13:04:03.077611 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:03.083492 kubelet[2614]: E0130 13:04:03.083440 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:03.097966 kubelet[2614]: I0130 13:04:03.097453 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9sr89" podStartSLOduration=20.097432374 podStartE2EDuration="20.097432374s" podCreationTimestamp="2025-01-30 13:03:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:04:03.09474398 +0000 UTC m=+36.276632541" watchObservedRunningTime="2025-01-30 13:04:03.097432374 +0000 UTC m=+36.279320895" Jan 30 13:04:04.086481 kubelet[2614]: E0130 13:04:04.086439 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:04.087556 kubelet[2614]: E0130 13:04:04.087521 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:05.088888 kubelet[2614]: E0130 13:04:05.088633 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:05.088888 kubelet[2614]: E0130 13:04:05.088740 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:05.603015 systemd[1]: Started sshd@9-10.0.0.100:22-10.0.0.1:55288.service - OpenSSH per-connection server daemon (10.0.0.1:55288). Jan 30 13:04:05.660124 sshd[4044]: Accepted publickey for core from 10.0.0.1 port 55288 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:04:05.661079 sshd-session[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:04:05.666271 systemd-logind[1426]: New session 10 of user core. Jan 30 13:04:05.676605 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:04:05.812871 sshd[4046]: Connection closed by 10.0.0.1 port 55288 Jan 30 13:04:05.813458 sshd-session[4044]: pam_unix(sshd:session): session closed for user core Jan 30 13:04:05.816780 systemd[1]: sshd@9-10.0.0.100:22-10.0.0.1:55288.service: Deactivated successfully. Jan 30 13:04:05.818926 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:04:05.819687 systemd-logind[1426]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:04:05.820886 systemd-logind[1426]: Removed session 10. 
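With both coredns-7db6d8ff4d replicas started above, cluster DNS is serviceable. A quick functional check from a node resolves a service name directly against the cluster DNS service address; the 10.96.0.10 address below is an assumption (a common default service IP), not something this log states, so substitute your cluster's value:

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"net"
    	"time"
    )

    func main() {
    	r := &net.Resolver{
    		PreferGo: true,
    		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
    			d := net.Dialer{Timeout: 2 * time.Second}
    			// Hypothetical cluster DNS service address; adjust to your cluster.
    			return d.DialContext(ctx, network, "10.96.0.10:53")
    		},
    	}
    	addrs, err := r.LookupHost(context.Background(),
    		"kubernetes.default.svc.cluster.local")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(addrs)
    }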
Jan 30 13:04:10.829324 systemd[1]: Started sshd@10-10.0.0.100:22-10.0.0.1:55316.service - OpenSSH per-connection server daemon (10.0.0.1:55316). Jan 30 13:04:10.891229 sshd[4061]: Accepted publickey for core from 10.0.0.1 port 55316 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:04:10.892657 sshd-session[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:04:10.896542 systemd-logind[1426]: New session 11 of user core. Jan 30 13:04:10.908774 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:04:11.039868 sshd[4063]: Connection closed by 10.0.0.1 port 55316 Jan 30 13:04:11.040407 sshd-session[4061]: pam_unix(sshd:session): session closed for user core Jan 30 13:04:11.054062 systemd[1]: sshd@10-10.0.0.100:22-10.0.0.1:55316.service: Deactivated successfully. Jan 30 13:04:11.057315 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:04:11.061956 systemd-logind[1426]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:04:11.077008 systemd[1]: Started sshd@11-10.0.0.100:22-10.0.0.1:55320.service - OpenSSH per-connection server daemon (10.0.0.1:55320). Jan 30 13:04:11.079017 systemd-logind[1426]: Removed session 11. Jan 30 13:04:11.113661 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 55320 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:04:11.115252 sshd-session[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:04:11.124584 systemd-logind[1426]: New session 12 of user core. Jan 30 13:04:11.135778 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:04:11.303513 sshd[4079]: Connection closed by 10.0.0.1 port 55320 Jan 30 13:04:11.304080 sshd-session[4076]: pam_unix(sshd:session): session closed for user core Jan 30 13:04:11.311035 systemd[1]: sshd@11-10.0.0.100:22-10.0.0.1:55320.service: Deactivated successfully. Jan 30 13:04:11.314752 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:04:11.318486 systemd-logind[1426]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:04:11.329330 systemd[1]: Started sshd@12-10.0.0.100:22-10.0.0.1:55332.service - OpenSSH per-connection server daemon (10.0.0.1:55332). Jan 30 13:04:11.331800 systemd-logind[1426]: Removed session 12. Jan 30 13:04:11.378339 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 55332 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:04:11.379714 sshd-session[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:04:11.384442 systemd-logind[1426]: New session 13 of user core. Jan 30 13:04:11.395565 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 13:04:11.513531 sshd[4092]: Connection closed by 10.0.0.1 port 55332 Jan 30 13:04:11.515670 sshd-session[4090]: pam_unix(sshd:session): session closed for user core Jan 30 13:04:11.522114 systemd[1]: sshd@12-10.0.0.100:22-10.0.0.1:55332.service: Deactivated successfully. Jan 30 13:04:11.525964 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:04:11.527425 systemd-logind[1426]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:04:11.528313 systemd-logind[1426]: Removed session 13. Jan 30 13:04:16.529858 systemd[1]: Started sshd@13-10.0.0.100:22-10.0.0.1:57930.service - OpenSSH per-connection server daemon (10.0.0.1:57930). 
Jan 30 13:04:16.582046 sshd[4107]: Accepted publickey for core from 10.0.0.1 port 57930 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:04:16.583467 sshd-session[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:04:16.588541 systemd-logind[1426]: New session 14 of user core. Jan 30 13:04:16.597594 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 13:04:16.726735 sshd[4109]: Connection closed by 10.0.0.1 port 57930 Jan 30 13:04:16.727114 sshd-session[4107]: pam_unix(sshd:session): session closed for user core Jan 30 13:04:16.731037 systemd[1]: sshd@13-10.0.0.100:22-10.0.0.1:57930.service: Deactivated successfully. Jan 30 13:04:16.733077 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:04:16.734972 systemd-logind[1426]: Session 14 logged out. Waiting for processes to exit. Jan 30 13:04:16.736291 systemd-logind[1426]: Removed session 14. Jan 30 13:04:21.740954 systemd[1]: Started sshd@14-10.0.0.100:22-10.0.0.1:57944.service - OpenSSH per-connection server daemon (10.0.0.1:57944). Jan 30 13:04:21.788044 sshd[4122]: Accepted publickey for core from 10.0.0.1 port 57944 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:04:21.789475 sshd-session[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:04:21.793786 systemd-logind[1426]: New session 15 of user core. Jan 30 13:04:21.803665 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:04:21.926809 sshd[4124]: Connection closed by 10.0.0.1 port 57944 Jan 30 13:04:21.927408 sshd-session[4122]: pam_unix(sshd:session): session closed for user core Jan 30 13:04:21.941082 systemd[1]: sshd@14-10.0.0.100:22-10.0.0.1:57944.service: Deactivated successfully. Jan 30 13:04:21.944810 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:04:21.946201 systemd-logind[1426]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:04:21.947629 systemd[1]: Started sshd@15-10.0.0.100:22-10.0.0.1:57950.service - OpenSSH per-connection server daemon (10.0.0.1:57950). Jan 30 13:04:21.949198 systemd-logind[1426]: Removed session 15. Jan 30 13:04:21.990431 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 57950 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:04:21.990967 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:04:21.995275 systemd-logind[1426]: New session 16 of user core. Jan 30 13:04:22.002617 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 13:04:22.262501 sshd[4139]: Connection closed by 10.0.0.1 port 57950 Jan 30 13:04:22.264002 sshd-session[4137]: pam_unix(sshd:session): session closed for user core Jan 30 13:04:22.272127 systemd[1]: sshd@15-10.0.0.100:22-10.0.0.1:57950.service: Deactivated successfully. Jan 30 13:04:22.275014 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 13:04:22.276486 systemd-logind[1426]: Session 16 logged out. Waiting for processes to exit. Jan 30 13:04:22.277672 systemd[1]: Started sshd@16-10.0.0.100:22-10.0.0.1:57976.service - OpenSSH per-connection server daemon (10.0.0.1:57976). Jan 30 13:04:22.279543 systemd-logind[1426]: Removed session 16. 
Jan 30 13:04:22.340740 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 57976 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:04:22.342139 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:04:22.346122 systemd-logind[1426]: New session 17 of user core. Jan 30 13:04:22.350598 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 13:04:23.678041 sshd[4153]: Connection closed by 10.0.0.1 port 57976 Jan 30 13:04:23.679101 sshd-session[4151]: pam_unix(sshd:session): session closed for user core Jan 30 13:04:23.691518 systemd[1]: sshd@16-10.0.0.100:22-10.0.0.1:57976.service: Deactivated successfully. Jan 30 13:04:23.693190 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 13:04:23.696658 systemd-logind[1426]: Session 17 logged out. Waiting for processes to exit. Jan 30 13:04:23.707721 systemd[1]: Started sshd@17-10.0.0.100:22-10.0.0.1:44680.service - OpenSSH per-connection server daemon (10.0.0.1:44680). Jan 30 13:04:23.711852 systemd-logind[1426]: Removed session 17. Jan 30 13:04:23.744694 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 44680 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:04:23.746684 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:04:23.750270 systemd-logind[1426]: New session 18 of user core. Jan 30 13:04:23.767661 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 13:04:23.991434 sshd[4173]: Connection closed by 10.0.0.1 port 44680 Jan 30 13:04:23.992869 sshd-session[4171]: pam_unix(sshd:session): session closed for user core Jan 30 13:04:24.002258 systemd[1]: sshd@17-10.0.0.100:22-10.0.0.1:44680.service: Deactivated successfully. Jan 30 13:04:24.004893 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 13:04:24.008048 systemd-logind[1426]: Session 18 logged out. Waiting for processes to exit. Jan 30 13:04:24.017742 systemd[1]: Started sshd@18-10.0.0.100:22-10.0.0.1:44684.service - OpenSSH per-connection server daemon (10.0.0.1:44684). Jan 30 13:04:24.019222 systemd-logind[1426]: Removed session 18. Jan 30 13:04:24.054326 sshd[4183]: Accepted publickey for core from 10.0.0.1 port 44684 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:04:24.055752 sshd-session[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:04:24.060355 systemd-logind[1426]: New session 19 of user core. Jan 30 13:04:24.067594 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 13:04:24.180281 sshd[4185]: Connection closed by 10.0.0.1 port 44684 Jan 30 13:04:24.180741 sshd-session[4183]: pam_unix(sshd:session): session closed for user core Jan 30 13:04:24.184813 systemd[1]: sshd@18-10.0.0.100:22-10.0.0.1:44684.service: Deactivated successfully. Jan 30 13:04:24.186991 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 13:04:24.187811 systemd-logind[1426]: Session 19 logged out. Waiting for processes to exit. Jan 30 13:04:24.188847 systemd-logind[1426]: Removed session 19. Jan 30 13:04:29.191636 systemd[1]: Started sshd@19-10.0.0.100:22-10.0.0.1:44770.service - OpenSSH per-connection server daemon (10.0.0.1:44770). 
Jan 30 13:04:29.241674 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 44770 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:04:29.243553 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:04:29.248791 systemd-logind[1426]: New session 20 of user core. Jan 30 13:04:29.261588 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 13:04:29.384407 sshd[4205]: Connection closed by 10.0.0.1 port 44770 Jan 30 13:04:29.384546 sshd-session[4203]: pam_unix(sshd:session): session closed for user core Jan 30 13:04:29.388548 systemd[1]: sshd@19-10.0.0.100:22-10.0.0.1:44770.service: Deactivated successfully. Jan 30 13:04:29.390240 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 13:04:29.390832 systemd-logind[1426]: Session 20 logged out. Waiting for processes to exit. Jan 30 13:04:29.391686 systemd-logind[1426]: Removed session 20. Jan 30 13:04:34.399414 systemd[1]: Started sshd@20-10.0.0.100:22-10.0.0.1:40186.service - OpenSSH per-connection server daemon (10.0.0.1:40186). Jan 30 13:04:34.443447 sshd[4218]: Accepted publickey for core from 10.0.0.1 port 40186 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:04:34.444908 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:04:34.448831 systemd-logind[1426]: New session 21 of user core. Jan 30 13:04:34.463485 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 13:04:34.601974 sshd[4220]: Connection closed by 10.0.0.1 port 40186 Jan 30 13:04:34.602394 sshd-session[4218]: pam_unix(sshd:session): session closed for user core Jan 30 13:04:34.605246 systemd[1]: sshd@20-10.0.0.100:22-10.0.0.1:40186.service: Deactivated successfully. Jan 30 13:04:34.607104 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 13:04:34.611491 systemd-logind[1426]: Session 21 logged out. Waiting for processes to exit. Jan 30 13:04:34.612637 systemd-logind[1426]: Removed session 21. Jan 30 13:04:39.614002 systemd[1]: Started sshd@21-10.0.0.100:22-10.0.0.1:40190.service - OpenSSH per-connection server daemon (10.0.0.1:40190). Jan 30 13:04:39.653221 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 40190 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:04:39.654659 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:04:39.659060 systemd-logind[1426]: New session 22 of user core. Jan 30 13:04:39.667585 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 13:04:39.797316 sshd[4234]: Connection closed by 10.0.0.1 port 40190 Jan 30 13:04:39.796409 sshd-session[4232]: pam_unix(sshd:session): session closed for user core Jan 30 13:04:39.809976 systemd[1]: sshd@21-10.0.0.100:22-10.0.0.1:40190.service: Deactivated successfully. Jan 30 13:04:39.812463 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 13:04:39.813777 systemd-logind[1426]: Session 22 logged out. Waiting for processes to exit. Jan 30 13:04:39.815626 systemd[1]: Started sshd@22-10.0.0.100:22-10.0.0.1:40194.service - OpenSSH per-connection server daemon (10.0.0.1:40194). Jan 30 13:04:39.819299 systemd-logind[1426]: Removed session 22. 
Jan 30 13:04:39.860783 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 40194 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:04:39.862174 sshd-session[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:04:39.865869 systemd-logind[1426]: New session 23 of user core. Jan 30 13:04:39.879589 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 13:04:41.219225 kubelet[2614]: I0130 13:04:41.218782 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-mlnzp" podStartSLOduration=58.218766254 podStartE2EDuration="58.218766254s" podCreationTimestamp="2025-01-30 13:03:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:04:03.125980956 +0000 UTC m=+36.307869477" watchObservedRunningTime="2025-01-30 13:04:41.218766254 +0000 UTC m=+74.400654775" Jan 30 13:04:41.234008 containerd[1446]: time="2025-01-30T13:04:41.233963077Z" level=info msg="StopContainer for \"ab8c304b9cf848539498284e4ad782196a11653606f3a50279fb2fbe918908c4\" with timeout 30 (s)" Jan 30 13:04:41.234704 containerd[1446]: time="2025-01-30T13:04:41.234585437Z" level=info msg="Stop container \"ab8c304b9cf848539498284e4ad782196a11653606f3a50279fb2fbe918908c4\" with signal terminated" Jan 30 13:04:41.247097 systemd[1]: cri-containerd-ab8c304b9cf848539498284e4ad782196a11653606f3a50279fb2fbe918908c4.scope: Deactivated successfully. Jan 30 13:04:41.265806 containerd[1446]: time="2025-01-30T13:04:41.265770283Z" level=info msg="StopContainer for \"adf4f8fdf38801edc21194b46a8a2d51cdadf7d01761cdcac905d936cd69b39d\" with timeout 2 (s)" Jan 30 13:04:41.266969 containerd[1446]: time="2025-01-30T13:04:41.266939485Z" level=info msg="Stop container \"adf4f8fdf38801edc21194b46a8a2d51cdadf7d01761cdcac905d936cd69b39d\" with signal terminated" Jan 30 13:04:41.270869 containerd[1446]: time="2025-01-30T13:04:41.270813731Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:04:41.274721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab8c304b9cf848539498284e4ad782196a11653606f3a50279fb2fbe918908c4-rootfs.mount: Deactivated successfully. Jan 30 13:04:41.275004 systemd-networkd[1392]: lxc_health: Link DOWN Jan 30 13:04:41.275007 systemd-networkd[1392]: lxc_health: Lost carrier Jan 30 13:04:41.285288 containerd[1446]: time="2025-01-30T13:04:41.285223832Z" level=info msg="shim disconnected" id=ab8c304b9cf848539498284e4ad782196a11653606f3a50279fb2fbe918908c4 namespace=k8s.io Jan 30 13:04:41.285288 containerd[1446]: time="2025-01-30T13:04:41.285279912Z" level=warning msg="cleaning up after shim disconnected" id=ab8c304b9cf848539498284e4ad782196a11653606f3a50279fb2fbe918908c4 namespace=k8s.io Jan 30 13:04:41.285288 containerd[1446]: time="2025-01-30T13:04:41.285289072Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:04:41.301273 systemd[1]: cri-containerd-adf4f8fdf38801edc21194b46a8a2d51cdadf7d01761cdcac905d936cd69b39d.scope: Deactivated successfully. Jan 30 13:04:41.301647 systemd[1]: cri-containerd-adf4f8fdf38801edc21194b46a8a2d51cdadf7d01761cdcac905d936cd69b39d.scope: Consumed 7.000s CPU time. 
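The teardown beginning above is the CRI stop path: "StopContainer ... with timeout 30 (s)" followed by "Stop container ... with signal terminated" means the runtime delivers SIGTERM and escalates to SIGKILL only if the container is still alive when the grace period expires (the agent gets timeout 2). The cni reload error is expected here, since stopping the agent removes /etc/cni/net.d/05-cilium.conf. The same stop request via the CRI client, with the container id and timeout taken from the log and connection setup as in the earlier sketch:

    package main

    import (
    	"context"
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()
    	rt := runtimeapi.NewRuntimeServiceClient(conn)

    	// SIGTERM is sent immediately; SIGKILL follows only after Timeout
    	// seconds if the container has not exited.
    	_, err = rt.StopContainer(context.Background(), &runtimeapi.StopContainerRequest{
    		ContainerId: "ab8c304b9cf848539498284e4ad782196a11653606f3a50279fb2fbe918908c4",
    		Timeout:     30,
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    }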
Jan 30 13:04:41.318759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adf4f8fdf38801edc21194b46a8a2d51cdadf7d01761cdcac905d936cd69b39d-rootfs.mount: Deactivated successfully. Jan 30 13:04:41.325292 containerd[1446]: time="2025-01-30T13:04:41.325229851Z" level=info msg="shim disconnected" id=adf4f8fdf38801edc21194b46a8a2d51cdadf7d01761cdcac905d936cd69b39d namespace=k8s.io Jan 30 13:04:41.325292 containerd[1446]: time="2025-01-30T13:04:41.325285771Z" level=warning msg="cleaning up after shim disconnected" id=adf4f8fdf38801edc21194b46a8a2d51cdadf7d01761cdcac905d936cd69b39d namespace=k8s.io Jan 30 13:04:41.325292 containerd[1446]: time="2025-01-30T13:04:41.325293971Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:04:41.333331 containerd[1446]: time="2025-01-30T13:04:41.333287183Z" level=info msg="StopContainer for \"ab8c304b9cf848539498284e4ad782196a11653606f3a50279fb2fbe918908c4\" returns successfully" Jan 30 13:04:41.336763 containerd[1446]: time="2025-01-30T13:04:41.336723788Z" level=info msg="StopPodSandbox for \"4b07723a00c1454ed3aa5cbf1dca2b5aa3827a151ec882880c0e28190342a0ce\"" Jan 30 13:04:41.336855 containerd[1446]: time="2025-01-30T13:04:41.336780428Z" level=info msg="Container to stop \"ab8c304b9cf848539498284e4ad782196a11653606f3a50279fb2fbe918908c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:04:41.338567 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4b07723a00c1454ed3aa5cbf1dca2b5aa3827a151ec882880c0e28190342a0ce-shm.mount: Deactivated successfully. Jan 30 13:04:41.340982 containerd[1446]: time="2025-01-30T13:04:41.340937234Z" level=info msg="StopContainer for \"adf4f8fdf38801edc21194b46a8a2d51cdadf7d01761cdcac905d936cd69b39d\" returns successfully" Jan 30 13:04:41.346405 containerd[1446]: time="2025-01-30T13:04:41.345322601Z" level=info msg="StopPodSandbox for \"a1e7a0a4f407cbf21d2565a6e9e750262b0d107a61e4ee0d8357dd5198048c7b\"" Jan 30 13:04:41.346405 containerd[1446]: time="2025-01-30T13:04:41.345405841Z" level=info msg="Container to stop \"771810d437116d8ff467b425c1aca007a12449d9af4c865a2999a091a1dad1f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:04:41.346405 containerd[1446]: time="2025-01-30T13:04:41.345416881Z" level=info msg="Container to stop \"d9f0ffbd29a6e6bf0527be34298ed756d4a0dd70c117687387e4a38721abff22\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:04:41.346405 containerd[1446]: time="2025-01-30T13:04:41.345426161Z" level=info msg="Container to stop \"a25b925dc71fd17157fade0c99efa7e4b07dea23c7da51e23ec5a9f3cce1cbc3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:04:41.346405 containerd[1446]: time="2025-01-30T13:04:41.345434441Z" level=info msg="Container to stop \"adf4f8fdf38801edc21194b46a8a2d51cdadf7d01761cdcac905d936cd69b39d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:04:41.346405 containerd[1446]: time="2025-01-30T13:04:41.345453241Z" level=info msg="Container to stop \"46146e487f7ccca4aee3467d7604b826b147dc90a1ae238103adbce2448214ca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:04:41.347250 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a1e7a0a4f407cbf21d2565a6e9e750262b0d107a61e4ee0d8357dd5198048c7b-shm.mount: Deactivated successfully. 
Jan 30 13:04:41.348129 systemd[1]: cri-containerd-4b07723a00c1454ed3aa5cbf1dca2b5aa3827a151ec882880c0e28190342a0ce.scope: Deactivated successfully. Jan 30 13:04:41.351668 systemd[1]: cri-containerd-a1e7a0a4f407cbf21d2565a6e9e750262b0d107a61e4ee0d8357dd5198048c7b.scope: Deactivated successfully. Jan 30 13:04:41.377830 containerd[1446]: time="2025-01-30T13:04:41.377748569Z" level=info msg="shim disconnected" id=4b07723a00c1454ed3aa5cbf1dca2b5aa3827a151ec882880c0e28190342a0ce namespace=k8s.io Jan 30 13:04:41.377830 containerd[1446]: time="2025-01-30T13:04:41.377824449Z" level=warning msg="cleaning up after shim disconnected" id=4b07723a00c1454ed3aa5cbf1dca2b5aa3827a151ec882880c0e28190342a0ce namespace=k8s.io Jan 30 13:04:41.377830 containerd[1446]: time="2025-01-30T13:04:41.377833249Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:04:41.387809 containerd[1446]: time="2025-01-30T13:04:41.387749663Z" level=info msg="shim disconnected" id=a1e7a0a4f407cbf21d2565a6e9e750262b0d107a61e4ee0d8357dd5198048c7b namespace=k8s.io Jan 30 13:04:41.387809 containerd[1446]: time="2025-01-30T13:04:41.387806903Z" level=warning msg="cleaning up after shim disconnected" id=a1e7a0a4f407cbf21d2565a6e9e750262b0d107a61e4ee0d8357dd5198048c7b namespace=k8s.io Jan 30 13:04:41.387809 containerd[1446]: time="2025-01-30T13:04:41.387815663Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:04:41.394233 containerd[1446]: time="2025-01-30T13:04:41.394195873Z" level=info msg="TearDown network for sandbox \"4b07723a00c1454ed3aa5cbf1dca2b5aa3827a151ec882880c0e28190342a0ce\" successfully" Jan 30 13:04:41.394428 containerd[1446]: time="2025-01-30T13:04:41.394384153Z" level=info msg="StopPodSandbox for \"4b07723a00c1454ed3aa5cbf1dca2b5aa3827a151ec882880c0e28190342a0ce\" returns successfully" Jan 30 13:04:41.402932 containerd[1446]: time="2025-01-30T13:04:41.402834846Z" level=info msg="TearDown network for sandbox \"a1e7a0a4f407cbf21d2565a6e9e750262b0d107a61e4ee0d8357dd5198048c7b\" successfully" Jan 30 13:04:41.402932 containerd[1446]: time="2025-01-30T13:04:41.402866566Z" level=info msg="StopPodSandbox for \"a1e7a0a4f407cbf21d2565a6e9e750262b0d107a61e4ee0d8357dd5198048c7b\" returns successfully" Jan 30 13:04:41.503619 kubelet[2614]: I0130 13:04:41.503493 2614 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-etc-cni-netd\") pod \"42faffb4-debe-4df5-9510-5009476b4235\" (UID: \"42faffb4-debe-4df5-9510-5009476b4235\") " Jan 30 13:04:41.503619 kubelet[2614]: I0130 13:04:41.503552 2614 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-lib-modules\") pod \"42faffb4-debe-4df5-9510-5009476b4235\" (UID: \"42faffb4-debe-4df5-9510-5009476b4235\") " Jan 30 13:04:41.503619 kubelet[2614]: I0130 13:04:41.503582 2614 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-host-proc-sys-net\") pod \"42faffb4-debe-4df5-9510-5009476b4235\" (UID: \"42faffb4-debe-4df5-9510-5009476b4235\") " Jan 30 13:04:41.503619 kubelet[2614]: I0130 13:04:41.503610 2614 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-cilium-cgroup\") pod 
\"42faffb4-debe-4df5-9510-5009476b4235\" (UID: \"42faffb4-debe-4df5-9510-5009476b4235\") " Jan 30 13:04:41.503818 kubelet[2614]: I0130 13:04:41.503626 2614 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-cilium-run\") pod \"42faffb4-debe-4df5-9510-5009476b4235\" (UID: \"42faffb4-debe-4df5-9510-5009476b4235\") " Jan 30 13:04:41.503818 kubelet[2614]: I0130 13:04:41.503641 2614 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-bpf-maps\") pod \"42faffb4-debe-4df5-9510-5009476b4235\" (UID: \"42faffb4-debe-4df5-9510-5009476b4235\") " Jan 30 13:04:41.503818 kubelet[2614]: I0130 13:04:41.503655 2614 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-host-proc-sys-kernel\") pod \"42faffb4-debe-4df5-9510-5009476b4235\" (UID: \"42faffb4-debe-4df5-9510-5009476b4235\") " Jan 30 13:04:41.503818 kubelet[2614]: I0130 13:04:41.503671 2614 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-cni-path\") pod \"42faffb4-debe-4df5-9510-5009476b4235\" (UID: \"42faffb4-debe-4df5-9510-5009476b4235\") " Jan 30 13:04:41.503818 kubelet[2614]: I0130 13:04:41.503692 2614 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e222a86-ac88-4096-ac1e-0eed30cf8a29-cilium-config-path\") pod \"3e222a86-ac88-4096-ac1e-0eed30cf8a29\" (UID: \"3e222a86-ac88-4096-ac1e-0eed30cf8a29\") " Jan 30 13:04:41.503818 kubelet[2614]: I0130 13:04:41.503705 2614 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-hostproc\") pod \"42faffb4-debe-4df5-9510-5009476b4235\" (UID: \"42faffb4-debe-4df5-9510-5009476b4235\") " Jan 30 13:04:41.503941 kubelet[2614]: I0130 13:04:41.503721 2614 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/42faffb4-debe-4df5-9510-5009476b4235-hubble-tls\") pod \"42faffb4-debe-4df5-9510-5009476b4235\" (UID: \"42faffb4-debe-4df5-9510-5009476b4235\") " Jan 30 13:04:41.503941 kubelet[2614]: I0130 13:04:41.503737 2614 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42faffb4-debe-4df5-9510-5009476b4235-cilium-config-path\") pod \"42faffb4-debe-4df5-9510-5009476b4235\" (UID: \"42faffb4-debe-4df5-9510-5009476b4235\") " Jan 30 13:04:41.503941 kubelet[2614]: I0130 13:04:41.503752 2614 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-xtables-lock\") pod \"42faffb4-debe-4df5-9510-5009476b4235\" (UID: \"42faffb4-debe-4df5-9510-5009476b4235\") " Jan 30 13:04:41.503941 kubelet[2614]: I0130 13:04:41.503770 2614 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmpf5\" (UniqueName: \"kubernetes.io/projected/42faffb4-debe-4df5-9510-5009476b4235-kube-api-access-jmpf5\") pod \"42faffb4-debe-4df5-9510-5009476b4235\" (UID: 
\"42faffb4-debe-4df5-9510-5009476b4235\") " Jan 30 13:04:41.503941 kubelet[2614]: I0130 13:04:41.503788 2614 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5b7m\" (UniqueName: \"kubernetes.io/projected/3e222a86-ac88-4096-ac1e-0eed30cf8a29-kube-api-access-l5b7m\") pod \"3e222a86-ac88-4096-ac1e-0eed30cf8a29\" (UID: \"3e222a86-ac88-4096-ac1e-0eed30cf8a29\") " Jan 30 13:04:41.503941 kubelet[2614]: I0130 13:04:41.503807 2614 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/42faffb4-debe-4df5-9510-5009476b4235-clustermesh-secrets\") pod \"42faffb4-debe-4df5-9510-5009476b4235\" (UID: \"42faffb4-debe-4df5-9510-5009476b4235\") " Jan 30 13:04:41.510417 kubelet[2614]: I0130 13:04:41.510242 2614 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-cni-path" (OuterVolumeSpecName: "cni-path") pod "42faffb4-debe-4df5-9510-5009476b4235" (UID: "42faffb4-debe-4df5-9510-5009476b4235"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:04:41.510417 kubelet[2614]: I0130 13:04:41.510324 2614 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "42faffb4-debe-4df5-9510-5009476b4235" (UID: "42faffb4-debe-4df5-9510-5009476b4235"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:04:41.510417 kubelet[2614]: I0130 13:04:41.510342 2614 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "42faffb4-debe-4df5-9510-5009476b4235" (UID: "42faffb4-debe-4df5-9510-5009476b4235"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:04:41.510417 kubelet[2614]: I0130 13:04:41.510356 2614 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "42faffb4-debe-4df5-9510-5009476b4235" (UID: "42faffb4-debe-4df5-9510-5009476b4235"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:04:41.510417 kubelet[2614]: I0130 13:04:41.510390 2614 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "42faffb4-debe-4df5-9510-5009476b4235" (UID: "42faffb4-debe-4df5-9510-5009476b4235"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:04:41.510627 kubelet[2614]: I0130 13:04:41.510405 2614 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "42faffb4-debe-4df5-9510-5009476b4235" (UID: "42faffb4-debe-4df5-9510-5009476b4235"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:04:41.510627 kubelet[2614]: I0130 13:04:41.510421 2614 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "42faffb4-debe-4df5-9510-5009476b4235" (UID: "42faffb4-debe-4df5-9510-5009476b4235"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:04:41.510627 kubelet[2614]: I0130 13:04:41.510434 2614 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "42faffb4-debe-4df5-9510-5009476b4235" (UID: "42faffb4-debe-4df5-9510-5009476b4235"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:04:41.510627 kubelet[2614]: I0130 13:04:41.510460 2614 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-hostproc" (OuterVolumeSpecName: "hostproc") pod "42faffb4-debe-4df5-9510-5009476b4235" (UID: "42faffb4-debe-4df5-9510-5009476b4235"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:04:41.516145 kubelet[2614]: I0130 13:04:41.515853 2614 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42faffb4-debe-4df5-9510-5009476b4235-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "42faffb4-debe-4df5-9510-5009476b4235" (UID: "42faffb4-debe-4df5-9510-5009476b4235"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:41.517916 kubelet[2614]: I0130 13:04:41.517878 2614 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e222a86-ac88-4096-ac1e-0eed30cf8a29-kube-api-access-l5b7m" (OuterVolumeSpecName: "kube-api-access-l5b7m") pod "3e222a86-ac88-4096-ac1e-0eed30cf8a29" (UID: "3e222a86-ac88-4096-ac1e-0eed30cf8a29"). InnerVolumeSpecName "kube-api-access-l5b7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:41.517916 kubelet[2614]: I0130 13:04:41.517880 2614 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42faffb4-debe-4df5-9510-5009476b4235-kube-api-access-jmpf5" (OuterVolumeSpecName: "kube-api-access-jmpf5") pod "42faffb4-debe-4df5-9510-5009476b4235" (UID: "42faffb4-debe-4df5-9510-5009476b4235"). InnerVolumeSpecName "kube-api-access-jmpf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:41.518101 kubelet[2614]: I0130 13:04:41.518081 2614 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42faffb4-debe-4df5-9510-5009476b4235-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "42faffb4-debe-4df5-9510-5009476b4235" (UID: "42faffb4-debe-4df5-9510-5009476b4235"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:41.518212 kubelet[2614]: I0130 13:04:41.518172 2614 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "42faffb4-debe-4df5-9510-5009476b4235" (UID: "42faffb4-debe-4df5-9510-5009476b4235"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:04:41.518801 kubelet[2614]: I0130 13:04:41.518763 2614 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e222a86-ac88-4096-ac1e-0eed30cf8a29-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3e222a86-ac88-4096-ac1e-0eed30cf8a29" (UID: "3e222a86-ac88-4096-ac1e-0eed30cf8a29"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:41.519589 kubelet[2614]: I0130 13:04:41.519546 2614 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42faffb4-debe-4df5-9510-5009476b4235-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "42faffb4-debe-4df5-9510-5009476b4235" (UID: "42faffb4-debe-4df5-9510-5009476b4235"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:41.605115 kubelet[2614]: I0130 13:04:41.604924 2614 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 30 13:04:41.605115 kubelet[2614]: I0130 13:04:41.604960 2614 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 30 13:04:41.605115 kubelet[2614]: I0130 13:04:41.604971 2614 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 30 13:04:41.605115 kubelet[2614]: I0130 13:04:41.604979 2614 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 30 13:04:41.605115 kubelet[2614]: I0130 13:04:41.604990 2614 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 30 13:04:41.605115 kubelet[2614]: I0130 13:04:41.604999 2614 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e222a86-ac88-4096-ac1e-0eed30cf8a29-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 30 13:04:41.605115 kubelet[2614]: I0130 13:04:41.605006 2614 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 30 13:04:41.605115 kubelet[2614]: I0130 13:04:41.605013 2614 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/42faffb4-debe-4df5-9510-5009476b4235-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 30 13:04:41.605413 kubelet[2614]: I0130 13:04:41.605020 2614 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42faffb4-debe-4df5-9510-5009476b4235-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 30 13:04:41.605413 kubelet[2614]: I0130 13:04:41.605027 2614 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 30 13:04:41.605413 kubelet[2614]: I0130 13:04:41.605035 2614 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-jmpf5\" (UniqueName: \"kubernetes.io/projected/42faffb4-debe-4df5-9510-5009476b4235-kube-api-access-jmpf5\") on node \"localhost\" DevicePath \"\"" Jan 30 13:04:41.605413 kubelet[2614]: I0130 13:04:41.605053 2614 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-l5b7m\" (UniqueName: \"kubernetes.io/projected/3e222a86-ac88-4096-ac1e-0eed30cf8a29-kube-api-access-l5b7m\") on node \"localhost\" DevicePath \"\"" Jan 30 13:04:41.605413 kubelet[2614]: I0130 13:04:41.605060 2614 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/42faffb4-debe-4df5-9510-5009476b4235-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 30 13:04:41.605413 kubelet[2614]: I0130 13:04:41.605067 2614 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 30 13:04:41.605413 kubelet[2614]: I0130 13:04:41.605074 2614 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 30 13:04:41.605413 kubelet[2614]: I0130 13:04:41.605081 2614 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/42faffb4-debe-4df5-9510-5009476b4235-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 30 13:04:41.990005 kubelet[2614]: E0130 13:04:41.989961 2614 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 13:04:42.174841 kubelet[2614]: I0130 13:04:42.174500 2614 scope.go:117] "RemoveContainer" containerID="ab8c304b9cf848539498284e4ad782196a11653606f3a50279fb2fbe918908c4" Jan 30 13:04:42.176895 containerd[1446]: time="2025-01-30T13:04:42.176739140Z" level=info msg="RemoveContainer for \"ab8c304b9cf848539498284e4ad782196a11653606f3a50279fb2fbe918908c4\"" Jan 30 13:04:42.182687 systemd[1]: Removed slice kubepods-besteffort-pod3e222a86_ac88_4096_ac1e_0eed30cf8a29.slice - libcontainer container kubepods-besteffort-pod3e222a86_ac88_4096_ac1e_0eed30cf8a29.slice. Jan 30 13:04:42.183565 containerd[1446]: time="2025-01-30T13:04:42.183522069Z" level=info msg="RemoveContainer for \"ab8c304b9cf848539498284e4ad782196a11653606f3a50279fb2fbe918908c4\" returns successfully" Jan 30 13:04:42.184475 kubelet[2614]: I0130 13:04:42.184439 2614 scope.go:117] "RemoveContainer" containerID="ab8c304b9cf848539498284e4ad782196a11653606f3a50279fb2fbe918908c4" Jan 30 13:04:42.184483 systemd[1]: Removed slice kubepods-burstable-pod42faffb4_debe_4df5_9510_5009476b4235.slice - libcontainer container kubepods-burstable-pod42faffb4_debe_4df5_9510_5009476b4235.slice. 
Jan 30 13:04:42.184888 containerd[1446]: time="2025-01-30T13:04:42.184726351Z" level=error msg="ContainerStatus for \"ab8c304b9cf848539498284e4ad782196a11653606f3a50279fb2fbe918908c4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ab8c304b9cf848539498284e4ad782196a11653606f3a50279fb2fbe918908c4\": not found" Jan 30 13:04:42.184584 systemd[1]: kubepods-burstable-pod42faffb4_debe_4df5_9510_5009476b4235.slice: Consumed 7.140s CPU time. Jan 30 13:04:42.185399 kubelet[2614]: E0130 13:04:42.185087 2614 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ab8c304b9cf848539498284e4ad782196a11653606f3a50279fb2fbe918908c4\": not found" containerID="ab8c304b9cf848539498284e4ad782196a11653606f3a50279fb2fbe918908c4" Jan 30 13:04:42.185399 kubelet[2614]: I0130 13:04:42.185117 2614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ab8c304b9cf848539498284e4ad782196a11653606f3a50279fb2fbe918908c4"} err="failed to get container status \"ab8c304b9cf848539498284e4ad782196a11653606f3a50279fb2fbe918908c4\": rpc error: code = NotFound desc = an error occurred when try to find container \"ab8c304b9cf848539498284e4ad782196a11653606f3a50279fb2fbe918908c4\": not found" Jan 30 13:04:42.185399 kubelet[2614]: I0130 13:04:42.185205 2614 scope.go:117] "RemoveContainer" containerID="adf4f8fdf38801edc21194b46a8a2d51cdadf7d01761cdcac905d936cd69b39d" Jan 30 13:04:42.186733 containerd[1446]: time="2025-01-30T13:04:42.186467874Z" level=info msg="RemoveContainer for \"adf4f8fdf38801edc21194b46a8a2d51cdadf7d01761cdcac905d936cd69b39d\"" Jan 30 13:04:42.189759 containerd[1446]: time="2025-01-30T13:04:42.189655558Z" level=info msg="RemoveContainer for \"adf4f8fdf38801edc21194b46a8a2d51cdadf7d01761cdcac905d936cd69b39d\" returns successfully" Jan 30 13:04:42.190667 kubelet[2614]: I0130 13:04:42.190100 2614 scope.go:117] "RemoveContainer" containerID="46146e487f7ccca4aee3467d7604b826b147dc90a1ae238103adbce2448214ca" Jan 30 13:04:42.191346 containerd[1446]: time="2025-01-30T13:04:42.191314401Z" level=info msg="RemoveContainer for \"46146e487f7ccca4aee3467d7604b826b147dc90a1ae238103adbce2448214ca\"" Jan 30 13:04:42.196419 containerd[1446]: time="2025-01-30T13:04:42.196355848Z" level=info msg="RemoveContainer for \"46146e487f7ccca4aee3467d7604b826b147dc90a1ae238103adbce2448214ca\" returns successfully" Jan 30 13:04:42.196718 kubelet[2614]: I0130 13:04:42.196629 2614 scope.go:117] "RemoveContainer" containerID="a25b925dc71fd17157fade0c99efa7e4b07dea23c7da51e23ec5a9f3cce1cbc3" Jan 30 13:04:42.198531 containerd[1446]: time="2025-01-30T13:04:42.198300091Z" level=info msg="RemoveContainer for \"a25b925dc71fd17157fade0c99efa7e4b07dea23c7da51e23ec5a9f3cce1cbc3\"" Jan 30 13:04:42.212979 containerd[1446]: time="2025-01-30T13:04:42.212845751Z" level=info msg="RemoveContainer for \"a25b925dc71fd17157fade0c99efa7e4b07dea23c7da51e23ec5a9f3cce1cbc3\" returns successfully" Jan 30 13:04:42.213140 kubelet[2614]: I0130 13:04:42.213083 2614 scope.go:117] "RemoveContainer" containerID="d9f0ffbd29a6e6bf0527be34298ed756d4a0dd70c117687387e4a38721abff22" Jan 30 13:04:42.214061 containerd[1446]: time="2025-01-30T13:04:42.214037553Z" level=info msg="RemoveContainer for \"d9f0ffbd29a6e6bf0527be34298ed756d4a0dd70c117687387e4a38721abff22\"" Jan 30 13:04:42.221070 containerd[1446]: time="2025-01-30T13:04:42.220965723Z" level=info msg="RemoveContainer for 
\"d9f0ffbd29a6e6bf0527be34298ed756d4a0dd70c117687387e4a38721abff22\" returns successfully" Jan 30 13:04:42.221749 kubelet[2614]: I0130 13:04:42.221727 2614 scope.go:117] "RemoveContainer" containerID="771810d437116d8ff467b425c1aca007a12449d9af4c865a2999a091a1dad1f7" Jan 30 13:04:42.222945 containerd[1446]: time="2025-01-30T13:04:42.222671886Z" level=info msg="RemoveContainer for \"771810d437116d8ff467b425c1aca007a12449d9af4c865a2999a091a1dad1f7\"" Jan 30 13:04:42.225111 containerd[1446]: time="2025-01-30T13:04:42.225019729Z" level=info msg="RemoveContainer for \"771810d437116d8ff467b425c1aca007a12449d9af4c865a2999a091a1dad1f7\" returns successfully" Jan 30 13:04:42.225217 kubelet[2614]: I0130 13:04:42.225180 2614 scope.go:117] "RemoveContainer" containerID="adf4f8fdf38801edc21194b46a8a2d51cdadf7d01761cdcac905d936cd69b39d" Jan 30 13:04:42.225420 containerd[1446]: time="2025-01-30T13:04:42.225361249Z" level=error msg="ContainerStatus for \"adf4f8fdf38801edc21194b46a8a2d51cdadf7d01761cdcac905d936cd69b39d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"adf4f8fdf38801edc21194b46a8a2d51cdadf7d01761cdcac905d936cd69b39d\": not found" Jan 30 13:04:42.225528 kubelet[2614]: E0130 13:04:42.225514 2614 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"adf4f8fdf38801edc21194b46a8a2d51cdadf7d01761cdcac905d936cd69b39d\": not found" containerID="adf4f8fdf38801edc21194b46a8a2d51cdadf7d01761cdcac905d936cd69b39d" Jan 30 13:04:42.225594 kubelet[2614]: I0130 13:04:42.225571 2614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"adf4f8fdf38801edc21194b46a8a2d51cdadf7d01761cdcac905d936cd69b39d"} err="failed to get container status \"adf4f8fdf38801edc21194b46a8a2d51cdadf7d01761cdcac905d936cd69b39d\": rpc error: code = NotFound desc = an error occurred when try to find container \"adf4f8fdf38801edc21194b46a8a2d51cdadf7d01761cdcac905d936cd69b39d\": not found" Jan 30 13:04:42.225594 kubelet[2614]: I0130 13:04:42.225594 2614 scope.go:117] "RemoveContainer" containerID="46146e487f7ccca4aee3467d7604b826b147dc90a1ae238103adbce2448214ca" Jan 30 13:04:42.225777 containerd[1446]: time="2025-01-30T13:04:42.225726330Z" level=error msg="ContainerStatus for \"46146e487f7ccca4aee3467d7604b826b147dc90a1ae238103adbce2448214ca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"46146e487f7ccca4aee3467d7604b826b147dc90a1ae238103adbce2448214ca\": not found" Jan 30 13:04:42.225852 kubelet[2614]: E0130 13:04:42.225836 2614 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"46146e487f7ccca4aee3467d7604b826b147dc90a1ae238103adbce2448214ca\": not found" containerID="46146e487f7ccca4aee3467d7604b826b147dc90a1ae238103adbce2448214ca" Jan 30 13:04:42.225896 kubelet[2614]: I0130 13:04:42.225857 2614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"46146e487f7ccca4aee3467d7604b826b147dc90a1ae238103adbce2448214ca"} err="failed to get container status \"46146e487f7ccca4aee3467d7604b826b147dc90a1ae238103adbce2448214ca\": rpc error: code = NotFound desc = an error occurred when try to find container \"46146e487f7ccca4aee3467d7604b826b147dc90a1ae238103adbce2448214ca\": not found" Jan 30 13:04:42.225923 kubelet[2614]: I0130 13:04:42.225895 2614 
scope.go:117] "RemoveContainer" containerID="a25b925dc71fd17157fade0c99efa7e4b07dea23c7da51e23ec5a9f3cce1cbc3" Jan 30 13:04:42.226041 containerd[1446]: time="2025-01-30T13:04:42.226012170Z" level=error msg="ContainerStatus for \"a25b925dc71fd17157fade0c99efa7e4b07dea23c7da51e23ec5a9f3cce1cbc3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a25b925dc71fd17157fade0c99efa7e4b07dea23c7da51e23ec5a9f3cce1cbc3\": not found" Jan 30 13:04:42.226128 kubelet[2614]: E0130 13:04:42.226114 2614 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a25b925dc71fd17157fade0c99efa7e4b07dea23c7da51e23ec5a9f3cce1cbc3\": not found" containerID="a25b925dc71fd17157fade0c99efa7e4b07dea23c7da51e23ec5a9f3cce1cbc3" Jan 30 13:04:42.226172 kubelet[2614]: I0130 13:04:42.226129 2614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a25b925dc71fd17157fade0c99efa7e4b07dea23c7da51e23ec5a9f3cce1cbc3"} err="failed to get container status \"a25b925dc71fd17157fade0c99efa7e4b07dea23c7da51e23ec5a9f3cce1cbc3\": rpc error: code = NotFound desc = an error occurred when try to find container \"a25b925dc71fd17157fade0c99efa7e4b07dea23c7da51e23ec5a9f3cce1cbc3\": not found" Jan 30 13:04:42.226172 kubelet[2614]: I0130 13:04:42.226141 2614 scope.go:117] "RemoveContainer" containerID="d9f0ffbd29a6e6bf0527be34298ed756d4a0dd70c117687387e4a38721abff22" Jan 30 13:04:42.226456 containerd[1446]: time="2025-01-30T13:04:42.226358851Z" level=error msg="ContainerStatus for \"d9f0ffbd29a6e6bf0527be34298ed756d4a0dd70c117687387e4a38721abff22\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d9f0ffbd29a6e6bf0527be34298ed756d4a0dd70c117687387e4a38721abff22\": not found" Jan 30 13:04:42.226673 kubelet[2614]: E0130 13:04:42.226653 2614 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d9f0ffbd29a6e6bf0527be34298ed756d4a0dd70c117687387e4a38721abff22\": not found" containerID="d9f0ffbd29a6e6bf0527be34298ed756d4a0dd70c117687387e4a38721abff22" Jan 30 13:04:42.226719 kubelet[2614]: I0130 13:04:42.226676 2614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d9f0ffbd29a6e6bf0527be34298ed756d4a0dd70c117687387e4a38721abff22"} err="failed to get container status \"d9f0ffbd29a6e6bf0527be34298ed756d4a0dd70c117687387e4a38721abff22\": rpc error: code = NotFound desc = an error occurred when try to find container \"d9f0ffbd29a6e6bf0527be34298ed756d4a0dd70c117687387e4a38721abff22\": not found" Jan 30 13:04:42.226719 kubelet[2614]: I0130 13:04:42.226689 2614 scope.go:117] "RemoveContainer" containerID="771810d437116d8ff467b425c1aca007a12449d9af4c865a2999a091a1dad1f7" Jan 30 13:04:42.226824 containerd[1446]: time="2025-01-30T13:04:42.226798332Z" level=error msg="ContainerStatus for \"771810d437116d8ff467b425c1aca007a12449d9af4c865a2999a091a1dad1f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"771810d437116d8ff467b425c1aca007a12449d9af4c865a2999a091a1dad1f7\": not found" Jan 30 13:04:42.226915 kubelet[2614]: E0130 13:04:42.226895 2614 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"771810d437116d8ff467b425c1aca007a12449d9af4c865a2999a091a1dad1f7\": not found" containerID="771810d437116d8ff467b425c1aca007a12449d9af4c865a2999a091a1dad1f7" Jan 30 13:04:42.226955 kubelet[2614]: I0130 13:04:42.226921 2614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"771810d437116d8ff467b425c1aca007a12449d9af4c865a2999a091a1dad1f7"} err="failed to get container status \"771810d437116d8ff467b425c1aca007a12449d9af4c865a2999a091a1dad1f7\": rpc error: code = NotFound desc = an error occurred when try to find container \"771810d437116d8ff467b425c1aca007a12449d9af4c865a2999a091a1dad1f7\": not found" Jan 30 13:04:42.246724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b07723a00c1454ed3aa5cbf1dca2b5aa3827a151ec882880c0e28190342a0ce-rootfs.mount: Deactivated successfully. Jan 30 13:04:42.246831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1e7a0a4f407cbf21d2565a6e9e750262b0d107a61e4ee0d8357dd5198048c7b-rootfs.mount: Deactivated successfully. Jan 30 13:04:42.246886 systemd[1]: var-lib-kubelet-pods-3e222a86\x2dac88\x2d4096\x2dac1e\x2d0eed30cf8a29-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl5b7m.mount: Deactivated successfully. Jan 30 13:04:42.246940 systemd[1]: var-lib-kubelet-pods-42faffb4\x2ddebe\x2d4df5\x2d9510\x2d5009476b4235-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djmpf5.mount: Deactivated successfully. Jan 30 13:04:42.246998 systemd[1]: var-lib-kubelet-pods-42faffb4\x2ddebe\x2d4df5\x2d9510\x2d5009476b4235-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 30 13:04:42.247051 systemd[1]: var-lib-kubelet-pods-42faffb4\x2ddebe\x2d4df5\x2d9510\x2d5009476b4235-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 30 13:04:42.915256 kubelet[2614]: I0130 13:04:42.915212 2614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e222a86-ac88-4096-ac1e-0eed30cf8a29" path="/var/lib/kubelet/pods/3e222a86-ac88-4096-ac1e-0eed30cf8a29/volumes" Jan 30 13:04:42.915669 kubelet[2614]: I0130 13:04:42.915639 2614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42faffb4-debe-4df5-9510-5009476b4235" path="/var/lib/kubelet/pods/42faffb4-debe-4df5-9510-5009476b4235/volumes" Jan 30 13:04:43.171386 sshd[4248]: Connection closed by 10.0.0.1 port 40194 Jan 30 13:04:43.172657 sshd-session[4246]: pam_unix(sshd:session): session closed for user core Jan 30 13:04:43.181041 systemd[1]: sshd@22-10.0.0.100:22-10.0.0.1:40194.service: Deactivated successfully. Jan 30 13:04:43.182655 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 13:04:43.184080 systemd-logind[1426]: Session 23 logged out. Waiting for processes to exit. Jan 30 13:04:43.200285 systemd[1]: Started sshd@23-10.0.0.100:22-10.0.0.1:39610.service - OpenSSH per-connection server daemon (10.0.0.1:39610). Jan 30 13:04:43.201399 systemd-logind[1426]: Removed session 23. Jan 30 13:04:43.246397 sshd[4406]: Accepted publickey for core from 10.0.0.1 port 39610 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:04:43.247038 sshd-session[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:04:43.251159 systemd-logind[1426]: New session 24 of user core. Jan 30 13:04:43.260582 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 30 13:04:44.481981 sshd[4408]: Connection closed by 10.0.0.1 port 39610 Jan 30 13:04:44.483441 sshd-session[4406]: pam_unix(sshd:session): session closed for user core Jan 30 13:04:44.492209 systemd[1]: sshd@23-10.0.0.100:22-10.0.0.1:39610.service: Deactivated successfully. Jan 30 13:04:44.497512 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 13:04:44.497701 systemd[1]: session-24.scope: Consumed 1.102s CPU time. Jan 30 13:04:44.498732 systemd-logind[1426]: Session 24 logged out. Waiting for processes to exit. Jan 30 13:04:44.508885 kubelet[2614]: I0130 13:04:44.508823 2614 topology_manager.go:215] "Topology Admit Handler" podUID="8f9dc4a8-f05d-430d-bc74-37007f99789b" podNamespace="kube-system" podName="cilium-zj2fz" Jan 30 13:04:44.514593 kubelet[2614]: E0130 13:04:44.509140 2614 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="42faffb4-debe-4df5-9510-5009476b4235" containerName="mount-cgroup" Jan 30 13:04:44.514593 kubelet[2614]: E0130 13:04:44.509157 2614 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="42faffb4-debe-4df5-9510-5009476b4235" containerName="apply-sysctl-overwrites" Jan 30 13:04:44.514593 kubelet[2614]: E0130 13:04:44.509165 2614 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="42faffb4-debe-4df5-9510-5009476b4235" containerName="cilium-agent" Jan 30 13:04:44.514593 kubelet[2614]: E0130 13:04:44.509172 2614 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e222a86-ac88-4096-ac1e-0eed30cf8a29" containerName="cilium-operator" Jan 30 13:04:44.514593 kubelet[2614]: E0130 13:04:44.509178 2614 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="42faffb4-debe-4df5-9510-5009476b4235" containerName="mount-bpf-fs" Jan 30 13:04:44.514593 kubelet[2614]: E0130 13:04:44.509185 2614 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="42faffb4-debe-4df5-9510-5009476b4235" containerName="clean-cilium-state" Jan 30 13:04:44.514593 kubelet[2614]: I0130 13:04:44.509314 2614 memory_manager.go:354] "RemoveStaleState removing state" podUID="42faffb4-debe-4df5-9510-5009476b4235" containerName="cilium-agent" Jan 30 13:04:44.514593 kubelet[2614]: I0130 13:04:44.509329 2614 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e222a86-ac88-4096-ac1e-0eed30cf8a29" containerName="cilium-operator" Jan 30 13:04:44.514593 kubelet[2614]: W0130 13:04:44.513596 2614 reflector.go:547] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 30 13:04:44.514593 kubelet[2614]: E0130 13:04:44.513634 2614 reflector.go:150] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 30 13:04:44.513867 systemd[1]: Started sshd@24-10.0.0.100:22-10.0.0.1:39624.service - OpenSSH per-connection server daemon (10.0.0.1:39624). 
Jan 30 13:04:44.514876 kubelet[2614]: W0130 13:04:44.513607 2614 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 30 13:04:44.514876 kubelet[2614]: E0130 13:04:44.513660 2614 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 30 13:04:44.516686 kubelet[2614]: W0130 13:04:44.515877 2614 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 30 13:04:44.516686 kubelet[2614]: E0130 13:04:44.515925 2614 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 30 13:04:44.516686 kubelet[2614]: W0130 13:04:44.516126 2614 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 30 13:04:44.516686 kubelet[2614]: E0130 13:04:44.516144 2614 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 30 13:04:44.521461 systemd-logind[1426]: Removed session 24. Jan 30 13:04:44.535762 systemd[1]: Created slice kubepods-burstable-pod8f9dc4a8_f05d_430d_bc74_37007f99789b.slice - libcontainer container kubepods-burstable-pod8f9dc4a8_f05d_430d_bc74_37007f99789b.slice. Jan 30 13:04:44.566187 sshd[4420]: Accepted publickey for core from 10.0.0.1 port 39624 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:04:44.568286 sshd-session[4420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:04:44.576509 systemd-logind[1426]: New session 25 of user core. Jan 30 13:04:44.583636 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 30 13:04:44.624582 kubelet[2614]: I0130 13:04:44.624520 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8f9dc4a8-f05d-430d-bc74-37007f99789b-cilium-cgroup\") pod \"cilium-zj2fz\" (UID: \"8f9dc4a8-f05d-430d-bc74-37007f99789b\") " pod="kube-system/cilium-zj2fz" Jan 30 13:04:44.624582 kubelet[2614]: I0130 13:04:44.624574 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f9dc4a8-f05d-430d-bc74-37007f99789b-xtables-lock\") pod \"cilium-zj2fz\" (UID: \"8f9dc4a8-f05d-430d-bc74-37007f99789b\") " pod="kube-system/cilium-zj2fz" Jan 30 13:04:44.624582 kubelet[2614]: I0130 13:04:44.624595 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f9dc4a8-f05d-430d-bc74-37007f99789b-cilium-config-path\") pod \"cilium-zj2fz\" (UID: \"8f9dc4a8-f05d-430d-bc74-37007f99789b\") " pod="kube-system/cilium-zj2fz" Jan 30 13:04:44.624775 kubelet[2614]: I0130 13:04:44.624612 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8f9dc4a8-f05d-430d-bc74-37007f99789b-host-proc-sys-net\") pod \"cilium-zj2fz\" (UID: \"8f9dc4a8-f05d-430d-bc74-37007f99789b\") " pod="kube-system/cilium-zj2fz" Jan 30 13:04:44.624775 kubelet[2614]: I0130 13:04:44.624630 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8f9dc4a8-f05d-430d-bc74-37007f99789b-hubble-tls\") pod \"cilium-zj2fz\" (UID: \"8f9dc4a8-f05d-430d-bc74-37007f99789b\") " pod="kube-system/cilium-zj2fz" Jan 30 13:04:44.624775 kubelet[2614]: I0130 13:04:44.624645 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8f9dc4a8-f05d-430d-bc74-37007f99789b-hostproc\") pod \"cilium-zj2fz\" (UID: \"8f9dc4a8-f05d-430d-bc74-37007f99789b\") " pod="kube-system/cilium-zj2fz" Jan 30 13:04:44.624775 kubelet[2614]: I0130 13:04:44.624661 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8f9dc4a8-f05d-430d-bc74-37007f99789b-cni-path\") pod \"cilium-zj2fz\" (UID: \"8f9dc4a8-f05d-430d-bc74-37007f99789b\") " pod="kube-system/cilium-zj2fz" Jan 30 13:04:44.624775 kubelet[2614]: I0130 13:04:44.624676 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f9dc4a8-f05d-430d-bc74-37007f99789b-lib-modules\") pod \"cilium-zj2fz\" (UID: \"8f9dc4a8-f05d-430d-bc74-37007f99789b\") " pod="kube-system/cilium-zj2fz" Jan 30 13:04:44.624775 kubelet[2614]: I0130 13:04:44.624693 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8f9dc4a8-f05d-430d-bc74-37007f99789b-cilium-ipsec-secrets\") pod \"cilium-zj2fz\" (UID: \"8f9dc4a8-f05d-430d-bc74-37007f99789b\") " pod="kube-system/cilium-zj2fz" Jan 30 13:04:44.624897 kubelet[2614]: I0130 13:04:44.624710 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/8f9dc4a8-f05d-430d-bc74-37007f99789b-host-proc-sys-kernel\") pod \"cilium-zj2fz\" (UID: \"8f9dc4a8-f05d-430d-bc74-37007f99789b\") " pod="kube-system/cilium-zj2fz" Jan 30 13:04:44.624897 kubelet[2614]: I0130 13:04:44.624725 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8f9dc4a8-f05d-430d-bc74-37007f99789b-bpf-maps\") pod \"cilium-zj2fz\" (UID: \"8f9dc4a8-f05d-430d-bc74-37007f99789b\") " pod="kube-system/cilium-zj2fz" Jan 30 13:04:44.624897 kubelet[2614]: I0130 13:04:44.624743 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jrhv\" (UniqueName: \"kubernetes.io/projected/8f9dc4a8-f05d-430d-bc74-37007f99789b-kube-api-access-8jrhv\") pod \"cilium-zj2fz\" (UID: \"8f9dc4a8-f05d-430d-bc74-37007f99789b\") " pod="kube-system/cilium-zj2fz" Jan 30 13:04:44.624897 kubelet[2614]: I0130 13:04:44.624759 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8f9dc4a8-f05d-430d-bc74-37007f99789b-cilium-run\") pod \"cilium-zj2fz\" (UID: \"8f9dc4a8-f05d-430d-bc74-37007f99789b\") " pod="kube-system/cilium-zj2fz" Jan 30 13:04:44.624897 kubelet[2614]: I0130 13:04:44.624776 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f9dc4a8-f05d-430d-bc74-37007f99789b-etc-cni-netd\") pod \"cilium-zj2fz\" (UID: \"8f9dc4a8-f05d-430d-bc74-37007f99789b\") " pod="kube-system/cilium-zj2fz" Jan 30 13:04:44.624897 kubelet[2614]: I0130 13:04:44.624791 2614 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8f9dc4a8-f05d-430d-bc74-37007f99789b-clustermesh-secrets\") pod \"cilium-zj2fz\" (UID: \"8f9dc4a8-f05d-430d-bc74-37007f99789b\") " pod="kube-system/cilium-zj2fz" Jan 30 13:04:44.639707 sshd[4422]: Connection closed by 10.0.0.1 port 39624 Jan 30 13:04:44.640230 sshd-session[4420]: pam_unix(sshd:session): session closed for user core Jan 30 13:04:44.650464 systemd[1]: sshd@24-10.0.0.100:22-10.0.0.1:39624.service: Deactivated successfully. Jan 30 13:04:44.652733 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 13:04:44.654248 systemd-logind[1426]: Session 25 logged out. Waiting for processes to exit. Jan 30 13:04:44.661660 systemd[1]: Started sshd@25-10.0.0.100:22-10.0.0.1:39640.service - OpenSSH per-connection server daemon (10.0.0.1:39640). Jan 30 13:04:44.663224 systemd-logind[1426]: Removed session 25. Jan 30 13:04:44.702158 sshd[4430]: Accepted publickey for core from 10.0.0.1 port 39640 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:04:44.703606 sshd-session[4430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:04:44.708173 systemd-logind[1426]: New session 26 of user core. Jan 30 13:04:44.719582 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 30 13:04:45.729789 kubelet[2614]: E0130 13:04:45.729728 2614 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jan 30 13:04:45.730131 kubelet[2614]: E0130 13:04:45.729729 2614 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jan 30 13:04:45.730131 kubelet[2614]: E0130 13:04:45.729837 2614 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-zj2fz: failed to sync secret cache: timed out waiting for the condition Jan 30 13:04:45.730131 kubelet[2614]: E0130 13:04:45.729840 2614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f9dc4a8-f05d-430d-bc74-37007f99789b-clustermesh-secrets podName:8f9dc4a8-f05d-430d-bc74-37007f99789b nodeName:}" failed. No retries permitted until 2025-01-30 13:04:46.229815245 +0000 UTC m=+79.411703766 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/8f9dc4a8-f05d-430d-bc74-37007f99789b-clustermesh-secrets") pod "cilium-zj2fz" (UID: "8f9dc4a8-f05d-430d-bc74-37007f99789b") : failed to sync secret cache: timed out waiting for the condition Jan 30 13:04:45.730131 kubelet[2614]: E0130 13:04:45.729896 2614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8f9dc4a8-f05d-430d-bc74-37007f99789b-hubble-tls podName:8f9dc4a8-f05d-430d-bc74-37007f99789b nodeName:}" failed. No retries permitted until 2025-01-30 13:04:46.229879685 +0000 UTC m=+79.411768206 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/8f9dc4a8-f05d-430d-bc74-37007f99789b-hubble-tls") pod "cilium-zj2fz" (UID: "8f9dc4a8-f05d-430d-bc74-37007f99789b") : failed to sync secret cache: timed out waiting for the condition Jan 30 13:04:46.343566 kubelet[2614]: E0130 13:04:46.343521 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:46.344107 containerd[1446]: time="2025-01-30T13:04:46.344062086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zj2fz,Uid:8f9dc4a8-f05d-430d-bc74-37007f99789b,Namespace:kube-system,Attempt:0,}" Jan 30 13:04:46.366740 containerd[1446]: time="2025-01-30T13:04:46.366623195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:04:46.366740 containerd[1446]: time="2025-01-30T13:04:46.366686196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:04:46.366740 containerd[1446]: time="2025-01-30T13:04:46.366702356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:04:46.366917 containerd[1446]: time="2025-01-30T13:04:46.366786636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:04:46.390599 systemd[1]: Started cri-containerd-7067c67966f205e52834bf8412ae8e71b07b9a336b1520d530f6b187fdf920d9.scope - libcontainer container 7067c67966f205e52834bf8412ae8e71b07b9a336b1520d530f6b187fdf920d9. 
Jan 30 13:04:46.414254 containerd[1446]: time="2025-01-30T13:04:46.414191657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zj2fz,Uid:8f9dc4a8-f05d-430d-bc74-37007f99789b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7067c67966f205e52834bf8412ae8e71b07b9a336b1520d530f6b187fdf920d9\"" Jan 30 13:04:46.415059 kubelet[2614]: E0130 13:04:46.415032 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:46.418039 containerd[1446]: time="2025-01-30T13:04:46.417349421Z" level=info msg="CreateContainer within sandbox \"7067c67966f205e52834bf8412ae8e71b07b9a336b1520d530f6b187fdf920d9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:04:46.430676 containerd[1446]: time="2025-01-30T13:04:46.430613278Z" level=info msg="CreateContainer within sandbox \"7067c67966f205e52834bf8412ae8e71b07b9a336b1520d530f6b187fdf920d9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"94f633100b68999aeb052434f05aa83f734fc7c4c75f6a8f243d7d070daae939\"" Jan 30 13:04:46.431228 containerd[1446]: time="2025-01-30T13:04:46.431202319Z" level=info msg="StartContainer for \"94f633100b68999aeb052434f05aa83f734fc7c4c75f6a8f243d7d070daae939\"" Jan 30 13:04:46.473592 systemd[1]: Started cri-containerd-94f633100b68999aeb052434f05aa83f734fc7c4c75f6a8f243d7d070daae939.scope - libcontainer container 94f633100b68999aeb052434f05aa83f734fc7c4c75f6a8f243d7d070daae939. Jan 30 13:04:46.500870 containerd[1446]: time="2025-01-30T13:04:46.500812209Z" level=info msg="StartContainer for \"94f633100b68999aeb052434f05aa83f734fc7c4c75f6a8f243d7d070daae939\" returns successfully" Jan 30 13:04:46.517611 systemd[1]: cri-containerd-94f633100b68999aeb052434f05aa83f734fc7c4c75f6a8f243d7d070daae939.scope: Deactivated successfully. 
Jan 30 13:04:46.546111 containerd[1446]: time="2025-01-30T13:04:46.546037907Z" level=info msg="shim disconnected" id=94f633100b68999aeb052434f05aa83f734fc7c4c75f6a8f243d7d070daae939 namespace=k8s.io Jan 30 13:04:46.546111 containerd[1446]: time="2025-01-30T13:04:46.546096187Z" level=warning msg="cleaning up after shim disconnected" id=94f633100b68999aeb052434f05aa83f734fc7c4c75f6a8f243d7d070daae939 namespace=k8s.io Jan 30 13:04:46.546111 containerd[1446]: time="2025-01-30T13:04:46.546105787Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:04:46.991601 kubelet[2614]: E0130 13:04:46.991548 2614 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 13:04:47.192722 kubelet[2614]: E0130 13:04:47.192163 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:47.196813 containerd[1446]: time="2025-01-30T13:04:47.196772260Z" level=info msg="CreateContainer within sandbox \"7067c67966f205e52834bf8412ae8e71b07b9a336b1520d530f6b187fdf920d9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:04:47.209553 containerd[1446]: time="2025-01-30T13:04:47.209310635Z" level=info msg="CreateContainer within sandbox \"7067c67966f205e52834bf8412ae8e71b07b9a336b1520d530f6b187fdf920d9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"20e3c35c40c531e52fd5f085c4fe72c10571635f3aeee0f810af1d50d3a90902\"" Jan 30 13:04:47.210737 containerd[1446]: time="2025-01-30T13:04:47.210708357Z" level=info msg="StartContainer for \"20e3c35c40c531e52fd5f085c4fe72c10571635f3aeee0f810af1d50d3a90902\"" Jan 30 13:04:47.236566 systemd[1]: Started cri-containerd-20e3c35c40c531e52fd5f085c4fe72c10571635f3aeee0f810af1d50d3a90902.scope - libcontainer container 20e3c35c40c531e52fd5f085c4fe72c10571635f3aeee0f810af1d50d3a90902. Jan 30 13:04:47.260304 containerd[1446]: time="2025-01-30T13:04:47.259818419Z" level=info msg="StartContainer for \"20e3c35c40c531e52fd5f085c4fe72c10571635f3aeee0f810af1d50d3a90902\" returns successfully" Jan 30 13:04:47.266877 systemd[1]: cri-containerd-20e3c35c40c531e52fd5f085c4fe72c10571635f3aeee0f810af1d50d3a90902.scope: Deactivated successfully. Jan 30 13:04:47.296233 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20e3c35c40c531e52fd5f085c4fe72c10571635f3aeee0f810af1d50d3a90902-rootfs.mount: Deactivated successfully. 
Jan 30 13:04:47.316974 containerd[1446]: time="2025-01-30T13:04:47.316905410Z" level=info msg="shim disconnected" id=20e3c35c40c531e52fd5f085c4fe72c10571635f3aeee0f810af1d50d3a90902 namespace=k8s.io Jan 30 13:04:47.317243 containerd[1446]: time="2025-01-30T13:04:47.317023571Z" level=warning msg="cleaning up after shim disconnected" id=20e3c35c40c531e52fd5f085c4fe72c10571635f3aeee0f810af1d50d3a90902 namespace=k8s.io Jan 30 13:04:47.317243 containerd[1446]: time="2025-01-30T13:04:47.317034611Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:04:48.195882 kubelet[2614]: E0130 13:04:48.195853 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:48.200277 containerd[1446]: time="2025-01-30T13:04:48.200223794Z" level=info msg="CreateContainer within sandbox \"7067c67966f205e52834bf8412ae8e71b07b9a336b1520d530f6b187fdf920d9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:04:48.217206 containerd[1446]: time="2025-01-30T13:04:48.216984454Z" level=info msg="CreateContainer within sandbox \"7067c67966f205e52834bf8412ae8e71b07b9a336b1520d530f6b187fdf920d9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c16c11b93eff4512df0139fa8a267136125dc92fb8e1e90c3155c9018e2ab065\"" Jan 30 13:04:48.219803 containerd[1446]: time="2025-01-30T13:04:48.219335697Z" level=info msg="StartContainer for \"c16c11b93eff4512df0139fa8a267136125dc92fb8e1e90c3155c9018e2ab065\"" Jan 30 13:04:48.258636 systemd[1]: Started cri-containerd-c16c11b93eff4512df0139fa8a267136125dc92fb8e1e90c3155c9018e2ab065.scope - libcontainer container c16c11b93eff4512df0139fa8a267136125dc92fb8e1e90c3155c9018e2ab065. Jan 30 13:04:48.284918 containerd[1446]: time="2025-01-30T13:04:48.284867297Z" level=info msg="StartContainer for \"c16c11b93eff4512df0139fa8a267136125dc92fb8e1e90c3155c9018e2ab065\" returns successfully" Jan 30 13:04:48.287587 systemd[1]: cri-containerd-c16c11b93eff4512df0139fa8a267136125dc92fb8e1e90c3155c9018e2ab065.scope: Deactivated successfully. Jan 30 13:04:48.306840 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c16c11b93eff4512df0139fa8a267136125dc92fb8e1e90c3155c9018e2ab065-rootfs.mount: Deactivated successfully. 
Jan 30 13:04:48.315005 containerd[1446]: time="2025-01-30T13:04:48.314937974Z" level=info msg="shim disconnected" id=c16c11b93eff4512df0139fa8a267136125dc92fb8e1e90c3155c9018e2ab065 namespace=k8s.io
Jan 30 13:04:48.315005 containerd[1446]: time="2025-01-30T13:04:48.314998614Z" level=warning msg="cleaning up after shim disconnected" id=c16c11b93eff4512df0139fa8a267136125dc92fb8e1e90c3155c9018e2ab065 namespace=k8s.io
Jan 30 13:04:48.315005 containerd[1446]: time="2025-01-30T13:04:48.315008774Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:04:48.914442 kubelet[2614]: E0130 13:04:48.914040 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:04:49.140286 kubelet[2614]: I0130 13:04:49.140217 2614 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T13:04:49Z","lastTransitionTime":"2025-01-30T13:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 30 13:04:49.200946 kubelet[2614]: E0130 13:04:49.200674 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:04:49.203267 containerd[1446]: time="2025-01-30T13:04:49.203229535Z" level=info msg="CreateContainer within sandbox \"7067c67966f205e52834bf8412ae8e71b07b9a336b1520d530f6b187fdf920d9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 30 13:04:49.246213 containerd[1446]: time="2025-01-30T13:04:49.246158746Z" level=info msg="CreateContainer within sandbox \"7067c67966f205e52834bf8412ae8e71b07b9a336b1520d530f6b187fdf920d9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bc3e94c5f1e9f9a7aee620add34445e30617a15768b5175c007c3edd8f59379e\""
Jan 30 13:04:49.246764 containerd[1446]: time="2025-01-30T13:04:49.246678666Z" level=info msg="StartContainer for \"bc3e94c5f1e9f9a7aee620add34445e30617a15768b5175c007c3edd8f59379e\""
Jan 30 13:04:49.280618 systemd[1]: Started cri-containerd-bc3e94c5f1e9f9a7aee620add34445e30617a15768b5175c007c3edd8f59379e.scope - libcontainer container bc3e94c5f1e9f9a7aee620add34445e30617a15768b5175c007c3edd8f59379e.
Jan 30 13:04:49.301322 systemd[1]: cri-containerd-bc3e94c5f1e9f9a7aee620add34445e30617a15768b5175c007c3edd8f59379e.scope: Deactivated successfully.
Jan 30 13:04:49.304436 containerd[1446]: time="2025-01-30T13:04:49.304399735Z" level=info msg="StartContainer for \"bc3e94c5f1e9f9a7aee620add34445e30617a15768b5175c007c3edd8f59379e\" returns successfully"
Jan 30 13:04:49.320615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc3e94c5f1e9f9a7aee620add34445e30617a15768b5175c007c3edd8f59379e-rootfs.mount: Deactivated successfully.
Jan 30 13:04:49.324893 containerd[1446]: time="2025-01-30T13:04:49.324811719Z" level=info msg="shim disconnected" id=bc3e94c5f1e9f9a7aee620add34445e30617a15768b5175c007c3edd8f59379e namespace=k8s.io
Jan 30 13:04:49.324893 containerd[1446]: time="2025-01-30T13:04:49.324880600Z" level=warning msg="cleaning up after shim disconnected" id=bc3e94c5f1e9f9a7aee620add34445e30617a15768b5175c007c3edd8f59379e namespace=k8s.io
Jan 30 13:04:49.324893 containerd[1446]: time="2025-01-30T13:04:49.324890200Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:04:49.329384 containerd[1446]: time="2025-01-30T13:04:49.308494100Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f9dc4a8_f05d_430d_bc74_37007f99789b.slice/cri-containerd-bc3e94c5f1e9f9a7aee620add34445e30617a15768b5175c007c3edd8f59379e.scope/memory.events\": no such file or directory"
Jan 30 13:04:49.913015 kubelet[2614]: E0130 13:04:49.912726 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:04:49.913015 kubelet[2614]: E0130 13:04:49.912918 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:04:50.205507 kubelet[2614]: E0130 13:04:50.205322 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:04:50.209171 containerd[1446]: time="2025-01-30T13:04:50.208959527Z" level=info msg="CreateContainer within sandbox \"7067c67966f205e52834bf8412ae8e71b07b9a336b1520d530f6b187fdf920d9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 13:04:50.231199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2830332496.mount: Deactivated successfully.
Jan 30 13:04:50.238777 containerd[1446]: time="2025-01-30T13:04:50.238720602Z" level=info msg="CreateContainer within sandbox \"7067c67966f205e52834bf8412ae8e71b07b9a336b1520d530f6b187fdf920d9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"26d1c2ec55f6ad954aef4f1417e880e8ce6e205825664dffc0477154c6c324ba\""
Jan 30 13:04:50.240231 containerd[1446]: time="2025-01-30T13:04:50.240194243Z" level=info msg="StartContainer for \"26d1c2ec55f6ad954aef4f1417e880e8ce6e205825664dffc0477154c6c324ba\""
Jan 30 13:04:50.274823 systemd[1]: Started cri-containerd-26d1c2ec55f6ad954aef4f1417e880e8ce6e205825664dffc0477154c6c324ba.scope - libcontainer container 26d1c2ec55f6ad954aef4f1417e880e8ce6e205825664dffc0477154c6c324ba.
Jan 30 13:04:50.311394 containerd[1446]: time="2025-01-30T13:04:50.311320206Z" level=info msg="StartContainer for \"26d1c2ec55f6ad954aef4f1417e880e8ce6e205825664dffc0477154c6c324ba\" returns successfully"
Jan 30 13:04:50.330229 systemd[1]: run-containerd-runc-k8s.io-26d1c2ec55f6ad954aef4f1417e880e8ce6e205825664dffc0477154c6c324ba-runc.MyjR6Z.mount: Deactivated successfully.
Jan 30 13:04:50.595408 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 30 13:04:51.210149 kubelet[2614]: E0130 13:04:51.210075 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:04:51.227247 kubelet[2614]: I0130 13:04:51.226997 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zj2fz" podStartSLOduration=7.226980383 podStartE2EDuration="7.226980383s" podCreationTimestamp="2025-01-30 13:04:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:04:51.225108901 +0000 UTC m=+84.406997422" watchObservedRunningTime="2025-01-30 13:04:51.226980383 +0000 UTC m=+84.408868904"
Jan 30 13:04:52.344795 kubelet[2614]: E0130 13:04:52.344584 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:04:53.585750 systemd-networkd[1392]: lxc_health: Link UP
Jan 30 13:04:53.590408 systemd-networkd[1392]: lxc_health: Gained carrier
Jan 30 13:04:54.346188 kubelet[2614]: E0130 13:04:54.346157 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:04:54.620554 systemd-networkd[1392]: lxc_health: Gained IPv6LL
Jan 30 13:04:55.223957 kubelet[2614]: E0130 13:04:55.223718 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:04:56.225314 kubelet[2614]: E0130 13:04:56.225085 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:04:59.862818 sshd[4432]: Connection closed by 10.0.0.1 port 39640
Jan 30 13:04:59.866026 sshd-session[4430]: pam_unix(sshd:session): session closed for user core
Jan 30 13:04:59.875117 systemd[1]: sshd@25-10.0.0.100:22-10.0.0.1:39640.service: Deactivated successfully.
Jan 30 13:04:59.880252 systemd[1]: session-26.scope: Deactivated successfully.
Jan 30 13:04:59.883383 systemd-logind[1426]: Session 26 logged out. Waiting for processes to exit.
Jan 30 13:04:59.884842 systemd-logind[1426]: Removed session 26.
Jan 30 13:05:00.914728 kubelet[2614]: E0130 13:05:00.914669 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"