Jan 13 21:22:49.865762 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 13 21:22:49.865782 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:22:49.865800 kernel: BIOS-provided physical RAM map:
Jan 13 21:22:49.865807 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 21:22:49.865813 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 21:22:49.865819 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 21:22:49.865826 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 13 21:22:49.865832 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 13 21:22:49.865838 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 13 21:22:49.865846 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 13 21:22:49.865853 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 21:22:49.865859 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 21:22:49.865865 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 13 21:22:49.865871 kernel: NX (Execute Disable) protection: active
Jan 13 21:22:49.865879 kernel: APIC: Static calls initialized
Jan 13 21:22:49.865888 kernel: SMBIOS 2.8 present.
Jan 13 21:22:49.865894 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 13 21:22:49.865901 kernel: Hypervisor detected: KVM
Jan 13 21:22:49.865908 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 21:22:49.865914 kernel: kvm-clock: using sched offset of 2158123102 cycles
Jan 13 21:22:49.865921 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 21:22:49.865928 kernel: tsc: Detected 2794.748 MHz processor
Jan 13 21:22:49.865935 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 21:22:49.865942 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 21:22:49.865949 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 13 21:22:49.865958 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 21:22:49.865965 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 21:22:49.865972 kernel: Using GB pages for direct mapping
Jan 13 21:22:49.865979 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:22:49.865985 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 13 21:22:49.865992 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:22:49.865999 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:22:49.866006 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:22:49.866015 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 13 21:22:49.866022 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:22:49.866029 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:22:49.866035 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:22:49.866042 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:22:49.866049 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 13 21:22:49.866056 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 13 21:22:49.866066 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 13 21:22:49.866076 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 13 21:22:49.866083 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 13 21:22:49.866090 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 13 21:22:49.866097 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 13 21:22:49.866104 kernel: No NUMA configuration found
Jan 13 21:22:49.866111 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 13 21:22:49.866118 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 13 21:22:49.866127 kernel: Zone ranges:
Jan 13 21:22:49.866135 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 21:22:49.866142 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 13 21:22:49.866149 kernel: Normal empty
Jan 13 21:22:49.866156 kernel: Movable zone start for each node
Jan 13 21:22:49.866163 kernel: Early memory node ranges
Jan 13 21:22:49.866170 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 21:22:49.866177 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 13 21:22:49.866184 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 13 21:22:49.866193 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:22:49.866201 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 21:22:49.866208 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 13 21:22:49.866215 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 21:22:49.866222 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 21:22:49.866229 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 21:22:49.866236 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 21:22:49.866243 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 21:22:49.866250 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 21:22:49.866260 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 21:22:49.866267 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 21:22:49.866274 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 21:22:49.866281 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 21:22:49.866288 kernel: TSC deadline timer available
Jan 13 21:22:49.866295 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 13 21:22:49.866302 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 21:22:49.866309 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 13 21:22:49.866316 kernel: kvm-guest: setup PV sched yield
Jan 13 21:22:49.866323 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 13 21:22:49.866333 kernel: Booting paravirtualized kernel on KVM
Jan 13 21:22:49.866341 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 21:22:49.866348 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 13 21:22:49.866355 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 13 21:22:49.866362 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 13 21:22:49.866369 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 13 21:22:49.866376 kernel: kvm-guest: PV spinlocks enabled
Jan 13 21:22:49.866383 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 21:22:49.866392 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:22:49.866402 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:22:49.866409 kernel: random: crng init done
Jan 13 21:22:49.866416 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 21:22:49.866424 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:22:49.866431 kernel: Fallback order for Node 0: 0
Jan 13 21:22:49.866438 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 13 21:22:49.866445 kernel: Policy zone: DMA32
Jan 13 21:22:49.866452 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:22:49.866462 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved)
Jan 13 21:22:49.866469 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 21:22:49.866476 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 13 21:22:49.866483 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 21:22:49.866490 kernel: Dynamic Preempt: voluntary
Jan 13 21:22:49.866497 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:22:49.866505 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:22:49.866512 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 21:22:49.866520 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:22:49.866541 kernel: Rude variant of Tasks RCU enabled.
Jan 13 21:22:49.866548 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:22:49.866567 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:22:49.866574 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 21:22:49.866582 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 13 21:22:49.866589 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:22:49.866596 kernel: Console: colour VGA+ 80x25
Jan 13 21:22:49.866603 kernel: printk: console [ttyS0] enabled
Jan 13 21:22:49.866610 kernel: ACPI: Core revision 20230628
Jan 13 21:22:49.866620 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 13 21:22:49.866627 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 21:22:49.866634 kernel: x2apic enabled
Jan 13 21:22:49.866641 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 21:22:49.866648 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 13 21:22:49.866656 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 13 21:22:49.866663 kernel: kvm-guest: setup PV IPIs
Jan 13 21:22:49.866680 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 21:22:49.866687 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 13 21:22:49.866695 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 13 21:22:49.866702 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 13 21:22:49.866710 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 13 21:22:49.866719 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 13 21:22:49.866727 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 21:22:49.866734 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 21:22:49.866742 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 21:22:49.866752 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 21:22:49.866760 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 13 21:22:49.866767 kernel: RETBleed: Mitigation: untrained return thunk
Jan 13 21:22:49.866775 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 21:22:49.866782 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 21:22:49.866790 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 13 21:22:49.866805 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 13 21:22:49.866813 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 13 21:22:49.866821 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 21:22:49.866831 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 21:22:49.866838 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 21:22:49.866846 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 21:22:49.866854 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 13 21:22:49.866861 kernel: Freeing SMP alternatives memory: 32K
Jan 13 21:22:49.866869 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:22:49.866876 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:22:49.866883 kernel: landlock: Up and running.
Jan 13 21:22:49.866891 kernel: SELinux: Initializing.
Jan 13 21:22:49.866901 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:22:49.866908 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:22:49.866916 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 13 21:22:49.866924 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:22:49.866931 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:22:49.866939 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:22:49.866947 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 13 21:22:49.866954 kernel: ... version: 0
Jan 13 21:22:49.866964 kernel: ... bit width: 48
Jan 13 21:22:49.866972 kernel: ... generic registers: 6
Jan 13 21:22:49.866979 kernel: ... value mask: 0000ffffffffffff
Jan 13 21:22:49.866987 kernel: ... max period: 00007fffffffffff
Jan 13 21:22:49.866994 kernel: ... fixed-purpose events: 0
Jan 13 21:22:49.867002 kernel: ... event mask: 000000000000003f
Jan 13 21:22:49.867009 kernel: signal: max sigframe size: 1776
Jan 13 21:22:49.867016 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:22:49.867024 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:22:49.867032 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:22:49.867042 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 21:22:49.867049 kernel: .... node #0, CPUs: #1 #2 #3
Jan 13 21:22:49.867056 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 21:22:49.867064 kernel: smpboot: Max logical packages: 1
Jan 13 21:22:49.867071 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 13 21:22:49.867079 kernel: devtmpfs: initialized
Jan 13 21:22:49.867086 kernel: x86/mm: Memory block size: 128MB
Jan 13 21:22:49.867094 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:22:49.867101 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 21:22:49.867111 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:22:49.867118 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:22:49.867126 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:22:49.867134 kernel: audit: type=2000 audit(1736803369.880:1): state=initialized audit_enabled=0 res=1
Jan 13 21:22:49.867142 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:22:49.867151 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 21:22:49.867159 kernel: cpuidle: using governor menu
Jan 13 21:22:49.867169 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:22:49.867176 kernel: dca service started, version 1.12.1
Jan 13 21:22:49.867186 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 13 21:22:49.867194 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 13 21:22:49.867201 kernel: PCI: Using configuration type 1 for base access
Jan 13 21:22:49.867209 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 21:22:49.867216 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:22:49.867224 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:22:49.867231 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:22:49.867239 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:22:49.867246 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:22:49.867256 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:22:49.867264 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:22:49.867271 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:22:49.867279 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 21:22:49.867286 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 21:22:49.867293 kernel: ACPI: Interpreter enabled
Jan 13 21:22:49.867301 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 21:22:49.867308 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 21:22:49.867316 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 21:22:49.867326 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 21:22:49.867333 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 13 21:22:49.867341 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:22:49.867517 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:22:49.867681 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 13 21:22:49.867811 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 13 21:22:49.867822 kernel: PCI host bridge to bus 0000:00
Jan 13 21:22:49.867952 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 21:22:49.868063 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 21:22:49.868173 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 21:22:49.868281 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 13 21:22:49.868389 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 13 21:22:49.868498 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 13 21:22:49.868634 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:22:49.868781 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 13 21:22:49.868922 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 13 21:22:49.869043 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 13 21:22:49.869161 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 13 21:22:49.869280 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 13 21:22:49.869398 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 21:22:49.869547 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 21:22:49.869671 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 13 21:22:49.869790 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 13 21:22:49.869919 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 13 21:22:49.870047 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 13 21:22:49.870167 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 13 21:22:49.870286 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 13 21:22:49.870410 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 13 21:22:49.870592 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 13 21:22:49.870775 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 13 21:22:49.870905 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 13 21:22:49.871026 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 13 21:22:49.871144 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 13 21:22:49.871271 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 13 21:22:49.871395 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 13 21:22:49.871521 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 13 21:22:49.871670 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 13 21:22:49.871789 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 13 21:22:49.871932 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 13 21:22:49.872138 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 13 21:22:49.872149 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 21:22:49.872160 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 21:22:49.872168 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 21:22:49.872176 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 21:22:49.872183 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 13 21:22:49.872191 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 13 21:22:49.872198 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 13 21:22:49.872205 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 13 21:22:49.872213 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 13 21:22:49.872220 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 13 21:22:49.872230 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 13 21:22:49.872238 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 13 21:22:49.872245 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 13 21:22:49.872253 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 13 21:22:49.872260 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 13 21:22:49.872268 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 13 21:22:49.872275 kernel: iommu: Default domain type: Translated
Jan 13 21:22:49.872283 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 21:22:49.872290 kernel: PCI: Using ACPI for IRQ routing
Jan 13 21:22:49.872300 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 21:22:49.872307 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 21:22:49.872324 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 13 21:22:49.872456 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 13 21:22:49.872617 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 13 21:22:49.872735 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 21:22:49.872745 kernel: vgaarb: loaded
Jan 13 21:22:49.872753 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 13 21:22:49.872764 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 13 21:22:49.872772 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 21:22:49.872779 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:22:49.872787 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:22:49.872801 kernel: pnp: PnP ACPI init
Jan 13 21:22:49.872939 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 13 21:22:49.872950 kernel: pnp: PnP ACPI: found 6 devices
Jan 13 21:22:49.872958 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 21:22:49.872969 kernel: NET: Registered PF_INET protocol family
Jan 13 21:22:49.872977 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:22:49.872985 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 21:22:49.872992 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:22:49.873000 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:22:49.873007 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 21:22:49.873015 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 21:22:49.873023 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:22:49.873030 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:22:49.873040 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:22:49.873048 kernel: NET: Registered PF_XDP protocol family
Jan 13 21:22:49.873157 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 21:22:49.873264 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 21:22:49.873371 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 21:22:49.873478 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 13 21:22:49.873600 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 13 21:22:49.873707 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 13 21:22:49.873721 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:22:49.873729 kernel: Initialise system trusted keyrings
Jan 13 21:22:49.873736 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 21:22:49.873754 kernel: Key type asymmetric registered
Jan 13 21:22:49.873777 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:22:49.873792 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 21:22:49.873815 kernel: io scheduler mq-deadline registered
Jan 13 21:22:49.873824 kernel: io scheduler kyber registered
Jan 13 21:22:49.873832 kernel: io scheduler bfq registered
Jan 13 21:22:49.873842 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 21:22:49.873850 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 13 21:22:49.873858 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 13 21:22:49.873865 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 13 21:22:49.873873 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:22:49.873881 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 21:22:49.873888 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 21:22:49.873896 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 21:22:49.873903 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 21:22:49.874035 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 13 21:22:49.874046 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 21:22:49.874162 kernel: rtc_cmos 00:04: registered as rtc0
Jan 13 21:22:49.874277 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T21:22:49 UTC (1736803369)
Jan 13 21:22:49.874388 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 13 21:22:49.874398 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 13 21:22:49.874405 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:22:49.874413 kernel: Segment Routing with IPv6
Jan 13 21:22:49.874424 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:22:49.874432 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:22:49.874439 kernel: Key type dns_resolver registered
Jan 13 21:22:49.874447 kernel: IPI shorthand broadcast: enabled
Jan 13 21:22:49.874454 kernel: sched_clock: Marking stable (587002897, 125193836)->(727146139, -14949406)
Jan 13 21:22:49.874462 kernel: registered taskstats version 1
Jan 13 21:22:49.874470 kernel: Loading compiled-in X.509 certificates
Jan 13 21:22:49.874477 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447'
Jan 13 21:22:49.874485 kernel: Key type .fscrypt registered
Jan 13 21:22:49.874495 kernel: Key type fscrypt-provisioning registered
Jan 13 21:22:49.874502 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:22:49.874510 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:22:49.874517 kernel: ima: No architecture policies found
Jan 13 21:22:49.874588 kernel: clk: Disabling unused clocks
Jan 13 21:22:49.874597 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 13 21:22:49.874604 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 21:22:49.874612 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 13 21:22:49.874619 kernel: Run /init as init process
Jan 13 21:22:49.874630 kernel: with arguments:
Jan 13 21:22:49.874637 kernel: /init
Jan 13 21:22:49.874645 kernel: with environment:
Jan 13 21:22:49.874652 kernel: HOME=/
Jan 13 21:22:49.874659 kernel: TERM=linux
Jan 13 21:22:49.874666 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:22:49.874676 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:22:49.874686 systemd[1]: Detected virtualization kvm.
Jan 13 21:22:49.874697 systemd[1]: Detected architecture x86-64.
Jan 13 21:22:49.874705 systemd[1]: Running in initrd.
Jan 13 21:22:49.874713 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:22:49.874720 systemd[1]: Hostname set to .
Jan 13 21:22:49.874729 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:22:49.874737 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:22:49.874745 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:22:49.874753 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:22:49.874764 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:22:49.874784 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:22:49.874802 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:22:49.874811 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:22:49.874829 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:22:49.874849 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:22:49.874859 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:22:49.874875 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:22:49.874883 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:22:49.874891 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:22:49.874900 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:22:49.874908 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:22:49.874916 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:22:49.874927 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:22:49.874940 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:22:49.874948 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:22:49.874957 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:22:49.874965 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:22:49.874976 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:22:49.874984 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:22:49.874992 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:22:49.875003 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:22:49.875012 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:22:49.875020 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:22:49.875028 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:22:49.875037 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:22:49.875045 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:22:49.875053 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:22:49.875062 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:22:49.875070 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:22:49.875101 systemd-journald[193]: Collecting audit messages is disabled.
Jan 13 21:22:49.875122 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:22:49.875133 systemd-journald[193]: Journal started
Jan 13 21:22:49.875154 systemd-journald[193]: Runtime Journal (/run/log/journal/932983e8501c4c89b6819f933d7e2ace) is 6.0M, max 48.4M, 42.3M free.
Jan 13 21:22:49.872378 systemd-modules-load[194]: Inserted module 'overlay'
Jan 13 21:22:49.903712 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:22:49.902425 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:22:49.908544 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:22:49.911086 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jan 13 21:22:49.912150 kernel: Bridge firewalling registered
Jan 13 21:22:49.914779 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:22:49.918325 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:22:49.921326 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:22:49.924145 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:22:49.930743 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:22:49.933442 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:22:49.936239 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:22:49.939130 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:22:49.942772 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:22:49.946299 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:22:49.948691 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:22:49.949331 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:22:49.965023 dracut-cmdline[226]: dracut-dracut-053
Jan 13 21:22:49.967728 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:22:49.996336 systemd-resolved[227]: Positive Trust Anchors:
Jan 13 21:22:49.996355 systemd-resolved[227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:22:49.996387 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:22:50.007323 systemd-resolved[227]: Defaulting to hostname 'linux'.
Jan 13 21:22:50.009319 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:22:50.011460 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:22:50.053553 kernel: SCSI subsystem initialized
Jan 13 21:22:50.062582 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:22:50.073548 kernel: iscsi: registered transport (tcp)
Jan 13 21:22:50.094611 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:22:50.094634 kernel: QLogic iSCSI HBA Driver
Jan 13 21:22:50.142397 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:22:50.149742 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:22:50.175242 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:22:50.175266 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:22:50.175276 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:22:50.216551 kernel: raid6: avx2x4 gen() 30407 MB/s
Jan 13 21:22:50.233551 kernel: raid6: avx2x2 gen() 28653 MB/s
Jan 13 21:22:50.250632 kernel: raid6: avx2x1 gen() 25747 MB/s
Jan 13 21:22:50.250652 kernel: raid6: using algorithm avx2x4 gen() 30407 MB/s
Jan 13 21:22:50.268619 kernel: raid6: .... xor() 7556 MB/s, rmw enabled
Jan 13 21:22:50.268639 kernel: raid6: using avx2x2 recovery algorithm
Jan 13 21:22:50.288552 kernel: xor: automatically using best checksumming function avx
Jan 13 21:22:50.448557 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:22:50.462326 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:22:50.475695 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:22:50.487081 systemd-udevd[411]: Using default interface naming scheme 'v255'.
Jan 13 21:22:50.491615 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:22:50.498668 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 21:22:50.512342 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation
Jan 13 21:22:50.544967 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:22:50.557680 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:22:50.618479 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:22:50.626923 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 21:22:50.636886 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:22:50.639890 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:22:50.641137 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:22:50.644732 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:22:50.650636 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 13 21:22:50.671052 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 13 21:22:50.671215 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 21:22:50.671228 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 21:22:50.671240 kernel: GPT:9289727 != 19775487
Jan 13 21:22:50.671259 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 21:22:50.671270 kernel: GPT:9289727 != 19775487
Jan 13 21:22:50.671281 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 21:22:50.671293 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:22:50.654720 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 21:22:50.673720 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:22:50.684030 kernel: libata version 3.00 loaded.
Jan 13 21:22:50.678571 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:22:50.678693 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:22:50.680937 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:22:50.682451 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:22:50.682686 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:22:50.686782 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:22:50.697575 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 21:22:50.697641 kernel: AES CTR mode by8 optimization enabled
Jan 13 21:22:50.703893 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:22:50.709027 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (472)
Jan 13 21:22:50.709048 kernel: ahci 0000:00:1f.2: version 3.0
Jan 13 21:22:50.725877 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 13 21:22:50.725903 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 13 21:22:50.726094 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 13 21:22:50.726271 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (467)
Jan 13 21:22:50.726286 kernel: scsi host0: ahci
Jan 13 21:22:50.726483 kernel: scsi host1: ahci
Jan 13 21:22:50.726685 kernel: scsi host2: ahci
Jan 13 21:22:50.726930 kernel: scsi host3: ahci
Jan 13 21:22:50.727112 kernel: scsi host4: ahci
Jan 13 21:22:50.727289 kernel: scsi host5: ahci
Jan 13 21:22:50.727474 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 13 21:22:50.727490 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 13 21:22:50.727504 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 13 21:22:50.727518 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 13 21:22:50.727618 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 13 21:22:50.727635 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 13 21:22:50.729193 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 21:22:50.766347 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:22:50.778049 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 21:22:50.785390 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:22:50.795293 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 21:22:50.798056 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 21:22:50.811659 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 21:22:50.814857 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:22:50.837568 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:22:50.878689 disk-uuid[553]: Primary Header is updated.
Jan 13 21:22:50.878689 disk-uuid[553]: Secondary Entries is updated.
Jan 13 21:22:50.878689 disk-uuid[553]: Secondary Header is updated.
Jan 13 21:22:50.883564 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:22:50.887558 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:22:51.037615 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 13 21:22:51.037707 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 13 21:22:51.037719 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 13 21:22:51.038549 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 13 21:22:51.039547 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 13 21:22:51.040552 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 13 21:22:51.041555 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 13 21:22:51.041567 kernel: ata3.00: applying bridge limits
Jan 13 21:22:51.042565 kernel: ata3.00: configured for UDMA/100
Jan 13 21:22:51.044551 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 13 21:22:51.088555 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 13 21:22:51.102211 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 13 21:22:51.102228 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 13 21:22:51.888554 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:22:51.888942 disk-uuid[562]: The operation has completed successfully.
Jan 13 21:22:51.918942 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 21:22:51.919072 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 21:22:51.945702 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 21:22:51.951718 sh[591]: Success
Jan 13 21:22:51.963569 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 13 21:22:51.996943 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 21:22:52.012869 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 21:22:52.016244 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 21:22:52.026053 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686
Jan 13 21:22:52.026085 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:22:52.026096 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 21:22:52.027070 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 21:22:52.027814 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 21:22:52.032297 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 21:22:52.033258 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 21:22:52.036192 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 21:22:52.038768 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:22:52.055992 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:22:52.056029 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:22:52.056042 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:22:52.059559 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:22:52.069190 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 21:22:52.071449 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:22:52.079844 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:22:52.088768 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:22:52.141064 ignition[695]: Ignition 2.19.0
Jan 13 21:22:52.141077 ignition[695]: Stage: fetch-offline
Jan 13 21:22:52.141115 ignition[695]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:22:52.141126 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:22:52.141218 ignition[695]: parsed url from cmdline: ""
Jan 13 21:22:52.141223 ignition[695]: no config URL provided
Jan 13 21:22:52.141228 ignition[695]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:22:52.141240 ignition[695]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:22:52.141273 ignition[695]: op(1): [started] loading QEMU firmware config module
Jan 13 21:22:52.141279 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 13 21:22:52.151051 ignition[695]: op(1): [finished] loading QEMU firmware config module
Jan 13 21:22:52.154858 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:22:52.167722 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:22:52.193632 systemd-networkd[780]: lo: Link UP
Jan 13 21:22:52.193643 systemd-networkd[780]: lo: Gained carrier
Jan 13 21:22:52.195266 systemd-networkd[780]: Enumeration completed
Jan 13 21:22:52.195390 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:22:52.196619 ignition[695]: parsing config with SHA512: bdf76afb413dad0fa8e12414fe94e60daa8874f1607dc8f6ec86af73fb2b5f575a133362358a1e81542bfe264ce18b7c9a41125d5199ee6a0bffee662b4d0934
Jan 13 21:22:52.195670 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:22:52.200496 ignition[695]: fetch-offline: fetch-offline passed
Jan 13 21:22:52.195674 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:22:52.200581 ignition[695]: Ignition finished successfully
Jan 13 21:22:52.196596 systemd-networkd[780]: eth0: Link UP
Jan 13 21:22:52.196600 systemd-networkd[780]: eth0: Gained carrier
Jan 13 21:22:52.196606 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:22:52.197396 systemd[1]: Reached target network.target - Network.
Jan 13 21:22:52.200135 unknown[695]: fetched base config from "system"
Jan 13 21:22:52.200143 unknown[695]: fetched user config from "qemu"
Jan 13 21:22:52.202505 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:22:52.204604 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 21:22:52.208811 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.82/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 21:22:52.211361 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:22:52.222064 ignition[784]: Ignition 2.19.0
Jan 13 21:22:52.222076 ignition[784]: Stage: kargs
Jan 13 21:22:52.222229 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:22:52.222240 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:22:52.223016 ignition[784]: kargs: kargs passed
Jan 13 21:22:52.223059 ignition[784]: Ignition finished successfully
Jan 13 21:22:52.226904 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:22:52.241649 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:22:52.253200 ignition[794]: Ignition 2.19.0
Jan 13 21:22:52.253217 ignition[794]: Stage: disks
Jan 13 21:22:52.253381 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:22:52.253393 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:22:52.257168 ignition[794]: disks: disks passed
Jan 13 21:22:52.257218 ignition[794]: Ignition finished successfully
Jan 13 21:22:52.260581 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:22:52.261843 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:22:52.263739 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:22:52.264986 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:22:52.265385 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:22:52.265715 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:22:52.280656 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:22:52.293041 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 21:22:52.299505 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:22:52.308611 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:22:52.396558 kernel: EXT4-fs (vda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none.
Jan 13 21:22:52.397346 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:22:52.398895 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:22:52.407592 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:22:52.409275 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:22:52.410739 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:22:52.417273 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (812)
Jan 13 21:22:52.417296 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:22:52.417308 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:22:52.417318 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:22:52.410776 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:22:52.410796 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:22:52.423553 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:22:52.424719 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:22:52.428293 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:22:52.430023 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:22:52.466547 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:22:52.470811 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:22:52.475064 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:22:52.480200 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:22:52.569424 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:22:52.581628 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:22:52.584161 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:22:52.591544 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:22:52.609007 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:22:52.614197 ignition[927]: INFO : Ignition 2.19.0
Jan 13 21:22:52.614197 ignition[927]: INFO : Stage: mount
Jan 13 21:22:52.615970 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:22:52.615970 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:22:52.615970 ignition[927]: INFO : mount: mount passed
Jan 13 21:22:52.615970 ignition[927]: INFO : Ignition finished successfully
Jan 13 21:22:52.622063 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:22:52.638709 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:22:53.025494 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:22:53.038684 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:22:53.044549 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (939)
Jan 13 21:22:53.046684 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:22:53.046719 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:22:53.046730 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:22:53.049549 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:22:53.051221 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:22:53.072588 ignition[956]: INFO : Ignition 2.19.0
Jan 13 21:22:53.072588 ignition[956]: INFO : Stage: files
Jan 13 21:22:53.074203 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:22:53.074203 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:22:53.076939 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 21:22:53.078752 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 21:22:53.078752 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 21:22:53.083267 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 21:22:53.084739 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 21:22:53.086391 unknown[956]: wrote ssh authorized keys file for user: core
Jan 13 21:22:53.087512 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 21:22:53.089696 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 21:22:53.091586 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 21:22:53.133520 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 21:22:53.227188 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 21:22:53.229206 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 21:22:53.230948 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 13 21:22:53.700940 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 13 21:22:53.800203 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 21:22:53.800203 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 21:22:53.803962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 21:22:53.803962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:22:53.803962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:22:53.803962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:22:53.803962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:22:53.803962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:22:53.803962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:22:53.803962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:22:53.803962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:22:53.803962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 21:22:53.803962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 21:22:53.803962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 21:22:53.803962 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jan 13 21:22:54.197857 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 13 21:22:54.239647 systemd-networkd[780]: eth0: Gained IPv6LL
Jan 13 21:22:54.580365 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 21:22:54.580365 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 13 21:22:54.584064 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:22:54.584064 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:22:54.584064 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 13 21:22:54.584064 ignition[956]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 13 21:22:54.584064 ignition[956]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 21:22:54.584064 ignition[956]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 21:22:54.584064 ignition[956]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 13 21:22:54.584064 ignition[956]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jan 13 21:22:54.616166 ignition[956]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 21:22:54.620324 ignition[956]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 21:22:54.622011 ignition[956]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 13 21:22:54.623396 ignition[956]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 21:22:54.623396 ignition[956]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 21:22:54.626163 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:22:54.627912 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:22:54.629550 ignition[956]: INFO : files: files passed
Jan 13 21:22:54.630268 ignition[956]: INFO : Ignition finished successfully
Jan 13 21:22:54.633693 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 21:22:54.646651 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 21:22:54.648372 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 21:22:54.650265 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 21:22:54.650388 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 21:22:54.659771 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 13 21:22:54.662492 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:22:54.662492 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:22:54.665637 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:22:54.665474 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:22:54.667019 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 21:22:54.681721 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 21:22:54.708571 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 21:22:54.708706 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 21:22:54.710964 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 21:22:54.713014 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 21:22:54.715027 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 21:22:54.723657 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 21:22:54.739060 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:22:54.745815 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 21:22:54.754255 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:22:54.755720 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:22:54.757930 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 21:22:54.759985 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 21:22:54.760104 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:22:54.762272 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 21:22:54.763981 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 21:22:54.765978 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 21:22:54.768015 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:22:54.770015 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 21:22:54.772158 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 21:22:54.774250 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:22:54.776512 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 21:22:54.778500 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 21:22:54.780673 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 21:22:54.782430 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 21:22:54.782556 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:22:54.784661 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:22:54.786252 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:22:54.788311 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 21:22:54.788428 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:22:54.790513 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 21:22:54.790631 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:22:54.792835 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 21:22:54.792943 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:22:54.794944 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 21:22:54.796660 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 21:22:54.800577 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:22:54.802242 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 21:22:54.804200 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 21:22:54.805972 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 21:22:54.806067 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:22:54.807975 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 21:22:54.808066 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:22:54.810426 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 21:22:54.810551 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:22:54.812466 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 21:22:54.812590 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 21:22:54.822686 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 21:22:54.823628 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 21:22:54.823750 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:22:54.826727 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 21:22:54.828825 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 21:22:54.828956 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:22:54.830900 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 21:22:54.831001 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:22:54.836458 ignition[1012]: INFO : Ignition 2.19.0
Jan 13 21:22:54.836458 ignition[1012]: INFO : Stage: umount
Jan 13 21:22:54.836458 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:22:54.836458 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:22:54.843366 ignition[1012]: INFO : umount: umount passed
Jan 13 21:22:54.843366 ignition[1012]: INFO : Ignition finished successfully
Jan 13 21:22:54.837632 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 21:22:54.837748 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 21:22:54.839635 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 21:22:54.839748 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 21:22:54.842323 systemd[1]: Stopped target network.target - Network.
Jan 13 21:22:54.843405 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 21:22:54.843468 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 21:22:54.845212 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 21:22:54.845259 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 21:22:54.847064 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 21:22:54.847108 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 21:22:54.848994 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 21:22:54.849042 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 21:22:54.851436 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 21:22:54.853361 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 21:22:54.856326 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 21:22:54.858571 systemd-networkd[780]: eth0: DHCPv6 lease lost
Jan 13 21:22:54.861264 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 21:22:54.861402 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 21:22:54.863626 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 21:22:54.863750 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 21:22:54.866793 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 21:22:54.866869 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:22:54.877632 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 21:22:54.879685 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 21:22:54.879780 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:22:54.882030 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 21:22:54.882078 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:22:54.884372 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 21:22:54.884421 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:22:54.886355 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 21:22:54.886402 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:22:54.888761 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:22:54.898647 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 21:22:54.898787 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 21:22:54.912407 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 21:22:54.912615 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:22:54.914759 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 21:22:54.914806 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:22:54.916806 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 21:22:54.916843 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:22:54.918742 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 21:22:54.918788 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:22:54.921034 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 21:22:54.921102 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:22:54.922987 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:22:54.923035 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:22:54.929648 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 21:22:54.930799 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 21:22:54.930852 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:22:54.933301 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:22:54.933348 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:22:54.938517 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 21:22:54.938651 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 21:22:55.036628 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 21:22:55.036769 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 21:22:55.037391 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 21:22:55.039756 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 21:22:55.039807 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 21:22:55.051656 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 21:22:55.058479 systemd[1]: Switching root.
Jan 13 21:22:55.094975 systemd-journald[193]: Journal stopped
Jan 13 21:22:56.202802 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
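The initramfs journal ends here and the system pivots to the real root. Every Ignition stage recorded above (kargs, disks, mount, files, umount) reports "Ignition finished successfully". A minimal Python sketch for pulling that summary out of a journal dump in this format follows; it is a hypothetical helper, not part of Flatcar or Ignition, and the regexes assume exactly the message shapes visible above:

    import re
    import sys

    # Hypothetical helper: summarize Ignition stage outcomes from a journal
    # dump like the one above. Assumes the message formats seen here:
    #   "ignition[<pid>]: ... Stage: <name>"
    #   "ignition[<pid>]: ... Ignition finished successfully"
    STAGE_RE = re.compile(r"ignition\[(\d+)\]:.*Stage: ([\w-]+)")
    DONE_RE = re.compile(r"ignition\[(\d+)\]:.*Ignition finished successfully")

    def summarize(lines):
        stages = {}    # journal pid -> stage name
        finished = []  # stage names, in log order
        for line in lines:
            m = STAGE_RE.search(line)
            if m:
                stages[m.group(1)] = m.group(2)
                continue
            m = DONE_RE.search(line)
            if m:
                finished.append(stages.get(m.group(1), "?"))
        return finished

    if __name__ == "__main__":
        for stage in summarize(sys.stdin):
            print(f"{stage}: finished")

Fed the log above on stdin, this prints kargs, disks, mount, files, and umount, each finished.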
Jan 13 21:22:56.202880 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 21:22:56.202906 kernel: SELinux: policy capability open_perms=1
Jan 13 21:22:56.202921 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 21:22:56.202931 kernel: SELinux: policy capability always_check_network=0
Jan 13 21:22:56.202942 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 21:22:56.202953 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 21:22:56.202971 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 21:22:56.202981 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 21:22:56.203001 kernel: audit: type=1403 audit(1736803375.507:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 21:22:56.203013 systemd[1]: Successfully loaded SELinux policy in 37.958ms.
Jan 13 21:22:56.203027 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.992ms.
Jan 13 21:22:56.203042 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:22:56.203054 systemd[1]: Detected virtualization kvm.
Jan 13 21:22:56.203066 systemd[1]: Detected architecture x86-64.
Jan 13 21:22:56.203078 systemd[1]: Detected first boot.
Jan 13 21:22:56.203090 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:22:56.203101 zram_generator::config[1057]: No configuration found.
Jan 13 21:22:56.203120 systemd[1]: Populated /etc with preset unit settings.
Jan 13 21:22:56.203132 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 21:22:56.203146 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 21:22:56.203159 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 21:22:56.203171 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 21:22:56.203184 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 21:22:56.203195 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 21:22:56.203207 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 21:22:56.203219 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 21:22:56.203231 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 21:22:56.203243 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 21:22:56.203257 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 21:22:56.203270 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:22:56.203282 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:22:56.203294 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 21:22:56.203306 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 21:22:56.203318 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 21:22:56.203330 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:22:56.203342 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 21:22:56.203353 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:22:56.203367 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 21:22:56.203379 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 21:22:56.203391 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:22:56.203403 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 21:22:56.203415 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:22:56.203427 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:22:56.203439 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:22:56.203451 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:22:56.203466 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 21:22:56.203478 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 21:22:56.203490 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:22:56.203502 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:22:56.203514 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:22:56.203538 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 21:22:56.203550 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 21:22:56.203562 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 21:22:56.203574 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 21:22:56.203589 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:22:56.203601 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 21:22:56.203612 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 21:22:56.203624 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 21:22:56.203643 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 21:22:56.203655 systemd[1]: Reached target machines.target - Containers.
Jan 13 21:22:56.203667 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 21:22:56.203680 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:22:56.203694 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:22:56.203706 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 21:22:56.203719 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:22:56.203730 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:22:56.203742 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:22:56.203754 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 21:22:56.203766 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:22:56.203778 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 21:22:56.203792 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 21:22:56.203805 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 21:22:56.203817 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 21:22:56.203829 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 21:22:56.203841 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:22:56.203853 kernel: loop: module loaded
Jan 13 21:22:56.203864 kernel: fuse: init (API version 7.39)
Jan 13 21:22:56.203876 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:22:56.203887 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 21:22:56.203902 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 21:22:56.203930 systemd-journald[1134]: Collecting audit messages is disabled.
Jan 13 21:22:56.203952 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:22:56.203965 systemd-journald[1134]: Journal started
Jan 13 21:22:56.203986 systemd-journald[1134]: Runtime Journal (/run/log/journal/932983e8501c4c89b6819f933d7e2ace) is 6.0M, max 48.4M, 42.3M free.
Jan 13 21:22:55.991459 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 21:22:56.009360 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 13 21:22:56.009811 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 21:22:56.207142 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 21:22:56.207214 systemd[1]: Stopped verity-setup.service.
Jan 13 21:22:56.210665 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:22:56.212553 kernel: ACPI: bus type drm_connector registered
Jan 13 21:22:56.212579 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:22:56.214764 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 21:22:56.216086 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 21:22:56.217344 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 21:22:56.218444 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 21:22:56.219685 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 21:22:56.220929 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 21:22:56.222186 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 21:22:56.223654 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:22:56.225242 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 21:22:56.225416 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 21:22:56.226911 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:22:56.227084 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:22:56.228668 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:22:56.228843 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:22:56.230215 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:22:56.230399 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:22:56.232044 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 21:22:56.232221 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 21:22:56.233624 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:22:56.233805 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:22:56.235337 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:22:56.236760 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 21:22:56.238280 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 21:22:56.253523 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 21:22:56.264613 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 21:22:56.266917 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 21:22:56.268107 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 21:22:56.268138 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:22:56.270114 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 21:22:56.272427 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 21:22:56.278442 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 21:22:56.279848 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:22:56.290671 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 21:22:56.292798 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 21:22:56.294137 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:22:56.297674 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 21:22:56.298935 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:22:56.302430 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:22:56.305090 systemd-journald[1134]: Time spent on flushing to /var/log/journal/932983e8501c4c89b6819f933d7e2ace is 19.540ms for 952 entries.
Jan 13 21:22:56.305090 systemd-journald[1134]: System Journal (/var/log/journal/932983e8501c4c89b6819f933d7e2ace) is 8.0M, max 195.6M, 187.6M free.
Jan 13 21:22:56.331996 systemd-journald[1134]: Received client request to flush runtime journal.
Jan 13 21:22:56.309856 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 21:22:56.315473 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 21:22:56.318281 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:22:56.319823 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 21:22:56.322338 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 21:22:56.326019 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 21:22:56.328202 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 21:22:56.334703 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 21:22:56.338721 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 21:22:56.340603 kernel: loop0: detected capacity change from 0 to 142488
Jan 13 21:22:56.349028 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 21:22:56.352622 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 21:22:56.355170 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:22:56.362713 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 21:22:56.366568 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 13 21:22:56.377593 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 21:22:56.385700 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:22:56.387861 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 21:22:56.388571 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 21:22:56.395543 kernel: loop1: detected capacity change from 0 to 140768
Jan 13 21:22:56.407339 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Jan 13 21:22:56.407358 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Jan 13 21:22:56.414659 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:22:56.429562 kernel: loop2: detected capacity change from 0 to 211296
Jan 13 21:22:56.467563 kernel: loop3: detected capacity change from 0 to 142488
Jan 13 21:22:56.480590 kernel: loop4: detected capacity change from 0 to 140768
Jan 13 21:22:56.490557 kernel: loop5: detected capacity change from 0 to 211296
Jan 13 21:22:56.497469 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 13 21:22:56.498097 (sd-merge)[1196]: Merged extensions into '/usr'.
Jan 13 21:22:56.502293 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 21:22:56.502309 systemd[1]: Reloading...
Jan 13 21:22:56.554581 zram_generator::config[1225]: No configuration found.
Jan 13 21:22:56.603611 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 21:22:56.674692 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:22:56.723368 systemd[1]: Reloading finished in 220 ms.
Jan 13 21:22:56.760744 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 21:22:56.762286 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 21:22:56.772728 systemd[1]: Starting ensure-sysext.service...
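The kernel's loop-device "capacity change" figures above are in 512-byte sectors, and the same three sizes each appear twice, consistent with the three sysext images named by (sd-merge) being set up twice; pairing each size with a specific extension would be an assumption the log does not confirm. A quick conversion of the logged values:

    # Loop-device capacity figures from the kernel log are 512-byte sectors.
    for sectors in (142488, 140768, 211296):
        print(f"{sectors} sectors ~= {sectors * 512 / 2**20:.1f} MiB")
    # 142488 sectors ~= 69.6 MiB
    # 140768 sectors ~= 68.7 MiB
    # 211296 sectors ~= 103.2 MiB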
Jan 13 21:22:56.774734 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:22:56.782314 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)...
Jan 13 21:22:56.782329 systemd[1]: Reloading...
Jan 13 21:22:56.796658 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 21:22:56.797012 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 21:22:56.797997 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 21:22:56.798285 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Jan 13 21:22:56.798359 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Jan 13 21:22:56.802244 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:22:56.802257 systemd-tmpfiles[1260]: Skipping /boot
Jan 13 21:22:56.815336 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:22:56.815349 systemd-tmpfiles[1260]: Skipping /boot
Jan 13 21:22:56.832155 zram_generator::config[1287]: No configuration found.
Jan 13 21:22:56.942906 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:22:56.992337 systemd[1]: Reloading finished in 209 ms.
Jan 13 21:22:57.012093 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 21:22:57.025924 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:22:57.032761 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 13 21:22:57.035250 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 21:22:57.037511 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 21:22:57.041773 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:22:57.046306 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:22:57.049919 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 21:22:57.057370 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 21:22:57.062942 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:22:57.063106 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:22:57.064216 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:22:57.066329 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:22:57.069009 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:22:57.070702 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:22:57.070819 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:22:57.076459 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 21:22:57.080193 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 21:22:57.082189 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:22:57.082432 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:22:57.087767 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:22:57.088004 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:22:57.088987 systemd-udevd[1334]: Using default interface naming scheme 'v255'.
Jan 13 21:22:57.092021 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:22:57.092193 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:22:57.092477 augenrules[1354]: No rules
Jan 13 21:22:57.093812 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 13 21:22:57.100155 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:22:57.100316 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:22:57.105765 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:22:57.108210 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:22:57.111691 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:22:57.113113 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:22:57.116482 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 21:22:57.118667 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:22:57.118956 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 21:22:57.120622 systemd[1]: Finished ensure-sysext.service.
Jan 13 21:22:57.133899 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:22:57.134123 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:22:57.135560 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:22:57.137260 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 21:22:57.138874 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:22:57.139053 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:22:57.140576 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:22:57.140762 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:22:57.148742 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 21:22:57.170320 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:22:57.171485 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:22:57.171587 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:22:57.176195 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 21:22:57.179580 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 21:22:57.182470 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 21:22:57.189570 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1378)
Jan 13 21:22:57.215391 systemd-resolved[1330]: Positive Trust Anchors:
Jan 13 21:22:57.215413 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:22:57.215444 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:22:57.219428 systemd-resolved[1330]: Defaulting to hostname 'linux'.
Jan 13 21:22:57.221725 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:22:57.223709 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:22:57.239582 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 13 21:22:57.247544 kernel: ACPI: button: Power Button [PWRF]
Jan 13 21:22:57.260891 systemd-networkd[1394]: lo: Link UP
Jan 13 21:22:57.260905 systemd-networkd[1394]: lo: Gained carrier
Jan 13 21:22:57.264273 systemd-networkd[1394]: Enumeration completed
Jan 13 21:22:57.264692 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:22:57.264697 systemd-networkd[1394]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:22:57.265052 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:22:57.265662 systemd-networkd[1394]: eth0: Link UP
Jan 13 21:22:57.265667 systemd-networkd[1394]: eth0: Gained carrier
Jan 13 21:22:57.265678 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:22:57.266660 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:22:57.267927 systemd[1]: Reached target network.target - Network.
Jan 13 21:22:57.278125 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 13 21:22:57.278419 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 13 21:22:57.280367 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 13 21:22:57.278707 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 21:22:57.283664 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 21:22:57.285557 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 13 21:22:57.285615 systemd-networkd[1394]: eth0: DHCPv4 address 10.0.0.82/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 21:22:57.301747 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 21:22:58.292777 systemd-timesyncd[1397]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 13 21:22:58.292828 systemd-timesyncd[1397]: Initial clock synchronization to Mon 2025-01-13 21:22:58.292658 UTC.
Jan 13 21:22:58.293222 systemd-resolved[1330]: Clock change detected. Flushing caches.
Jan 13 21:22:58.294650 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 21:22:58.303653 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 21:22:58.314127 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 21:22:58.314461 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:22:58.402738 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:22:58.405533 kernel: kvm_amd: TSC scaling supported
Jan 13 21:22:58.405563 kernel: kvm_amd: Nested Virtualization enabled
Jan 13 21:22:58.405576 kernel: kvm_amd: Nested Paging enabled
Jan 13 21:22:58.405588 kernel: kvm_amd: LBR virtualization supported
Jan 13 21:22:58.406597 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 13 21:22:58.406616 kernel: kvm_amd: Virtual GIF supported
Jan 13 21:22:58.426163 kernel: EDAC MC: Ver: 3.0.0
Jan 13 21:22:58.455371 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 21:22:58.466419 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 21:22:58.475000 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:22:58.515540 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 21:22:58.517076 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:22:58.518238 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:22:58.519421 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 21:22:58.520687 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 21:22:58.522304 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 21:22:58.523543 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 21:22:58.524802 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 21:22:58.526087 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 21:22:58.526143 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:22:58.527216 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:22:58.528676 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 21:22:58.531387 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 21:22:58.539648 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 21:22:58.541984 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 21:22:58.543551 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 21:22:58.544712 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:22:58.545681 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:22:58.546685 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:22:58.546711 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:22:58.547729 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 21:22:58.549819 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 21:22:58.552191 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:22:58.554204 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 21:22:58.556309 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 21:22:58.556803 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 21:22:58.558348 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 21:22:58.562133 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 13 21:22:58.564850 jq[1430]: false
Jan 13 21:22:58.565340 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 21:22:58.570214 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 21:22:58.578684 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 21:22:58.580787 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 21:22:58.581382 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 21:22:58.582916 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 21:22:58.585553 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 21:22:58.587739 extend-filesystems[1431]: Found loop3
Jan 13 21:22:58.587739 extend-filesystems[1431]: Found loop4
Jan 13 21:22:58.587739 extend-filesystems[1431]: Found loop5
Jan 13 21:22:58.587739 extend-filesystems[1431]: Found sr0
Jan 13 21:22:58.587739 extend-filesystems[1431]: Found vda
Jan 13 21:22:58.587739 extend-filesystems[1431]: Found vda1
Jan 13 21:22:58.587739 extend-filesystems[1431]: Found vda2
Jan 13 21:22:58.587739 extend-filesystems[1431]: Found vda3
Jan 13 21:22:58.587739 extend-filesystems[1431]: Found usr
Jan 13 21:22:58.587739 extend-filesystems[1431]: Found vda4
Jan 13 21:22:58.587739 extend-filesystems[1431]: Found vda6
Jan 13 21:22:58.603716 extend-filesystems[1431]: Found vda7
Jan 13 21:22:58.603716 extend-filesystems[1431]: Found vda9
Jan 13 21:22:58.603716 extend-filesystems[1431]: Checking size of /dev/vda9
Jan 13 21:22:58.587773 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 21:22:58.605155 dbus-daemon[1429]: [system] SELinux support is enabled
Jan 13 21:22:58.617074 jq[1441]: true
Jan 13 21:22:58.591022 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 21:22:58.591657 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 21:22:58.595207 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 21:22:58.621985 jq[1453]: true
Jan 13 21:22:58.595479 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 21:22:58.630398 extend-filesystems[1431]: Resized partition /dev/vda9
Jan 13 21:22:58.607764 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 21:22:58.634246 extend-filesystems[1462]: resize2fs 1.47.1 (20-May-2024)
Jan 13 21:22:58.637447 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 13 21:22:58.637490 update_engine[1440]: I20250113 21:22:58.630362 1440 main.cc:92] Flatcar Update Engine starting
Jan 13 21:22:58.637490 update_engine[1440]: I20250113 21:22:58.632843 1440 update_check_scheduler.cc:74] Next update check in 9m2s
Jan 13 21:22:58.615908 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 21:22:58.615943 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 21:22:58.617388 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 21:22:58.617408 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 21:22:58.622332 (ntainerd)[1457]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 21:22:58.622947 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 21:22:58.625533 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 21:22:58.641235 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 13 21:22:58.646005 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 21:22:58.655891 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 21:22:58.659670 systemd-logind[1437]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 13 21:22:58.660237 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1385)
Jan 13 21:22:58.659696 systemd-logind[1437]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 13 21:22:58.660289 tar[1448]: linux-amd64/helm
Jan 13 21:22:58.663634 systemd-logind[1437]: New seat seat0.
Jan 13 21:22:58.670833 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 21:22:58.685143 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 13 21:22:58.726461 extend-filesystems[1462]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 13 21:22:58.726461 extend-filesystems[1462]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 21:22:58.726461 extend-filesystems[1462]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 13 21:22:58.734833 extend-filesystems[1431]: Resized filesystem in /dev/vda9
Jan 13 21:22:58.735846 bash[1483]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 21:22:58.729128 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 21:22:58.730831 systemd[1]: extend-filesystems.service: Deactivated successfully.
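The resize2fs and EXT4-fs figures above are counts of 4 KiB blocks, so the root filesystem grew from roughly 2.1 GiB to 7.1 GiB. A quick sanity check (plain arithmetic on the logged block counts, not output from the log itself):

    # resize2fs/EXT4-fs block counts above are 4 KiB units.
    old_blocks, new_blocks, block_size = 553472, 1864699, 4096
    print(f"before: {old_blocks * block_size / 2**30:.2f} GiB")  # 2.11 GiB
    print(f"after:  {new_blocks * block_size / 2**30:.2f} GiB")  # 7.11 GiB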
Jan 13 21:22:58.731048 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 21:22:58.733079 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 21:22:58.738588 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 13 21:22:58.806892 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 21:22:58.831081 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 21:22:58.839959 containerd[1457]: time="2025-01-13T21:22:58.839871694Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 13 21:22:58.841492 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 21:22:58.847463 systemd[1]: Started sshd@0-10.0.0.82:22-10.0.0.1:55778.service - OpenSSH per-connection server daemon (10.0.0.1:55778).
Jan 13 21:22:58.854159 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 21:22:58.854437 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 21:22:58.866425 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 21:22:58.868394 containerd[1457]: time="2025-01-13T21:22:58.868331095Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:22:58.870586 containerd[1457]: time="2025-01-13T21:22:58.870539146Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:22:58.870586 containerd[1457]: time="2025-01-13T21:22:58.870583409Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 21:22:58.870641 containerd[1457]: time="2025-01-13T21:22:58.870603256Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 21:22:58.870818 containerd[1457]: time="2025-01-13T21:22:58.870797561Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 21:22:58.870850 containerd[1457]: time="2025-01-13T21:22:58.870817358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 21:22:58.870906 containerd[1457]: time="2025-01-13T21:22:58.870887008Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:22:58.870926 containerd[1457]: time="2025-01-13T21:22:58.870903880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:22:58.871141 containerd[1457]: time="2025-01-13T21:22:58.871120055Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:22:58.871187 containerd[1457]: time="2025-01-13T21:22:58.871139883Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 21:22:58.871187 containerd[1457]: time="2025-01-13T21:22:58.871183194Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:22:58.871225 containerd[1457]: time="2025-01-13T21:22:58.871193283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 21:22:58.871313 containerd[1457]: time="2025-01-13T21:22:58.871284714Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:22:58.872055 containerd[1457]: time="2025-01-13T21:22:58.871532559Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:22:58.872055 containerd[1457]: time="2025-01-13T21:22:58.871662923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:22:58.872055 containerd[1457]: time="2025-01-13T21:22:58.871675868Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 21:22:58.872055 containerd[1457]: time="2025-01-13T21:22:58.871772960Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 21:22:58.872055 containerd[1457]: time="2025-01-13T21:22:58.871827252Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 21:22:58.877030 containerd[1457]: time="2025-01-13T21:22:58.876911957Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 21:22:58.877030 containerd[1457]: time="2025-01-13T21:22:58.876961781Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 21:22:58.877030 containerd[1457]: time="2025-01-13T21:22:58.876977981Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 21:22:58.877030 containerd[1457]: time="2025-01-13T21:22:58.876992548Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 21:22:58.877497 containerd[1457]: time="2025-01-13T21:22:58.877287602Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 21:22:58.877497 containerd[1457]: time="2025-01-13T21:22:58.877427214Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 21:22:58.878146 containerd[1457]: time="2025-01-13T21:22:58.877886795Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 21:22:58.878146 containerd[1457]: time="2025-01-13T21:22:58.878011078Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 21:22:58.878146 containerd[1457]: time="2025-01-13T21:22:58.878025295Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 21:22:58.878146 containerd[1457]: time="2025-01-13T21:22:58.878037438Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 21:22:58.878146 containerd[1457]: time="2025-01-13T21:22:58.878050162Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 21:22:58.878146 containerd[1457]: time="2025-01-13T21:22:58.878061503Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 21:22:58.878146 containerd[1457]: time="2025-01-13T21:22:58.878073796Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 21:22:58.878146 containerd[1457]: time="2025-01-13T21:22:58.878086139Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 21:22:58.878146 containerd[1457]: time="2025-01-13T21:22:58.878098933Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 21:22:58.878146 containerd[1457]: time="2025-01-13T21:22:58.878122527Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 21:22:58.878425 containerd[1457]: time="2025-01-13T21:22:58.878404316Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 21:22:58.878777 containerd[1457]: time="2025-01-13T21:22:58.878474688Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 21:22:58.878777 containerd[1457]: time="2025-01-13T21:22:58.878499885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 21:22:58.878777 containerd[1457]: time="2025-01-13T21:22:58.878512829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 21:22:58.878777 containerd[1457]: time="2025-01-13T21:22:58.878525603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 21:22:58.878777 containerd[1457]: time="2025-01-13T21:22:58.878548076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 21:22:58.878777 containerd[1457]: time="2025-01-13T21:22:58.878560449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 21:22:58.878777 containerd[1457]: time="2025-01-13T21:22:58.878574826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 21:22:58.878777 containerd[1457]: time="2025-01-13T21:22:58.878585977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 21:22:58.878777 containerd[1457]: time="2025-01-13T21:22:58.878598400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 21:22:58.878777 containerd[1457]: time="2025-01-13T21:22:58.878611174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 21:22:58.878777 containerd[1457]: time="2025-01-13T21:22:58.878624759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 21:22:58.878777 containerd[1457]: time="2025-01-13T21:22:58.878637012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 21:22:58.878777 containerd[1457]: time="2025-01-13T21:22:58.878648885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 21:22:58.878777 containerd[1457]: time="2025-01-13T21:22:58.878662500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 21:22:58.878777 containerd[1457]: time="2025-01-13T21:22:58.878682017Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 21:22:58.879075 containerd[1457]: time="2025-01-13T21:22:58.878701864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 21:22:58.879075 containerd[1457]: time="2025-01-13T21:22:58.878713897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 21:22:58.879075 containerd[1457]: time="2025-01-13T21:22:58.878726300Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 21:22:58.878903 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 21:22:58.880344 containerd[1457]: time="2025-01-13T21:22:58.879287402Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 21:22:58.880344 containerd[1457]: time="2025-01-13T21:22:58.879401105Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 21:22:58.880344 containerd[1457]: time="2025-01-13T21:22:58.879418308Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 21:22:58.880344 containerd[1457]: time="2025-01-13T21:22:58.879436031Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 21:22:58.880344 containerd[1457]: time="2025-01-13T21:22:58.879450277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 21:22:58.880344 containerd[1457]: time="2025-01-13T21:22:58.879465987Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 21:22:58.880344 containerd[1457]: time="2025-01-13T21:22:58.879484021Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 21:22:58.880344 containerd[1457]: time="2025-01-13T21:22:58.879497576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 21:22:58.880673 containerd[1457]: time="2025-01-13T21:22:58.879844717Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 21:22:58.880673 containerd[1457]: time="2025-01-13T21:22:58.879897015Z" level=info msg="Connect containerd service"
Jan 13 21:22:58.880673 containerd[1457]: time="2025-01-13T21:22:58.879929196Z" level=info msg="using legacy CRI server"
Jan 13 21:22:58.880673 containerd[1457]: time="2025-01-13T21:22:58.879935437Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 21:22:58.880673 containerd[1457]: time="2025-01-13T21:22:58.880028772Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 21:22:58.881419 containerd[1457]: time="2025-01-13T21:22:58.881393472Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 21:22:58.881943 containerd[1457]: time="2025-01-13T21:22:58.881660863Z" level=info msg="Start subscribing containerd event"
Jan 13 21:22:58.882740 containerd[1457]: time="2025-01-13T21:22:58.882699601Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 13 21:22:58.882813 containerd[1457]: time="2025-01-13T21:22:58.882783208Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 13 21:22:58.882887 containerd[1457]: time="2025-01-13T21:22:58.882868999Z" level=info msg="Start recovering state"
Jan 13 21:22:58.883001 containerd[1457]: time="2025-01-13T21:22:58.882987711Z" level=info msg="Start event monitor"
Jan 13 21:22:58.883072 containerd[1457]: time="2025-01-13T21:22:58.883054938Z" level=info msg="Start snapshots syncer"
Jan 13 21:22:58.883131 containerd[1457]: time="2025-01-13T21:22:58.883119158Z" level=info msg="Start cni network conf syncer for default"
Jan 13 21:22:58.883184 containerd[1457]: time="2025-01-13T21:22:58.883173410Z" level=info msg="Start streaming server"
Jan 13 21:22:58.883290 containerd[1457]: time="2025-01-13T21:22:58.883276463Z" level=info msg="containerd successfully booted in 0.044558s"
Jan 13 21:22:58.890458 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 21:22:58.892883 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 13 21:22:58.894311 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 21:22:58.895568 systemd[1]: Started containerd.service - containerd container runtime.
Jan 13 21:22:58.906397 sshd[1508]: Accepted publickey for core from 10.0.0.1 port 55778 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:22:58.908578 sshd[1508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:22:58.916417 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 13 21:22:58.926378 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 13 21:22:58.929565 systemd-logind[1437]: New session 1 of user core.
Jan 13 21:22:58.938284 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 13 21:22:58.949393 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 13 21:22:58.953067 (systemd)[1523]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 13 21:22:59.051628 systemd[1523]: Queued start job for default target default.target.
Jan 13 21:22:59.063356 systemd[1523]: Created slice app.slice - User Application Slice.
Jan 13 21:22:59.063382 systemd[1523]: Reached target paths.target - Paths.
Jan 13 21:22:59.063396 systemd[1523]: Reached target timers.target - Timers.
Jan 13 21:22:59.064845 systemd[1523]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 13 21:22:59.077809 systemd[1523]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 21:22:59.077935 systemd[1523]: Reached target sockets.target - Sockets.
Jan 13 21:22:59.077956 systemd[1523]: Reached target basic.target - Basic System.
Jan 13 21:22:59.077991 systemd[1523]: Reached target default.target - Main User Target.
Jan 13 21:22:59.078022 systemd[1523]: Startup finished in 118ms.
Jan 13 21:22:59.078649 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 13 21:22:59.080825 tar[1448]: linux-amd64/LICENSE
Jan 13 21:22:59.080902 tar[1448]: linux-amd64/README.md
Jan 13 21:22:59.089237 systemd[1]: Started session-1.scope - Session 1 of User core.
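containerd reports that it is serving on /run/containerd/containerd.sock (plus a ttrpc socket) and booted in about 45 ms. A small sketch, assuming the ctr CLI that ships with containerd is on PATH and the default socket path from the log, to confirm the daemon is answering:

    import subprocess

    # `ctr version` talks to the socket containerd logged above; it needs root
    # or read access to /run/containerd/containerd.sock.
    out = subprocess.run(
        ["ctr", "--address", "/run/containerd/containerd.sock", "version"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout)  # prints client and server versions (v1.7.21 on this host)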
Jan 13 21:22:59.101478 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 13 21:22:59.155672 systemd[1]: Started sshd@1-10.0.0.82:22-10.0.0.1:55780.service - OpenSSH per-connection server daemon (10.0.0.1:55780).
Jan 13 21:22:59.191779 sshd[1537]: Accepted publickey for core from 10.0.0.1 port 55780 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:22:59.193277 sshd[1537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:22:59.197230 systemd-logind[1437]: New session 2 of user core.
Jan 13 21:22:59.212239 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 13 21:22:59.266921 sshd[1537]: pam_unix(sshd:session): session closed for user core
Jan 13 21:22:59.279696 systemd[1]: sshd@1-10.0.0.82:22-10.0.0.1:55780.service: Deactivated successfully.
Jan 13 21:22:59.281488 systemd[1]: session-2.scope: Deactivated successfully.
Jan 13 21:22:59.282782 systemd-logind[1437]: Session 2 logged out. Waiting for processes to exit.
Jan 13 21:22:59.284124 systemd[1]: Started sshd@2-10.0.0.82:22-10.0.0.1:55788.service - OpenSSH per-connection server daemon (10.0.0.1:55788).
Jan 13 21:22:59.286201 systemd-logind[1437]: Removed session 2.
Jan 13 21:22:59.315934 sshd[1544]: Accepted publickey for core from 10.0.0.1 port 55788 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:22:59.317805 sshd[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:22:59.321478 systemd-logind[1437]: New session 3 of user core.
Jan 13 21:22:59.332234 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 13 21:22:59.386306 sshd[1544]: pam_unix(sshd:session): session closed for user core
Jan 13 21:22:59.389970 systemd[1]: sshd@2-10.0.0.82:22-10.0.0.1:55788.service: Deactivated successfully.
Jan 13 21:22:59.391773 systemd[1]: session-3.scope: Deactivated successfully.
Jan 13 21:22:59.392402 systemd-logind[1437]: Session 3 logged out. Waiting for processes to exit.
Jan 13 21:22:59.393181 systemd-logind[1437]: Removed session 3.
Jan 13 21:23:00.220317 systemd-networkd[1394]: eth0: Gained IPv6LL
Jan 13 21:23:00.223278 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 21:23:00.225098 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 21:23:00.242515 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 13 21:23:00.245316 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:23:00.247501 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 21:23:00.268089 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 13 21:23:00.268404 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 13 21:23:00.270290 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 21:23:00.272671 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 21:23:00.843640 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:23:00.845313 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 21:23:00.846618 systemd[1]: Startup finished in 716ms (kernel) + 5.819s (initrd) + 4.386s (userspace) = 10.922s.
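The "Startup finished" line breaks total boot time into kernel, initrd, and userspace phases. A tiny parser for that line, verifying the phases add up to the printed total (pure string handling; the regex is only an illustration):

    import re

    line = "Startup finished in 716ms (kernel) + 5.819s (initrd) + 4.386s (userspace) = 10.922s."

    def seconds(tok: str) -> float:
        # journald prints either "716ms" or "5.819s"
        return float(tok[:-2]) / 1000 if tok.endswith("ms") else float(tok[:-1])

    parts = re.findall(r"([\d.]+m?s)", line)
    *phases, total = (seconds(p) for p in parts)
    print(sum(phases), total)  # 10.921 vs 10.922 -- equal up to journald's rounding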
Jan 13 21:23:00.858831 (kubelet)[1572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 21:23:01.347367 kubelet[1572]: E0113 21:23:01.347276 1572 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 21:23:01.351953 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 21:23:01.352183 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 21:23:09.400741 systemd[1]: Started sshd@3-10.0.0.82:22-10.0.0.1:55116.service - OpenSSH per-connection server daemon (10.0.0.1:55116).
Jan 13 21:23:09.431467 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 55116 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:23:09.432898 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:23:09.436671 systemd-logind[1437]: New session 4 of user core.
Jan 13 21:23:09.446232 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 21:23:09.498288 sshd[1587]: pam_unix(sshd:session): session closed for user core
Jan 13 21:23:09.507459 systemd[1]: sshd@3-10.0.0.82:22-10.0.0.1:55116.service: Deactivated successfully.
Jan 13 21:23:09.509003 systemd[1]: session-4.scope: Deactivated successfully.
Jan 13 21:23:09.510363 systemd-logind[1437]: Session 4 logged out. Waiting for processes to exit.
Jan 13 21:23:09.521414 systemd[1]: Started sshd@4-10.0.0.82:22-10.0.0.1:55130.service - OpenSSH per-connection server daemon (10.0.0.1:55130).
Jan 13 21:23:09.522317 systemd-logind[1437]: Removed session 4.
Jan 13 21:23:09.547474 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 55130 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:23:09.548872 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:23:09.552371 systemd-logind[1437]: New session 5 of user core.
Jan 13 21:23:09.569237 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 21:23:09.617186 sshd[1594]: pam_unix(sshd:session): session closed for user core
Jan 13 21:23:09.628476 systemd[1]: sshd@4-10.0.0.82:22-10.0.0.1:55130.service: Deactivated successfully.
Jan 13 21:23:09.629894 systemd[1]: session-5.scope: Deactivated successfully.
Jan 13 21:23:09.631284 systemd-logind[1437]: Session 5 logged out. Waiting for processes to exit.
Jan 13 21:23:09.632426 systemd[1]: Started sshd@5-10.0.0.82:22-10.0.0.1:55146.service - OpenSSH per-connection server daemon (10.0.0.1:55146).
Jan 13 21:23:09.633213 systemd-logind[1437]: Removed session 5.
Jan 13 21:23:09.663230 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 55146 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:23:09.664691 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:23:09.668028 systemd-logind[1437]: New session 6 of user core.
Jan 13 21:23:09.677214 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 21:23:09.729280 sshd[1601]: pam_unix(sshd:session): session closed for user core
Jan 13 21:23:09.739760 systemd[1]: sshd@5-10.0.0.82:22-10.0.0.1:55146.service: Deactivated successfully.
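kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is written by kubeadm init or kubeadm join, so these early failures are expected until the node is bootstrapped. For illustration only, a sketch that writes a minimal KubeletConfiguration of the kind that normally lands at that path (the field values are assumptions, not what kubeadm would generate for this host; writing the file by hand does not replace the rest of the kubeadm bootstrap):

    from pathlib import Path

    # Minimal, hedged example of a KubeletConfiguration document. cgroupDriver
    # matches the SystemdCgroup:true runc option in the containerd config above,
    # and staticPodPath matches the path kubelet logs later in this boot.
    CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    """

    path = Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(CONFIG)  # needs root on a real node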
Jan 13 21:23:09.741452 systemd[1]: session-6.scope: Deactivated successfully.
Jan 13 21:23:09.742669 systemd-logind[1437]: Session 6 logged out. Waiting for processes to exit.
Jan 13 21:23:09.743891 systemd[1]: Started sshd@6-10.0.0.82:22-10.0.0.1:55150.service - OpenSSH per-connection server daemon (10.0.0.1:55150).
Jan 13 21:23:09.744524 systemd-logind[1437]: Removed session 6.
Jan 13 21:23:09.775540 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 55150 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:23:09.777049 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:23:09.780344 systemd-logind[1437]: New session 7 of user core.
Jan 13 21:23:09.790222 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 13 21:23:09.845916 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 13 21:23:09.846264 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 21:23:09.866925 sudo[1611]: pam_unix(sudo:session): session closed for user root
Jan 13 21:23:09.868684 sshd[1608]: pam_unix(sshd:session): session closed for user core
Jan 13 21:23:09.880792 systemd[1]: sshd@6-10.0.0.82:22-10.0.0.1:55150.service: Deactivated successfully.
Jan 13 21:23:09.882459 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 21:23:09.884191 systemd-logind[1437]: Session 7 logged out. Waiting for processes to exit.
Jan 13 21:23:09.885517 systemd[1]: Started sshd@7-10.0.0.82:22-10.0.0.1:55160.service - OpenSSH per-connection server daemon (10.0.0.1:55160).
Jan 13 21:23:09.886410 systemd-logind[1437]: Removed session 7.
Jan 13 21:23:09.917054 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 55160 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:23:09.918438 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:23:09.922004 systemd-logind[1437]: New session 8 of user core.
Jan 13 21:23:09.931226 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 13 21:23:09.983012 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 13 21:23:09.983367 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 21:23:09.986737 sudo[1620]: pam_unix(sudo:session): session closed for user root
Jan 13 21:23:09.992863 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 13 21:23:09.993283 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 21:23:10.008310 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 13 21:23:10.009863 auditctl[1623]: No rules
Jan 13 21:23:10.011103 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 21:23:10.011373 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 13 21:23:10.012994 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 13 21:23:10.041295 augenrules[1641]: No rules
Jan 13 21:23:10.043213 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 13 21:23:10.044388 sudo[1619]: pam_unix(sudo:session): session closed for user root
Jan 13 21:23:10.046014 sshd[1616]: pam_unix(sshd:session): session closed for user core
Jan 13 21:23:10.059785 systemd[1]: sshd@7-10.0.0.82:22-10.0.0.1:55160.service: Deactivated successfully.
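The session above deletes the default audit rule files and restarts audit-rules.service; both auditctl and augenrules then report "No rules". A one-line sketch, assuming auditd's userspace tools are installed and the caller is root, to confirm the kernel's loaded rule set really is empty:

    import subprocess

    # `auditctl -l` lists the audit rules currently loaded in the kernel;
    # after the restart above it should print exactly "No rules".
    print(subprocess.run(["auditctl", "-l"], capture_output=True, text=True).stdout)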
Jan 13 21:23:10.061425 systemd[1]: session-8.scope: Deactivated successfully.
Jan 13 21:23:10.062695 systemd-logind[1437]: Session 8 logged out. Waiting for processes to exit.
Jan 13 21:23:10.063861 systemd[1]: Started sshd@8-10.0.0.82:22-10.0.0.1:55168.service - OpenSSH per-connection server daemon (10.0.0.1:55168).
Jan 13 21:23:10.064533 systemd-logind[1437]: Removed session 8.
Jan 13 21:23:10.110547 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 55168 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:23:10.111987 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:23:10.115209 systemd-logind[1437]: New session 9 of user core.
Jan 13 21:23:10.125216 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 13 21:23:10.176706 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 13 21:23:10.177049 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 21:23:10.450333 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 13 21:23:10.450473 (dockerd)[1669]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 13 21:23:10.917674 dockerd[1669]: time="2025-01-13T21:23:10.917533571Z" level=info msg="Starting up"
Jan 13 21:23:11.016178 dockerd[1669]: time="2025-01-13T21:23:11.016130505Z" level=info msg="Loading containers: start."
Jan 13 21:23:11.114144 kernel: Initializing XFRM netlink socket
Jan 13 21:23:11.188823 systemd-networkd[1394]: docker0: Link UP
Jan 13 21:23:11.209558 dockerd[1669]: time="2025-01-13T21:23:11.209516623Z" level=info msg="Loading containers: done."
Jan 13 21:23:11.225527 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck617976529-merged.mount: Deactivated successfully.
Jan 13 21:23:11.227652 dockerd[1669]: time="2025-01-13T21:23:11.227607530Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 13 21:23:11.227725 dockerd[1669]: time="2025-01-13T21:23:11.227701687Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 13 21:23:11.227862 dockerd[1669]: time="2025-01-13T21:23:11.227823806Z" level=info msg="Daemon has completed initialization"
Jan 13 21:23:11.265034 dockerd[1669]: time="2025-01-13T21:23:11.264943809Z" level=info msg="API listen on /run/docker.sock"
Jan 13 21:23:11.265405 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 13 21:23:11.544177 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 13 21:23:11.554308 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:23:11.704706 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
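dockerd came up with the overlay2 storage driver and warned that native diff is disabled. Assuming the docker CLI is installed and the caller can reach /run/docker.sock, the effective driver can be confirmed with the documented Go-template output of docker info:

    import subprocess

    out = subprocess.run(
        ["docker", "info", "--format", "{{.Driver}}"],  # template over `docker info`
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # "overlay2" on this host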
Jan 13 21:23:11.710026 (kubelet)[1823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 21:23:11.760902 kubelet[1823]: E0113 21:23:11.760836 1823 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 21:23:11.768996 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 21:23:11.769273 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 21:23:11.998938 containerd[1457]: time="2025-01-13T21:23:11.998748565Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Jan 13 21:23:12.826232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1950499132.mount: Deactivated successfully.
Jan 13 21:23:13.898126 containerd[1457]: time="2025-01-13T21:23:13.898047178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:13.898731 containerd[1457]: time="2025-01-13T21:23:13.898679263Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254"
Jan 13 21:23:13.899862 containerd[1457]: time="2025-01-13T21:23:13.899834279Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:13.902359 containerd[1457]: time="2025-01-13T21:23:13.902324309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:13.903386 containerd[1457]: time="2025-01-13T21:23:13.903363097Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 1.904573765s"
Jan 13 21:23:13.903422 containerd[1457]: time="2025-01-13T21:23:13.903389927Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Jan 13 21:23:13.925993 containerd[1457]: time="2025-01-13T21:23:13.925967398Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Jan 13 21:23:15.888406 containerd[1457]: time="2025-01-13T21:23:15.888350266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:15.889288 containerd[1457]: time="2025-01-13T21:23:15.889253610Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732"
Jan 13 21:23:15.890594 containerd[1457]: time="2025-01-13T21:23:15.890569869Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:15.893498 containerd[1457]: time="2025-01-13T21:23:15.893456051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:15.894554 containerd[1457]: time="2025-01-13T21:23:15.894513054Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 1.968419509s"
Jan 13 21:23:15.894554 containerd[1457]: time="2025-01-13T21:23:15.894543952Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Jan 13 21:23:15.915686 containerd[1457]: time="2025-01-13T21:23:15.915648921Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Jan 13 21:23:16.885185 containerd[1457]: time="2025-01-13T21:23:16.885131156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:16.885915 containerd[1457]: time="2025-01-13T21:23:16.885869110Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822"
Jan 13 21:23:16.887171 containerd[1457]: time="2025-01-13T21:23:16.887142378Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:16.890030 containerd[1457]: time="2025-01-13T21:23:16.890004125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:16.891005 containerd[1457]: time="2025-01-13T21:23:16.890952894Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 975.25948ms"
Jan 13 21:23:16.891005 containerd[1457]: time="2025-01-13T21:23:16.890985996Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Jan 13 21:23:16.912919 containerd[1457]: time="2025-01-13T21:23:16.912878523Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Jan 13 21:23:17.927407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1946549320.mount: Deactivated successfully.
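These pulls are issued through containerd's CRI plugin rather than through Docker. The same pull can be reproduced by hand with crictl, assuming crictl is installed; the runtime endpoint below is containerd's default socket, matching the paths in this log:

    import subprocess

    IMAGE = "registry.k8s.io/kube-scheduler:v1.29.12"  # one of the images pulled above

    subprocess.run(
        ["crictl", "--runtime-endpoint", "unix:///run/containerd/containerd.sock",
         "pull", IMAGE],
        check=True,  # needs root; emits the same PullImage/ImageCreate events
    )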
Jan 13 21:23:18.664856 containerd[1457]: time="2025-01-13T21:23:18.664770085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:18.665523 containerd[1457]: time="2025-01-13T21:23:18.665484184Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958"
Jan 13 21:23:18.666780 containerd[1457]: time="2025-01-13T21:23:18.666738306Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:18.668766 containerd[1457]: time="2025-01-13T21:23:18.668713921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:18.669319 containerd[1457]: time="2025-01-13T21:23:18.669287997Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.756345544s"
Jan 13 21:23:18.669354 containerd[1457]: time="2025-01-13T21:23:18.669317372Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Jan 13 21:23:18.692995 containerd[1457]: time="2025-01-13T21:23:18.692947477Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 13 21:23:19.241924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1524626509.mount: Deactivated successfully.
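Each completed pull logs its compressed size and wall-clock duration ("size ... in 1.756345544s"). A small parser, as an illustration over simplified stand-ins for the 'Pulled image' entries above, that extracts image name and duration and totals them:

    import re

    lines = [
        'Pulled image "registry.k8s.io/kube-apiserver:v1.29.12" ... in 1.904573765s',
        'Pulled image "registry.k8s.io/kube-scheduler:v1.29.12" ... in 975.25948ms',
        'Pulled image "registry.k8s.io/kube-proxy:v1.29.12" ... in 1.756345544s',
    ]  # stand-ins for the containerd log entries shown in this section

    total = 0.0
    for ln in lines:
        m = re.search(r'Pulled image "([^"]+)".* in ([\d.]+)(m?s)$', ln)
        if m:
            name, value, unit = m.groups()
            secs = float(value) / 1000 if unit == "ms" else float(value)
            total += secs
            print(f"{name}: {secs:.3f}s")
    print(f"total pull time: {total:.3f}s")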
Jan 13 21:23:19.860803 containerd[1457]: time="2025-01-13T21:23:19.860746968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:19.861494 containerd[1457]: time="2025-01-13T21:23:19.861431402Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Jan 13 21:23:19.862696 containerd[1457]: time="2025-01-13T21:23:19.862666358Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:19.865450 containerd[1457]: time="2025-01-13T21:23:19.865421975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:19.866553 containerd[1457]: time="2025-01-13T21:23:19.866509715Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.173524948s"
Jan 13 21:23:19.866553 containerd[1457]: time="2025-01-13T21:23:19.866549610Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 13 21:23:19.889817 containerd[1457]: time="2025-01-13T21:23:19.889772531Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 13 21:23:20.369252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3810270961.mount: Deactivated successfully.
Jan 13 21:23:20.375394 containerd[1457]: time="2025-01-13T21:23:20.375344192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:20.376043 containerd[1457]: time="2025-01-13T21:23:20.376001985Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Jan 13 21:23:20.377203 containerd[1457]: time="2025-01-13T21:23:20.377162972Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:20.379466 containerd[1457]: time="2025-01-13T21:23:20.379427319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:20.380124 containerd[1457]: time="2025-01-13T21:23:20.380071617Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 490.26398ms"
Jan 13 21:23:20.380178 containerd[1457]: time="2025-01-13T21:23:20.380118675Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 13 21:23:20.400325 containerd[1457]: time="2025-01-13T21:23:20.400274635Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jan 13 21:23:20.986800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount546986550.mount: Deactivated successfully.
Jan 13 21:23:22.019491 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 13 21:23:22.029254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:23:22.163164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:23:22.168585 (kubelet)[2046]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 21:23:22.354735 kubelet[2046]: E0113 21:23:22.354592 2046 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 21:23:22.359975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 21:23:22.360286 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 21:23:23.588864 containerd[1457]: time="2025-01-13T21:23:23.588781524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:23.600164 containerd[1457]: time="2025-01-13T21:23:23.600094199Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Jan 13 21:23:23.601442 containerd[1457]: time="2025-01-13T21:23:23.601401861Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:23.604382 containerd[1457]: time="2025-01-13T21:23:23.604330924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:23.605522 containerd[1457]: time="2025-01-13T21:23:23.605475040Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.20516586s"
Jan 13 21:23:23.605575 containerd[1457]: time="2025-01-13T21:23:23.605523401Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jan 13 21:23:25.787329 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:23:25.802303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:23:25.818752 systemd[1]: Reloading requested from client PID 2141 ('systemctl') (unit session-9.scope)...
Jan 13 21:23:25.818768 systemd[1]: Reloading...
Jan 13 21:23:25.895235 zram_generator::config[2183]: No configuration found.
Jan 13 21:23:26.136331 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:23:26.211758 systemd[1]: Reloading finished in 392 ms.
Jan 13 21:23:26.261520 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 13 21:23:26.261612 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 13 21:23:26.261869 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:23:26.264674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:23:26.405323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:23:26.410947 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 21:23:26.454667 kubelet[2229]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 21:23:26.454667 kubelet[2229]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
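kubelet is now on its third start: "Scheduled restart job, restart counter is at 2" is systemd's Restart= policy re-launching the failed unit, and the daemon-reload in between comes from the provisioning session. A sketch, assuming systemctl is available, to inspect the unit properties behind these messages:

    import subprocess

    # `systemctl show` exposes the restart policy (Restart=), the delay between
    # attempts (RestartUSec=), and the counter seen in the log (NRestarts=).
    out = subprocess.run(
        ["systemctl", "show", "kubelet.service",
         "-p", "Restart", "-p", "RestartUSec", "-p", "NRestarts"],
        capture_output=True, text=True,
    )
    print(out.stdout)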
Jan 13 21:23:26.454667 kubelet[2229]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 21:23:26.455039 kubelet[2229]: I0113 21:23:26.454713 2229 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 21:23:26.916999 kubelet[2229]: I0113 21:23:26.916959 2229 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 13 21:23:26.916999 kubelet[2229]: I0113 21:23:26.916991 2229 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 21:23:26.917263 kubelet[2229]: I0113 21:23:26.917241 2229 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 13 21:23:26.933407 kubelet[2229]: E0113 21:23:26.933371 2229 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.82:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.82:6443: connect: connection refused
Jan 13 21:23:26.935884 kubelet[2229]: I0113 21:23:26.935857 2229 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 21:23:26.948495 kubelet[2229]: I0113 21:23:26.948462 2229 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 21:23:26.948751 kubelet[2229]: I0113 21:23:26.948725 2229 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 21:23:26.948922 kubelet[2229]: I0113 21:23:26.948896 2229 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 21:23:26.949014 kubelet[2229]: I0113 21:23:26.948924 2229 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 21:23:26.949014 kubelet[2229]: I0113 21:23:26.948934 2229 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 21:23:26.949140 kubelet[2229]: I0113 21:23:26.949052 2229 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:23:26.949174 kubelet[2229]: I0113 21:23:26.949165 2229 kubelet.go:396] "Attempting to sync node with API server"
Jan 13 21:23:26.949200 kubelet[2229]: I0113 21:23:26.949189 2229 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 21:23:26.949241 kubelet[2229]: I0113 21:23:26.949220 2229 kubelet.go:312] "Adding apiserver pod source"
Jan 13 21:23:26.949241 kubelet[2229]: I0113 21:23:26.949237 2229 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 21:23:26.949732 kubelet[2229]: W0113 21:23:26.949625 2229 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.82:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Jan 13 21:23:26.949732 kubelet[2229]: E0113 21:23:26.949662 2229 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.82:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Jan 13 21:23:26.949732 kubelet[2229]: W0113 21:23:26.949681 2229 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Jan 13 21:23:26.949732 kubelet[2229]: E0113 21:23:26.949714 2229 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Jan 13 21:23:26.950235 kubelet[2229]: I0113 21:23:26.950220 2229 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 13 21:23:26.952628 kubelet[2229]: I0113 21:23:26.952598 2229 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 21:23:26.952755 kubelet[2229]: W0113 21:23:26.952662 2229 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
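Every reflector error above is the same underlying symptom: nothing is listening on 10.0.0.82:6443 yet, because the static-pod API server has not started. A direct probe of that address (standard library only; host and port taken from the log) reproduces the "connection refused":

    import socket

    ADDR = ("10.0.0.82", 6443)  # API server endpoint from the log

    try:
        with socket.create_connection(ADDR, timeout=2):
            print("apiserver is accepting connections")
    except ConnectionRefusedError:
        print("connection refused -- control plane not up yet")  # expected here
    except OSError as exc:
        print(f"probe failed: {exc}")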
Jan 13 21:23:26.953403 kubelet[2229]: I0113 21:23:26.953244 2229 server.go:1256] "Started kubelet"
Jan 13 21:23:26.954589 kubelet[2229]: I0113 21:23:26.954327 2229 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 21:23:26.956397 kubelet[2229]: I0113 21:23:26.955777 2229 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 21:23:26.956397 kubelet[2229]: I0113 21:23:26.955847 2229 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 21:23:26.956397 kubelet[2229]: I0113 21:23:26.955924 2229 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 13 21:23:26.956397 kubelet[2229]: I0113 21:23:26.955973 2229 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 13 21:23:26.956397 kubelet[2229]: W0113 21:23:26.956243 2229 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Jan 13 21:23:26.956397 kubelet[2229]: E0113 21:23:26.956278 2229 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Jan 13 21:23:26.956809 kubelet[2229]: I0113 21:23:26.956534 2229 server.go:461] "Adding debug handlers to kubelet server"
Jan 13 21:23:26.957553 kubelet[2229]: E0113 21:23:26.957213 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="200ms"
Jan 13 21:23:26.957553 kubelet[2229]: I0113 21:23:26.957383 2229 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 21:23:26.957609 kubelet[2229]: I0113 21:23:26.957595 2229 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 21:23:26.958413 kubelet[2229]: I0113 21:23:26.958176 2229 factory.go:221] Registration of the systemd container factory successfully
Jan 13 21:23:26.958413 kubelet[2229]: I0113 21:23:26.958239 2229 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 21:23:26.959060 kubelet[2229]: E0113 21:23:26.959035 2229 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 21:23:26.960133 kubelet[2229]: I0113 21:23:26.959415 2229 factory.go:221] Registration of the containerd container factory successfully
Jan 13 21:23:26.960133 kubelet[2229]: E0113 21:23:26.959583 2229 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.82:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.82:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5d7f24228ff3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 21:23:26.953222131 +0000 UTC m=+0.537360554,LastTimestamp:2025-01-13 21:23:26.953222131 +0000 UTC m=+0.537360554,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 13 21:23:26.972139 kubelet[2229]: I0113 21:23:26.972024 2229 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 21:23:26.973280 kubelet[2229]: I0113 21:23:26.973266 2229 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 21:23:26.973369 kubelet[2229]: I0113 21:23:26.973358 2229 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 21:23:26.973424 kubelet[2229]: I0113 21:23:26.973415 2229 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:23:26.973551 kubelet[2229]: I0113 21:23:26.973306 2229 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 21:23:26.974100 kubelet[2229]: I0113 21:23:26.973603 2229 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 21:23:26.974100 kubelet[2229]: I0113 21:23:26.973622 2229 kubelet.go:2329] "Starting kubelet main sync loop"
Jan 13 21:23:26.974100 kubelet[2229]: E0113 21:23:26.973657 2229 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 21:23:26.974100 kubelet[2229]: W0113 21:23:26.974038 2229 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Jan 13 21:23:26.974100 kubelet[2229]: E0113 21:23:26.974077 2229 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Jan 13 21:23:27.057901 kubelet[2229]: I0113 21:23:27.057845 2229 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 13 21:23:27.058192 kubelet[2229]: E0113 21:23:27.058176 2229 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost"
Jan 13 21:23:27.074521 kubelet[2229]: E0113 21:23:27.074476 2229 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 13 21:23:27.158223 kubelet[2229]: E0113 21:23:27.158192 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="400ms"
Jan 13 21:23:27.259887 kubelet[2229]: I0113 21:23:27.259745 2229 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 13 21:23:27.260264 kubelet[2229]: E0113 21:23:27.260228 2229 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost"
Jan 13 21:23:27.275345 kubelet[2229]: E0113 21:23:27.275280 2229 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 13 21:23:27.290145 kubelet[2229]: I0113 21:23:27.290064 2229 policy_none.go:49] "None policy: Start"
Jan 13 21:23:27.291248 kubelet[2229]: I0113 21:23:27.291216 2229 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 21:23:27.291329 kubelet[2229]: I0113 21:23:27.291257 2229 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 21:23:27.305179 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 13 21:23:27.320197 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 13 21:23:27.323810 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 13 21:23:27.334077 kubelet[2229]: I0113 21:23:27.334033 2229 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 21:23:27.334452 kubelet[2229]: I0113 21:23:27.334357 2229 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 21:23:27.335716 kubelet[2229]: E0113 21:23:27.335698 2229 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 13 21:23:27.559633 kubelet[2229]: E0113 21:23:27.559492 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="800ms"
Jan 13 21:23:27.662285 kubelet[2229]: I0113 21:23:27.662232 2229 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 13 21:23:27.662614 kubelet[2229]: E0113 21:23:27.662587 2229 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost"
Jan 13 21:23:27.675794 kubelet[2229]: I0113 21:23:27.675743 2229 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jan 13 21:23:27.677069 kubelet[2229]: I0113 21:23:27.677027 2229 topology_manager.go:215] "Topology Admit Handler" podUID="0a5735156e48dfe5696060f921ec2919" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jan 13 21:23:27.678261 kubelet[2229]: I0113 21:23:27.678228 2229 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jan 13 21:23:27.683711 systemd[1]: Created slice kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice - libcontainer container kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice.
Jan 13 21:23:27.704002 systemd[1]: Created slice kubepods-burstable-pod0a5735156e48dfe5696060f921ec2919.slice - libcontainer container kubepods-burstable-pod0a5735156e48dfe5696060f921ec2919.slice.
Jan 13 21:23:27.718650 systemd[1]: Created slice kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice - libcontainer container kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice.
Jan 13 21:23:27.760704 kubelet[2229]: I0113 21:23:27.760644 2229 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:23:27.760856 kubelet[2229]: I0113 21:23:27.760726 2229 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost"
Jan 13 21:23:27.760856 kubelet[2229]: I0113 21:23:27.760750 2229 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a5735156e48dfe5696060f921ec2919-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a5735156e48dfe5696060f921ec2919\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 21:23:27.760856 kubelet[2229]: I0113 21:23:27.760771 2229 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a5735156e48dfe5696060f921ec2919-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0a5735156e48dfe5696060f921ec2919\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 21:23:27.760856 kubelet[2229]: I0113 21:23:27.760789 2229 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:23:27.760856 kubelet[2229]: I0113 21:23:27.760810 2229 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:23:27.761012 kubelet[2229]: I0113 21:23:27.760831 2229 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:23:27.761012 kubelet[2229]: I0113 21:23:27.760891 2229 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a5735156e48dfe5696060f921ec2919-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a5735156e48dfe5696060f921ec2919\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 21:23:27.761012 kubelet[2229]: I0113 21:23:27.760969 2229 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:23:27.807282 kubelet[2229]: W0113 21:23:27.807226 2229 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Jan 13 21:23:27.807282 kubelet[2229]: E0113 21:23:27.807275 2229 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Jan 13 21:23:27.915365 kubelet[2229]: W0113 21:23:27.915204 2229 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.82:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Jan 13 21:23:27.915365 kubelet[2229]: E0113 21:23:27.915279 2229 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.82:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Jan 13 21:23:28.000949 kubelet[2229]: E0113 21:23:28.000887 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:28.001645 containerd[1457]: time="2025-01-13T21:23:28.001600985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}"
Jan 13 21:23:28.017953 kubelet[2229]: E0113 21:23:28.017908 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:28.018541 containerd[1457]: time="2025-01-13T21:23:28.018486882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0a5735156e48dfe5696060f921ec2919,Namespace:kube-system,Attempt:0,}"
Jan 13 21:23:28.020813 kubelet[2229]: E0113 21:23:28.020767 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:28.021185 containerd[1457]: time="2025-01-13T21:23:28.021152171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}"
Jan 13 21:23:28.360520 kubelet[2229]: E0113 21:23:28.360370 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="1.6s"
Jan 13 21:23:28.430132 kubelet[2229]: W0113 21:23:28.430045 2229 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Jan 13 21:23:28.430230 kubelet[2229]: E0113 21:23:28.430139 2229 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Jan 13 21:23:28.464676 kubelet[2229]: I0113 21:23:28.464633 2229 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 13 21:23:28.464999 kubelet[2229]: E0113 21:23:28.464973 2229 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost"
Jan 13 21:23:28.547650 kubelet[2229]: W0113 21:23:28.547572 2229 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Jan 13 21:23:28.547650 kubelet[2229]: E0113 21:23:28.547644 2229 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Jan 13 21:23:28.608884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3142714058.mount: Deactivated successfully.
Jan 13 21:23:28.616120 containerd[1457]: time="2025-01-13T21:23:28.615958695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 21:23:28.617118 containerd[1457]: time="2025-01-13T21:23:28.617069929Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 21:23:28.618004 containerd[1457]: time="2025-01-13T21:23:28.617953707Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 13 21:23:28.619017 containerd[1457]: time="2025-01-13T21:23:28.618970373Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 21:23:28.619840 containerd[1457]: time="2025-01-13T21:23:28.619812733Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 21:23:28.620801 containerd[1457]: time="2025-01-13T21:23:28.620732979Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 21:23:28.621775 containerd[1457]: time="2025-01-13T21:23:28.621725901Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 21:23:28.626371 containerd[1457]: time="2025-01-13T21:23:28.626315388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 21:23:28.627298 containerd[1457]: time="2025-01-13T21:23:28.627255932Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 625.564476ms"
Jan 13 21:23:28.628012 containerd[1457]: time="2025-01-13T21:23:28.627972816Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 606.746255ms"
Jan 13 21:23:28.630532 containerd[1457]: time="2025-01-13T21:23:28.630492822Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 611.924166ms"
Jan 13 21:23:28.787641 containerd[1457]: time="2025-01-13T21:23:28.787527117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:23:28.787641 containerd[1457]: time="2025-01-13T21:23:28.787598872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:23:28.787641 containerd[1457]: time="2025-01-13T21:23:28.787618088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:23:28.787857 containerd[1457]: time="2025-01-13T21:23:28.787711012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:23:28.788227 containerd[1457]: time="2025-01-13T21:23:28.787897792Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:23:28.788227 containerd[1457]: time="2025-01-13T21:23:28.787948317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:23:28.788227 containerd[1457]: time="2025-01-13T21:23:28.787970699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:23:28.788227 containerd[1457]: time="2025-01-13T21:23:28.788058674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:23:28.790172 containerd[1457]: time="2025-01-13T21:23:28.789827962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:23:28.790172 containerd[1457]: time="2025-01-13T21:23:28.789880401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:23:28.790172 containerd[1457]: time="2025-01-13T21:23:28.789894888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:23:28.790172 containerd[1457]: time="2025-01-13T21:23:28.789969427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:23:28.814287 systemd[1]: Started cri-containerd-68f14a010f15f72e8b5367082f9cd529bdee43f4181f6023d246c3320e0035ba.scope - libcontainer container 68f14a010f15f72e8b5367082f9cd529bdee43f4181f6023d246c3320e0035ba.
Jan 13 21:23:28.818849 systemd[1]: Started cri-containerd-92cb55120bd28a6911d992497c4a10e00647fce488b1a75cd1c3209e8bdb4ff1.scope - libcontainer container 92cb55120bd28a6911d992497c4a10e00647fce488b1a75cd1c3209e8bdb4ff1.
Jan 13 21:23:28.821008 systemd[1]: Started cri-containerd-b0c71457d91d9089bd0e9e2ceae6d6ecec84a84f4083788c912a895184425a4b.scope - libcontainer container b0c71457d91d9089bd0e9e2ceae6d6ecec84a84f4083788c912a895184425a4b.
Jan 13 21:23:28.861603 containerd[1457]: time="2025-01-13T21:23:28.861483482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0a5735156e48dfe5696060f921ec2919,Namespace:kube-system,Attempt:0,} returns sandbox id \"68f14a010f15f72e8b5367082f9cd529bdee43f4181f6023d246c3320e0035ba\""
Jan 13 21:23:28.864881 kubelet[2229]: E0113 21:23:28.864083 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:28.868855 containerd[1457]: time="2025-01-13T21:23:28.868187554Z" level=info msg="CreateContainer within sandbox \"68f14a010f15f72e8b5367082f9cd529bdee43f4181f6023d246c3320e0035ba\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 13 21:23:28.870426 containerd[1457]: time="2025-01-13T21:23:28.870368965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"92cb55120bd28a6911d992497c4a10e00647fce488b1a75cd1c3209e8bdb4ff1\""
Jan 13 21:23:28.871875 kubelet[2229]: E0113 21:23:28.871844 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:28.874396 containerd[1457]: time="2025-01-13T21:23:28.874300568Z" level=info msg="CreateContainer within sandbox \"92cb55120bd28a6911d992497c4a10e00647fce488b1a75cd1c3209e8bdb4ff1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 13 21:23:28.875101 containerd[1457]: time="2025-01-13T21:23:28.874968762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0c71457d91d9089bd0e9e2ceae6d6ecec84a84f4083788c912a895184425a4b\""
Jan 13 21:23:28.875798 kubelet[2229]: E0113 21:23:28.875721 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:28.877708 containerd[1457]: time="2025-01-13T21:23:28.877679635Z" level=info msg="CreateContainer within sandbox \"b0c71457d91d9089bd0e9e2ceae6d6ecec84a84f4083788c912a895184425a4b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 13 21:23:28.893935 containerd[1457]: time="2025-01-13T21:23:28.893877743Z" level=info msg="CreateContainer within sandbox \"68f14a010f15f72e8b5367082f9cd529bdee43f4181f6023d246c3320e0035ba\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dcc13643ff9d0a79a356958f61d39031f1d9c930509ff3e439c16bb8f4c5f5ea\""
Jan 13 21:23:28.894648 containerd[1457]: time="2025-01-13T21:23:28.894610567Z" level=info msg="StartContainer for \"dcc13643ff9d0a79a356958f61d39031f1d9c930509ff3e439c16bb8f4c5f5ea\""
Jan 13 21:23:28.897520 containerd[1457]: time="2025-01-13T21:23:28.897475179Z" level=info msg="CreateContainer within sandbox \"92cb55120bd28a6911d992497c4a10e00647fce488b1a75cd1c3209e8bdb4ff1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0b1d60a194388f74ec19ecc9ca2feee0dacfb93b989c6a64c7b409e9b85b5dd2\""
Jan 13 21:23:28.898706 containerd[1457]: time="2025-01-13T21:23:28.897994553Z" level=info msg="StartContainer for \"0b1d60a194388f74ec19ecc9ca2feee0dacfb93b989c6a64c7b409e9b85b5dd2\""
Jan 13 21:23:28.908184 containerd[1457]: time="2025-01-13T21:23:28.908133117Z" level=info msg="CreateContainer within sandbox \"b0c71457d91d9089bd0e9e2ceae6d6ecec84a84f4083788c912a895184425a4b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"051158beada6d23e4afcca731e8e7dd5a48c4bba5e145a43d5b6f592f905db23\""
Jan 13 21:23:28.909243 containerd[1457]: time="2025-01-13T21:23:28.909203574Z" level=info msg="StartContainer for \"051158beada6d23e4afcca731e8e7dd5a48c4bba5e145a43d5b6f592f905db23\""
Jan 13 21:23:28.928366 systemd[1]: Started cri-containerd-dcc13643ff9d0a79a356958f61d39031f1d9c930509ff3e439c16bb8f4c5f5ea.scope - libcontainer container dcc13643ff9d0a79a356958f61d39031f1d9c930509ff3e439c16bb8f4c5f5ea.
Jan 13 21:23:28.933683 systemd[1]: Started cri-containerd-0b1d60a194388f74ec19ecc9ca2feee0dacfb93b989c6a64c7b409e9b85b5dd2.scope - libcontainer container 0b1d60a194388f74ec19ecc9ca2feee0dacfb93b989c6a64c7b409e9b85b5dd2.
Jan 13 21:23:28.938433 systemd[1]: Started cri-containerd-051158beada6d23e4afcca731e8e7dd5a48c4bba5e145a43d5b6f592f905db23.scope - libcontainer container 051158beada6d23e4afcca731e8e7dd5a48c4bba5e145a43d5b6f592f905db23.
Jan 13 21:23:28.979730 containerd[1457]: time="2025-01-13T21:23:28.979686725Z" level=info msg="StartContainer for \"dcc13643ff9d0a79a356958f61d39031f1d9c930509ff3e439c16bb8f4c5f5ea\" returns successfully"
Jan 13 21:23:28.990631 kubelet[2229]: E0113 21:23:28.990527 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:28.998859 containerd[1457]: time="2025-01-13T21:23:28.998773450Z" level=info msg="StartContainer for \"051158beada6d23e4afcca731e8e7dd5a48c4bba5e145a43d5b6f592f905db23\" returns successfully"
Jan 13 21:23:29.001568 containerd[1457]: time="2025-01-13T21:23:29.001520962Z" level=info msg="StartContainer for \"0b1d60a194388f74ec19ecc9ca2feee0dacfb93b989c6a64c7b409e9b85b5dd2\" returns successfully"
Jan 13 21:23:29.962812 kubelet[2229]: E0113 21:23:29.962778 2229 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jan 13 21:23:30.001262 kubelet[2229]: E0113 21:23:30.001234 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:30.002455 kubelet[2229]: E0113 21:23:30.002430 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:30.002533 kubelet[2229]: E0113 21:23:30.002507 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:30.066407 kubelet[2229]: I0113 21:23:30.066380 2229 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 13 21:23:30.072030 kubelet[2229]: I0113 21:23:30.071993 2229 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Jan 13 21:23:30.076646 kubelet[2229]: E0113 21:23:30.076630 2229 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:23:30.177136 kubelet[2229]: E0113 21:23:30.177070 2229 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:23:30.277650 kubelet[2229]: E0113 21:23:30.277575 2229 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:23:30.378051 kubelet[2229]: E0113 21:23:30.378028 2229 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:23:30.478722 kubelet[2229]: E0113 21:23:30.478705 2229 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:23:30.579433 kubelet[2229]: E0113 21:23:30.579328 2229 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:23:30.679865 kubelet[2229]: E0113 21:23:30.679839 2229 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 21:23:30.952078 kubelet[2229]: I0113 21:23:30.951935 2229 apiserver.go:52] "Watching apiserver"
Jan 13 21:23:30.956793 kubelet[2229]: I0113 21:23:30.956762 2229 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jan 13 21:23:31.007618 kubelet[2229]: E0113 21:23:31.007583 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:31.009235 kubelet[2229]: E0113 21:23:31.009216 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:31.009381 kubelet[2229]: E0113 21:23:31.009319 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:32.004391 kubelet[2229]: E0113 21:23:32.004335 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:32.004552 kubelet[2229]: E0113 21:23:32.004416 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:32.004663 kubelet[2229]: E0113 21:23:32.004633 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:32.507862 systemd[1]: Reloading requested from client PID 2514 ('systemctl') (unit session-9.scope)...
Jan 13 21:23:32.507878 systemd[1]: Reloading...
Jan 13 21:23:32.582171 zram_generator::config[2558]: No configuration found.
Jan 13 21:23:32.679939 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:23:32.768243 systemd[1]: Reloading finished in 259 ms.
Jan 13 21:23:32.812641 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:23:32.831370 systemd[1]: kubelet.service: Deactivated successfully.
Jan 13 21:23:32.831647 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:23:32.831696 systemd[1]: kubelet.service: Consumed 1.002s CPU time, 116.9M memory peak, 0B memory swap peak.
Jan 13 21:23:32.839458 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:23:32.985014 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:23:32.995576 (kubelet)[2598]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 21:23:33.037426 kubelet[2598]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 21:23:33.037426 kubelet[2598]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 21:23:33.037426 kubelet[2598]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 21:23:33.037426 kubelet[2598]: I0113 21:23:33.037046 2598 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 21:23:33.042748 kubelet[2598]: I0113 21:23:33.042721 2598 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 13 21:23:33.042748 kubelet[2598]: I0113 21:23:33.042742 2598 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 21:23:33.042919 kubelet[2598]: I0113 21:23:33.042907 2598 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 13 21:23:33.044226 kubelet[2598]: I0113 21:23:33.044205 2598 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 13 21:23:33.047180 kubelet[2598]: I0113 21:23:33.047080 2598 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 21:23:33.054425 kubelet[2598]: I0113 21:23:33.054365 2598 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 21:23:33.054587 kubelet[2598]: I0113 21:23:33.054564 2598 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 21:23:33.054746 kubelet[2598]: I0113 21:23:33.054723 2598 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 21:23:33.054837 kubelet[2598]: I0113 21:23:33.054750 2598 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 21:23:33.054837 kubelet[2598]: I0113 21:23:33.054760 2598 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 21:23:33.054837 kubelet[2598]: I0113 21:23:33.054791 2598 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:23:33.054917 kubelet[2598]: I0113 21:23:33.054890 2598 kubelet.go:396] "Attempting to sync node with API server"
Jan 13 21:23:33.054917 kubelet[2598]: I0113 21:23:33.054903 2598 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 21:23:33.054888 sudo[2613]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 13 21:23:33.055292 kubelet[2598]: I0113 21:23:33.054928 2598 kubelet.go:312] "Adding apiserver pod source"
Jan 13 21:23:33.055292 kubelet[2598]: I0113 21:23:33.054942 2598 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 21:23:33.055235 sudo[2613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 13 21:23:33.057682 kubelet[2598]: I0113 21:23:33.055716 2598 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 13 21:23:33.057682 kubelet[2598]: I0113 21:23:33.056243 2598 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 21:23:33.057789 kubelet[2598]: I0113 21:23:33.057769 2598 server.go:1256] "Started kubelet"
Jan 13 21:23:33.058048 kubelet[2598]: I0113 21:23:33.058016 2598 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 21:23:33.058993 kubelet[2598]: I0113 21:23:33.058964 2598 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 21:23:33.059519 kubelet[2598]: I0113 21:23:33.059496 2598 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 21:23:33.062813 kubelet[2598]: I0113 21:23:33.062789 2598 server.go:461] "Adding debug handlers to kubelet server"
Jan 13 21:23:33.066460 kubelet[2598]: E0113 21:23:33.066431 2598 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 21:23:33.069821 kubelet[2598]: I0113 21:23:33.069797 2598 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 21:23:33.072535 kubelet[2598]: I0113 21:23:33.072507 2598 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 21:23:33.073980 kubelet[2598]: I0113 21:23:33.073953 2598 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 13 21:23:33.074161 kubelet[2598]: I0113 21:23:33.074140 2598 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 13 21:23:33.074876 kubelet[2598]: I0113 21:23:33.074776 2598 factory.go:221] Registration of the systemd container factory successfully
Jan 13 21:23:33.074876 kubelet[2598]: I0113 21:23:33.074863 2598 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 21:23:33.077171 kubelet[2598]: I0113 21:23:33.077151 2598 factory.go:221] Registration of the containerd container factory successfully
Jan 13 21:23:33.084076 kubelet[2598]: I0113 21:23:33.084039 2598 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 21:23:33.085305 kubelet[2598]: I0113 21:23:33.085228 2598 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 21:23:33.085305 kubelet[2598]: I0113 21:23:33.085250 2598 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 21:23:33.085305 kubelet[2598]: I0113 21:23:33.085267 2598 kubelet.go:2329] "Starting kubelet main sync loop"
Jan 13 21:23:33.085305 kubelet[2598]: E0113 21:23:33.085307 2598 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 21:23:33.110331 kubelet[2598]: I0113 21:23:33.110277 2598 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 21:23:33.110331 kubelet[2598]: I0113 21:23:33.110300 2598 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 21:23:33.110331 kubelet[2598]: I0113 21:23:33.110317 2598 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:23:33.110495 kubelet[2598]: I0113 21:23:33.110456 2598 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 13 21:23:33.110495 kubelet[2598]: I0113 21:23:33.110479 2598 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 13 21:23:33.110495 kubelet[2598]: I0113 21:23:33.110485 2598 policy_none.go:49] "None policy: Start"
Jan 13 21:23:33.111213 kubelet[2598]: I0113 21:23:33.111195 2598 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 21:23:33.111270 kubelet[2598]: I0113 21:23:33.111260 2598 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 21:23:33.111464 kubelet[2598]: I0113 21:23:33.111451 2598 state_mem.go:75] "Updated machine memory state"
Jan 13 21:23:33.116483 kubelet[2598]: I0113 21:23:33.116029 2598 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 21:23:33.116483 kubelet[2598]: I0113 21:23:33.116300 2598 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 21:23:33.177049 kubelet[2598]: I0113 21:23:33.177019 2598 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 13 21:23:33.185759 kubelet[2598]: I0113 21:23:33.185713 2598 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Jan 13 21:23:33.185870 kubelet[2598]: I0113 21:23:33.185810 2598 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Jan 13 21:23:33.186462 kubelet[2598]: I0113 21:23:33.186065 2598 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jan 13 21:23:33.186462 kubelet[2598]: I0113 21:23:33.186152 2598 topology_manager.go:215] "Topology Admit Handler" podUID="0a5735156e48dfe5696060f921ec2919" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jan 13 21:23:33.186462 kubelet[2598]: I0113 21:23:33.186180 2598 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jan 13 21:23:33.196526 kubelet[2598]: E0113 21:23:33.196251 2598 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:23:33.196526 kubelet[2598]: E0113 21:23:33.196442 2598 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jan 13 21:23:33.197625 kubelet[2598]: E0113 21:23:33.197594 2598 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 13 21:23:33.275422 kubelet[2598]: I0113 21:23:33.275373 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a5735156e48dfe5696060f921ec2919-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a5735156e48dfe5696060f921ec2919\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 21:23:33.275422 kubelet[2598]: I0113 21:23:33.275424 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a5735156e48dfe5696060f921ec2919-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0a5735156e48dfe5696060f921ec2919\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 21:23:33.275556 kubelet[2598]: I0113 21:23:33.275443 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:23:33.275556 kubelet[2598]: I0113 21:23:33.275462 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost"
Jan 13 21:23:33.275556 kubelet[2598]: I0113 21:23:33.275479 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a5735156e48dfe5696060f921ec2919-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a5735156e48dfe5696060f921ec2919\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 21:23:33.275556 kubelet[2598]: I0113 21:23:33.275496 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:23:33.275556 kubelet[2598]: I0113 21:23:33.275522 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:23:33.275673 kubelet[2598]: I0113 21:23:33.275539 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:23:33.275673 kubelet[2598]: I0113 21:23:33.275560 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 21:23:33.498837 kubelet[2598]: E0113 21:23:33.498133 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:33.498837 kubelet[2598]: E0113 21:23:33.498421 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:33.498837 kubelet[2598]: E0113 21:23:33.498491 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:33.541969 sudo[2613]: pam_unix(sudo:session): session closed for user root
Jan 13 21:23:34.056180 kubelet[2598]: I0113 21:23:34.056142 2598 apiserver.go:52] "Watching apiserver"
Jan 13 21:23:34.211123 kubelet[2598]: E0113 21:23:34.211081 2598 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 13 21:23:34.211536 kubelet[2598]: E0113 21:23:34.211519 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:34.303227 kubelet[2598]: E0113 21:23:34.303200 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:34.303677 kubelet[2598]: E0113 21:23:34.303588 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:34.319656 kubelet[2598]: I0113 21:23:34.319531 2598 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.317905939 podStartE2EDuration="3.317905939s" podCreationTimestamp="2025-01-13 21:23:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:23:34.317719011 +0000 UTC m=+1.317735017" watchObservedRunningTime="2025-01-13 21:23:34.317905939 +0000 UTC m=+1.317921935"
Jan 13 21:23:34.332356 kubelet[2598]: I0113 21:23:34.332304 2598 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.332257474 podStartE2EDuration="3.332257474s" podCreationTimestamp="2025-01-13 21:23:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:23:34.331851448 +0000 UTC m=+1.331867464" watchObservedRunningTime="2025-01-13 21:23:34.332257474 +0000 UTC m=+1.332273470"
Jan 13 21:23:34.332512 kubelet[2598]: I0113 21:23:34.332424 2598 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.332404425 podStartE2EDuration="3.332404425s" podCreationTimestamp="2025-01-13 21:23:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:23:34.325987993 +0000 UTC m=+1.326003989" watchObservedRunningTime="2025-01-13 21:23:34.332404425 +0000 UTC m=+1.332420431"
Jan 13 21:23:34.374586 kubelet[2598]: I0113 21:23:34.374535 2598 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jan 13 21:23:34.916502 sudo[1652]: pam_unix(sudo:session): session closed for user root
Jan 13 21:23:34.918232 sshd[1649]: pam_unix(sshd:session): session closed for user core
Jan 13 21:23:34.922434 systemd[1]: sshd@8-10.0.0.82:22-10.0.0.1:55168.service: Deactivated successfully.
Jan 13 21:23:34.924392 systemd[1]: session-9.scope: Deactivated successfully.
Jan 13 21:23:34.924601 systemd[1]: session-9.scope: Consumed 4.414s CPU time, 192.4M memory peak, 0B memory swap peak.
Jan 13 21:23:34.925052 systemd-logind[1437]: Session 9 logged out. Waiting for processes to exit.
Jan 13 21:23:34.925884 systemd-logind[1437]: Removed session 9.
Jan 13 21:23:35.133269 kubelet[2598]: E0113 21:23:35.133236 2598 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 13 21:23:35.133669 kubelet[2598]: E0113 21:23:35.133641 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:36.099486 kubelet[2598]: E0113 21:23:36.099434 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:37.025692 kubelet[2598]: E0113 21:23:37.025652 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:40.751039 kubelet[2598]: E0113 21:23:40.751009 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:41.106896 kubelet[2598]: E0113 21:23:41.106766 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:41.500216 kubelet[2598]: E0113 21:23:41.500194 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:42.108081 kubelet[2598]: E0113 21:23:42.108055 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:43.932308 update_engine[1440]: I20250113 21:23:43.932200 1440 update_attempter.cc:509] Updating boot flags...
Jan 13 21:23:44.003146 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2682)
Jan 13 21:23:44.035201 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2682)
Jan 13 21:23:44.060782 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2682)
Jan 13 21:23:46.661678 kubelet[2598]: I0113 21:23:46.661637 2598 topology_manager.go:215] "Topology Admit Handler" podUID="c6bfe790-1431-4bab-8106-ecec7625acf2" podNamespace="kube-system" podName="kube-proxy-hjb2n"
Jan 13 21:23:46.668881 kubelet[2598]: I0113 21:23:46.668845 2598 topology_manager.go:215] "Topology Admit Handler" podUID="3bb7ed01-29cb-47b4-b660-8fa1076ee161" podNamespace="kube-system" podName="cilium-mwhsg"
Jan 13 21:23:46.674486 systemd[1]: Created slice kubepods-besteffort-podc6bfe790_1431_4bab_8106_ecec7625acf2.slice - libcontainer container kubepods-besteffort-podc6bfe790_1431_4bab_8106_ecec7625acf2.slice.
Jan 13 21:23:46.696402 systemd[1]: Created slice kubepods-burstable-pod3bb7ed01_29cb_47b4_b660_8fa1076ee161.slice - libcontainer container kubepods-burstable-pod3bb7ed01_29cb_47b4_b660_8fa1076ee161.slice.
Jan 13 21:23:46.719885 kubelet[2598]: I0113 21:23:46.719819 2598 topology_manager.go:215] "Topology Admit Handler" podUID="0f5bfa9f-6dfe-4d69-afbb-eb2a4f576204" podNamespace="kube-system" podName="cilium-operator-5cc964979-qw4pj"
Jan 13 21:23:46.726973 systemd[1]: Created slice kubepods-besteffort-pod0f5bfa9f_6dfe_4d69_afbb_eb2a4f576204.slice - libcontainer container kubepods-besteffort-pod0f5bfa9f_6dfe_4d69_afbb_eb2a4f576204.slice.
Jan 13 21:23:46.762042 kubelet[2598]: I0113 21:23:46.761974 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-cilium-run\") pod \"cilium-mwhsg\" (UID: \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\") " pod="kube-system/cilium-mwhsg"
Jan 13 21:23:46.762042 kubelet[2598]: I0113 21:23:46.762046 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3bb7ed01-29cb-47b4-b660-8fa1076ee161-hubble-tls\") pod \"cilium-mwhsg\" (UID: \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\") " pod="kube-system/cilium-mwhsg"
Jan 13 21:23:46.762234 kubelet[2598]: I0113 21:23:46.762104 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6bfe790-1431-4bab-8106-ecec7625acf2-lib-modules\") pod \"kube-proxy-hjb2n\" (UID: \"c6bfe790-1431-4bab-8106-ecec7625acf2\") " pod="kube-system/kube-proxy-hjb2n"
Jan 13 21:23:46.762234 kubelet[2598]: I0113 21:23:46.762172 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-hostproc\") pod \"cilium-mwhsg\" (UID: \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\") " pod="kube-system/cilium-mwhsg"
Jan 13 21:23:46.762234 kubelet[2598]: I0113 21:23:46.762191 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-lib-modules\") pod \"cilium-mwhsg\" (UID: \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\") " pod="kube-system/cilium-mwhsg"
Jan 13 21:23:46.762234 kubelet[2598]: I0113 21:23:46.762212 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-host-proc-sys-kernel\") pod \"cilium-mwhsg\" (UID: \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\") " pod="kube-system/cilium-mwhsg"
Jan 13 21:23:46.762234 kubelet[2598]: I0113 21:23:46.762237 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wld2\" (UniqueName: \"kubernetes.io/projected/0f5bfa9f-6dfe-4d69-afbb-eb2a4f576204-kube-api-access-6wld2\") pod \"cilium-operator-5cc964979-qw4pj\" (UID: \"0f5bfa9f-6dfe-4d69-afbb-eb2a4f576204\") " pod="kube-system/cilium-operator-5cc964979-qw4pj"
Jan 13 21:23:46.762366 kubelet[2598]: I0113 21:23:46.762259 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6bfe790-1431-4bab-8106-ecec7625acf2-xtables-lock\") pod \"kube-proxy-hjb2n\" (UID: \"c6bfe790-1431-4bab-8106-ecec7625acf2\") " pod="kube-system/kube-proxy-hjb2n"
Jan 13 21:23:46.762366 kubelet[2598]: I0113 21:23:46.762278 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-bpf-maps\") pod \"cilium-mwhsg\" (UID: \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\") " pod="kube-system/cilium-mwhsg"
Jan 13 21:23:46.762366 kubelet[2598]: I0113 21:23:46.762297 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-cni-path\") pod \"cilium-mwhsg\" (UID: \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\") " pod="kube-system/cilium-mwhsg"
Jan 13 21:23:46.762366 kubelet[2598]: I0113 21:23:46.762317 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-985jd\" (UniqueName: \"kubernetes.io/projected/c6bfe790-1431-4bab-8106-ecec7625acf2-kube-api-access-985jd\") pod \"kube-proxy-hjb2n\" (UID: \"c6bfe790-1431-4bab-8106-ecec7625acf2\") " pod="kube-system/kube-proxy-hjb2n"
Jan 13 21:23:46.762366 kubelet[2598]: I0113 21:23:46.762335 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-etc-cni-netd\") pod \"cilium-mwhsg\" (UID: \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\") " pod="kube-system/cilium-mwhsg"
Jan 13 21:23:46.762366 kubelet[2598]: I0113 21:23:46.762368 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3bb7ed01-29cb-47b4-b660-8fa1076ee161-cilium-config-path\") pod \"cilium-mwhsg\" (UID: \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\") " pod="kube-system/cilium-mwhsg"
Jan 13 21:23:46.762514 kubelet[2598]: I0113 21:23:46.762387 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdvk8\" (UniqueName: \"kubernetes.io/projected/3bb7ed01-29cb-47b4-b660-8fa1076ee161-kube-api-access-cdvk8\") pod \"cilium-mwhsg\" (UID: \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\") " pod="kube-system/cilium-mwhsg"
Jan 13 21:23:46.762514 kubelet[2598]: I0113 21:23:46.762407 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-xtables-lock\") pod \"cilium-mwhsg\" (UID: \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\") " pod="kube-system/cilium-mwhsg"
Jan 13 21:23:46.762514 kubelet[2598]: I0113 21:23:46.762427 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-host-proc-sys-net\") pod \"cilium-mwhsg\" (UID: \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\") " pod="kube-system/cilium-mwhsg"
Jan 13 21:23:46.762514 kubelet[2598]: I0113 21:23:46.762448 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3bb7ed01-29cb-47b4-b660-8fa1076ee161-clustermesh-secrets\") pod \"cilium-mwhsg\" (UID: \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\") " pod="kube-system/cilium-mwhsg"
Jan 13 21:23:46.762514 kubelet[2598]: I0113 21:23:46.762473 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f5bfa9f-6dfe-4d69-afbb-eb2a4f576204-cilium-config-path\") pod \"cilium-operator-5cc964979-qw4pj\" (UID: \"0f5bfa9f-6dfe-4d69-afbb-eb2a4f576204\") " pod="kube-system/cilium-operator-5cc964979-qw4pj"
Jan 13 21:23:46.762632 kubelet[2598]: I0113 21:23:46.762496 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c6bfe790-1431-4bab-8106-ecec7625acf2-kube-proxy\") pod \"kube-proxy-hjb2n\" (UID: \"c6bfe790-1431-4bab-8106-ecec7625acf2\") " pod="kube-system/kube-proxy-hjb2n"
Jan 13 21:23:46.762632 kubelet[2598]: I0113 21:23:46.762515 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-cilium-cgroup\") pod \"cilium-mwhsg\" (UID: \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\") " pod="kube-system/cilium-mwhsg"
Jan 13 21:23:46.768624 kubelet[2598]: I0113 21:23:46.768591 2598 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 13 21:23:46.768968 containerd[1457]: time="2025-01-13T21:23:46.768929302Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 13 21:23:46.769326 kubelet[2598]: I0113 21:23:46.769251 2598 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 13 21:23:46.991074 kubelet[2598]: E0113 21:23:46.990957 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:46.991561 containerd[1457]: time="2025-01-13T21:23:46.991519900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hjb2n,Uid:c6bfe790-1431-4bab-8106-ecec7625acf2,Namespace:kube-system,Attempt:0,}"
Jan 13 21:23:46.999451 kubelet[2598]: E0113 21:23:46.999417 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:47.000044 containerd[1457]: time="2025-01-13T21:23:46.999752445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mwhsg,Uid:3bb7ed01-29cb-47b4-b660-8fa1076ee161,Namespace:kube-system,Attempt:0,}"
Jan 13 21:23:47.018801 containerd[1457]: time="2025-01-13T21:23:47.018587727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:23:47.018801 containerd[1457]: time="2025-01-13T21:23:47.018635668Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:23:47.018801 containerd[1457]: time="2025-01-13T21:23:47.018645667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:23:47.018801 containerd[1457]: time="2025-01-13T21:23:47.018718755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:23:47.024027 containerd[1457]: time="2025-01-13T21:23:47.023808118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:23:47.024027 containerd[1457]: time="2025-01-13T21:23:47.023861920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:23:47.024027 containerd[1457]: time="2025-01-13T21:23:47.023877699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:23:47.024620 containerd[1457]: time="2025-01-13T21:23:47.024556522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:23:47.030164 kubelet[2598]: E0113 21:23:47.029882 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:47.030164 kubelet[2598]: E0113 21:23:47.030152 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:47.030633 containerd[1457]: time="2025-01-13T21:23:47.030582526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-qw4pj,Uid:0f5bfa9f-6dfe-4d69-afbb-eb2a4f576204,Namespace:kube-system,Attempt:0,}"
Jan 13 21:23:47.038283 systemd[1]: Started cri-containerd-3f578c30c08a3c26eab29cab03008c5bfe88ddd649ad427ffd4f557b22abab89.scope - libcontainer container 3f578c30c08a3c26eab29cab03008c5bfe88ddd649ad427ffd4f557b22abab89.
Jan 13 21:23:47.042170 systemd[1]: Started cri-containerd-d5d35e2d57d7d77c4ceb9e6cf86fdd78ebf7328a70e6353db9737687c1624758.scope - libcontainer container d5d35e2d57d7d77c4ceb9e6cf86fdd78ebf7328a70e6353db9737687c1624758.
Jan 13 21:23:47.070047 containerd[1457]: time="2025-01-13T21:23:47.069794736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hjb2n,Uid:c6bfe790-1431-4bab-8106-ecec7625acf2,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f578c30c08a3c26eab29cab03008c5bfe88ddd649ad427ffd4f557b22abab89\""
Jan 13 21:23:47.071073 kubelet[2598]: E0113 21:23:47.070914 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:47.071386 containerd[1457]: time="2025-01-13T21:23:47.071340268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mwhsg,Uid:3bb7ed01-29cb-47b4-b660-8fa1076ee161,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5d35e2d57d7d77c4ceb9e6cf86fdd78ebf7328a70e6353db9737687c1624758\""
Jan 13 21:23:47.072273 kubelet[2598]: E0113 21:23:47.072246 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:47.073406 containerd[1457]: time="2025-01-13T21:23:47.073364726Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 13 21:23:47.073885 containerd[1457]: time="2025-01-13T21:23:47.073849272Z" level=info msg="CreateContainer within sandbox \"3f578c30c08a3c26eab29cab03008c5bfe88ddd649ad427ffd4f557b22abab89\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 13 21:23:47.074212 containerd[1457]: time="2025-01-13T21:23:47.073771956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:23:47.074457 containerd[1457]: time="2025-01-13T21:23:47.074409672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:23:47.074496 containerd[1457]: time="2025-01-13T21:23:47.074431844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:23:47.074586 containerd[1457]: time="2025-01-13T21:23:47.074534818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:23:47.093661 containerd[1457]: time="2025-01-13T21:23:47.093580068Z" level=info msg="CreateContainer within sandbox \"3f578c30c08a3c26eab29cab03008c5bfe88ddd649ad427ffd4f557b22abab89\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"61aeb52bcfe8c5ff061d2646e55b94daa1553e066c6412315deb03d6671d124c\""
Jan 13 21:23:47.094152 containerd[1457]: time="2025-01-13T21:23:47.094091636Z" level=info msg="StartContainer for \"61aeb52bcfe8c5ff061d2646e55b94daa1553e066c6412315deb03d6671d124c\""
Jan 13 21:23:47.097250 systemd[1]: Started cri-containerd-7fb0f2868f36cb81c277032e7a968a7bb83ee8b3cbad58e6f340b6f93e666b67.scope - libcontainer container 7fb0f2868f36cb81c277032e7a968a7bb83ee8b3cbad58e6f340b6f93e666b67.
Jan 13 21:23:47.125311 systemd[1]: Started cri-containerd-61aeb52bcfe8c5ff061d2646e55b94daa1553e066c6412315deb03d6671d124c.scope - libcontainer container 61aeb52bcfe8c5ff061d2646e55b94daa1553e066c6412315deb03d6671d124c.
Jan 13 21:23:47.136054 containerd[1457]: time="2025-01-13T21:23:47.135763927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-qw4pj,Uid:0f5bfa9f-6dfe-4d69-afbb-eb2a4f576204,Namespace:kube-system,Attempt:0,} returns sandbox id \"7fb0f2868f36cb81c277032e7a968a7bb83ee8b3cbad58e6f340b6f93e666b67\""
Jan 13 21:23:47.136547 kubelet[2598]: E0113 21:23:47.136522 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:47.158899 containerd[1457]: time="2025-01-13T21:23:47.158747533Z" level=info msg="StartContainer for \"61aeb52bcfe8c5ff061d2646e55b94daa1553e066c6412315deb03d6671d124c\" returns successfully"
Jan 13 21:23:48.123105 kubelet[2598]: E0113 21:23:48.123070 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:48.130083 kubelet[2598]: I0113 21:23:48.129825 2598 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-hjb2n" podStartSLOduration=2.129786588 podStartE2EDuration="2.129786588s" podCreationTimestamp="2025-01-13 21:23:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:23:48.129316971 +0000 UTC m=+15.129332967" watchObservedRunningTime="2025-01-13 21:23:48.129786588 +0000 UTC m=+15.129802584"
Jan 13 21:23:49.124630 kubelet[2598]: E0113 21:23:49.124598 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:57.106810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3946915950.mount: Deactivated successfully.
Jan 13 21:23:58.748089 containerd[1457]: time="2025-01-13T21:23:58.748043852Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:58.748938 containerd[1457]: time="2025-01-13T21:23:58.748891077Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166734743"
Jan 13 21:23:58.750050 containerd[1457]: time="2025-01-13T21:23:58.750017638Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:23:58.751614 containerd[1457]: time="2025-01-13T21:23:58.751582074Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.678167775s"
Jan 13 21:23:58.751671 containerd[1457]: time="2025-01-13T21:23:58.751621248Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 13 21:23:58.754390 containerd[1457]: time="2025-01-13T21:23:58.754297246Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 13 21:23:58.756014 containerd[1457]: time="2025-01-13T21:23:58.755967441Z" level=info msg="CreateContainer within sandbox \"d5d35e2d57d7d77c4ceb9e6cf86fdd78ebf7328a70e6353db9737687c1624758\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 21:23:58.770750 containerd[1457]: time="2025-01-13T21:23:58.770704540Z" level=info msg="CreateContainer within sandbox \"d5d35e2d57d7d77c4ceb9e6cf86fdd78ebf7328a70e6353db9737687c1624758\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0555876a7f11289d6842a5c8b3538d1528d9e4465b5e60eff3eb381c2dad8005\""
Jan 13 21:23:58.771216 containerd[1457]: time="2025-01-13T21:23:58.771175457Z" level=info msg="StartContainer for \"0555876a7f11289d6842a5c8b3538d1528d9e4465b5e60eff3eb381c2dad8005\""
Jan 13 21:23:58.801254 systemd[1]: Started cri-containerd-0555876a7f11289d6842a5c8b3538d1528d9e4465b5e60eff3eb381c2dad8005.scope - libcontainer container 0555876a7f11289d6842a5c8b3538d1528d9e4465b5e60eff3eb381c2dad8005.
Jan 13 21:23:58.826304 containerd[1457]: time="2025-01-13T21:23:58.826259495Z" level=info msg="StartContainer for \"0555876a7f11289d6842a5c8b3538d1528d9e4465b5e60eff3eb381c2dad8005\" returns successfully"
Jan 13 21:23:58.836161 systemd[1]: cri-containerd-0555876a7f11289d6842a5c8b3538d1528d9e4465b5e60eff3eb381c2dad8005.scope: Deactivated successfully.
Jan 13 21:23:59.290661 containerd[1457]: time="2025-01-13T21:23:59.288251784Z" level=info msg="shim disconnected" id=0555876a7f11289d6842a5c8b3538d1528d9e4465b5e60eff3eb381c2dad8005 namespace=k8s.io
Jan 13 21:23:59.290661 containerd[1457]: time="2025-01-13T21:23:59.290649888Z" level=warning msg="cleaning up after shim disconnected" id=0555876a7f11289d6842a5c8b3538d1528d9e4465b5e60eff3eb381c2dad8005 namespace=k8s.io
Jan 13 21:23:59.290661 containerd[1457]: time="2025-01-13T21:23:59.290660589Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:23:59.695537 kubelet[2598]: E0113 21:23:59.695405 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:23:59.698173 containerd[1457]: time="2025-01-13T21:23:59.698103748Z" level=info msg="CreateContainer within sandbox \"d5d35e2d57d7d77c4ceb9e6cf86fdd78ebf7328a70e6353db9737687c1624758\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 21:23:59.766800 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0555876a7f11289d6842a5c8b3538d1528d9e4465b5e60eff3eb381c2dad8005-rootfs.mount: Deactivated successfully.
Jan 13 21:23:59.938700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1575508498.mount: Deactivated successfully.
Jan 13 21:23:59.999287 containerd[1457]: time="2025-01-13T21:23:59.999232918Z" level=info msg="CreateContainer within sandbox \"d5d35e2d57d7d77c4ceb9e6cf86fdd78ebf7328a70e6353db9737687c1624758\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"666be6f833fc5320260fed34744ac1870dd0f9a25fad00677d85f6b8f2d87635\""
Jan 13 21:24:00.000391 containerd[1457]: time="2025-01-13T21:24:00.000348388Z" level=info msg="StartContainer for \"666be6f833fc5320260fed34744ac1870dd0f9a25fad00677d85f6b8f2d87635\""
Jan 13 21:24:00.042245 systemd[1]: Started cri-containerd-666be6f833fc5320260fed34744ac1870dd0f9a25fad00677d85f6b8f2d87635.scope - libcontainer container 666be6f833fc5320260fed34744ac1870dd0f9a25fad00677d85f6b8f2d87635.
Jan 13 21:24:00.068398 containerd[1457]: time="2025-01-13T21:24:00.068347573Z" level=info msg="StartContainer for \"666be6f833fc5320260fed34744ac1870dd0f9a25fad00677d85f6b8f2d87635\" returns successfully"
Jan 13 21:24:00.080240 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 21:24:00.080512 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:24:00.080600 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:24:00.086521 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:24:00.086742 systemd[1]: cri-containerd-666be6f833fc5320260fed34744ac1870dd0f9a25fad00677d85f6b8f2d87635.scope: Deactivated successfully.
Jan 13 21:24:00.109942 containerd[1457]: time="2025-01-13T21:24:00.109877934Z" level=info msg="shim disconnected" id=666be6f833fc5320260fed34744ac1870dd0f9a25fad00677d85f6b8f2d87635 namespace=k8s.io
Jan 13 21:24:00.109942 containerd[1457]: time="2025-01-13T21:24:00.109938147Z" level=warning msg="cleaning up after shim disconnected" id=666be6f833fc5320260fed34744ac1870dd0f9a25fad00677d85f6b8f2d87635 namespace=k8s.io
Jan 13 21:24:00.109942 containerd[1457]: time="2025-01-13T21:24:00.109947234Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:24:00.110993 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:24:00.517158 systemd[1]: Started sshd@9-10.0.0.82:22-10.0.0.1:54640.service - OpenSSH per-connection server daemon (10.0.0.1:54640).
Jan 13 21:24:00.583327 sshd[3142]: Accepted publickey for core from 10.0.0.1 port 54640 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:24:00.585275 sshd[3142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:24:00.590195 systemd-logind[1437]: New session 10 of user core.
Jan 13 21:24:00.597255 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 13 21:24:00.715065 kubelet[2598]: E0113 21:24:00.715037 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:24:00.732795 containerd[1457]: time="2025-01-13T21:24:00.732584118Z" level=info msg="CreateContainer within sandbox \"d5d35e2d57d7d77c4ceb9e6cf86fdd78ebf7328a70e6353db9737687c1624758\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 21:24:00.745165 sshd[3142]: pam_unix(sshd:session): session closed for user core
Jan 13 21:24:00.750990 systemd[1]: sshd@9-10.0.0.82:22-10.0.0.1:54640.service: Deactivated successfully.
Jan 13 21:24:00.753230 systemd[1]: session-10.scope: Deactivated successfully.
Jan 13 21:24:00.754151 systemd-logind[1437]: Session 10 logged out. Waiting for processes to exit.
Jan 13 21:24:00.755850 systemd-logind[1437]: Removed session 10.
Jan 13 21:24:00.762319 containerd[1457]: time="2025-01-13T21:24:00.762275787Z" level=info msg="CreateContainer within sandbox \"d5d35e2d57d7d77c4ceb9e6cf86fdd78ebf7328a70e6353db9737687c1624758\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5b6ca373f8b9b9dbfc9bbb7d225d914ff7550b9f74c1741e1f9c8f63b3b0d2fd\""
Jan 13 21:24:00.762896 containerd[1457]: time="2025-01-13T21:24:00.762857713Z" level=info msg="StartContainer for \"5b6ca373f8b9b9dbfc9bbb7d225d914ff7550b9f74c1741e1f9c8f63b3b0d2fd\""
Jan 13 21:24:00.766889 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-666be6f833fc5320260fed34744ac1870dd0f9a25fad00677d85f6b8f2d87635-rootfs.mount: Deactivated successfully.
Jan 13 21:24:00.790479 systemd[1]: run-containerd-runc-k8s.io-5b6ca373f8b9b9dbfc9bbb7d225d914ff7550b9f74c1741e1f9c8f63b3b0d2fd-runc.X9xKZn.mount: Deactivated successfully.
Jan 13 21:24:00.799366 systemd[1]: Started cri-containerd-5b6ca373f8b9b9dbfc9bbb7d225d914ff7550b9f74c1741e1f9c8f63b3b0d2fd.scope - libcontainer container 5b6ca373f8b9b9dbfc9bbb7d225d914ff7550b9f74c1741e1f9c8f63b3b0d2fd.
Jan 13 21:24:00.831900 containerd[1457]: time="2025-01-13T21:24:00.831864805Z" level=info msg="StartContainer for \"5b6ca373f8b9b9dbfc9bbb7d225d914ff7550b9f74c1741e1f9c8f63b3b0d2fd\" returns successfully"
Jan 13 21:24:00.834307 systemd[1]: cri-containerd-5b6ca373f8b9b9dbfc9bbb7d225d914ff7550b9f74c1741e1f9c8f63b3b0d2fd.scope: Deactivated successfully.
Jan 13 21:24:00.858157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b6ca373f8b9b9dbfc9bbb7d225d914ff7550b9f74c1741e1f9c8f63b3b0d2fd-rootfs.mount: Deactivated successfully.
Jan 13 21:24:01.041061 containerd[1457]: time="2025-01-13T21:24:01.040985454Z" level=info msg="shim disconnected" id=5b6ca373f8b9b9dbfc9bbb7d225d914ff7550b9f74c1741e1f9c8f63b3b0d2fd namespace=k8s.io
Jan 13 21:24:01.041061 containerd[1457]: time="2025-01-13T21:24:01.041038293Z" level=warning msg="cleaning up after shim disconnected" id=5b6ca373f8b9b9dbfc9bbb7d225d914ff7550b9f74c1741e1f9c8f63b3b0d2fd namespace=k8s.io
Jan 13 21:24:01.041061 containerd[1457]: time="2025-01-13T21:24:01.041047090Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:24:01.070065 containerd[1457]: time="2025-01-13T21:24:01.069979499Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:24:01.070773 containerd[1457]: time="2025-01-13T21:24:01.070718429Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907185"
Jan 13 21:24:01.071761 containerd[1457]: time="2025-01-13T21:24:01.071728689Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:24:01.073019 containerd[1457]: time="2025-01-13T21:24:01.072980785Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.318644787s"
Jan 13 21:24:01.073053 containerd[1457]: time="2025-01-13T21:24:01.073018427Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 13 21:24:01.074573 containerd[1457]: time="2025-01-13T21:24:01.074539398Z" level=info msg="CreateContainer within sandbox \"7fb0f2868f36cb81c277032e7a968a7bb83ee8b3cbad58e6f340b6f93e666b67\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 13 21:24:01.086246 containerd[1457]: time="2025-01-13T21:24:01.086202854Z" level=info msg="CreateContainer within sandbox \"7fb0f2868f36cb81c277032e7a968a7bb83ee8b3cbad58e6f340b6f93e666b67\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b02fad0815b18915e51f9aba00e028b7dd0e1a1e70c3293b2238f8152a7f86a8\""
Jan 13 21:24:01.086626 containerd[1457]: time="2025-01-13T21:24:01.086555548Z" level=info msg="StartContainer for \"b02fad0815b18915e51f9aba00e028b7dd0e1a1e70c3293b2238f8152a7f86a8\""
Jan 13 21:24:01.118243 systemd[1]: Started cri-containerd-b02fad0815b18915e51f9aba00e028b7dd0e1a1e70c3293b2238f8152a7f86a8.scope - libcontainer container b02fad0815b18915e51f9aba00e028b7dd0e1a1e70c3293b2238f8152a7f86a8.
Jan 13 21:24:01.142260 containerd[1457]: time="2025-01-13T21:24:01.142217902Z" level=info msg="StartContainer for \"b02fad0815b18915e51f9aba00e028b7dd0e1a1e70c3293b2238f8152a7f86a8\" returns successfully"
Jan 13 21:24:01.715382 kubelet[2598]: E0113 21:24:01.715351 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:24:01.716789 kubelet[2598]: E0113 21:24:01.716770 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:24:01.718218 containerd[1457]: time="2025-01-13T21:24:01.718098392Z" level=info msg="CreateContainer within sandbox \"d5d35e2d57d7d77c4ceb9e6cf86fdd78ebf7328a70e6353db9737687c1624758\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 21:24:01.734416 containerd[1457]: time="2025-01-13T21:24:01.734315235Z" level=info msg="CreateContainer within sandbox \"d5d35e2d57d7d77c4ceb9e6cf86fdd78ebf7328a70e6353db9737687c1624758\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a662ac8a67f44da5715936324e0c4ad4a404cd82d46d3fe48794e23a96a04e9e\""
Jan 13 21:24:01.734897 containerd[1457]: time="2025-01-13T21:24:01.734865730Z" level=info msg="StartContainer for \"a662ac8a67f44da5715936324e0c4ad4a404cd82d46d3fe48794e23a96a04e9e\""
Jan 13 21:24:01.742126 kubelet[2598]: I0113 21:24:01.741806 2598 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-qw4pj" podStartSLOduration=1.805828977 podStartE2EDuration="15.741766265s" podCreationTimestamp="2025-01-13 21:23:46 +0000 UTC" firstStartedPulling="2025-01-13 21:23:47.1372571 +0000 UTC m=+14.137273096" lastFinishedPulling="2025-01-13 21:24:01.073194388 +0000 UTC m=+28.073210384" observedRunningTime="2025-01-13 21:24:01.741485717 +0000 UTC m=+28.741501713" watchObservedRunningTime="2025-01-13 21:24:01.741766265 +0000 UTC m=+28.741782261"
Jan 13 21:24:01.792346 systemd[1]: Started cri-containerd-a662ac8a67f44da5715936324e0c4ad4a404cd82d46d3fe48794e23a96a04e9e.scope - libcontainer container a662ac8a67f44da5715936324e0c4ad4a404cd82d46d3fe48794e23a96a04e9e.
Jan 13 21:24:01.827057 systemd[1]: cri-containerd-a662ac8a67f44da5715936324e0c4ad4a404cd82d46d3fe48794e23a96a04e9e.scope: Deactivated successfully.
Jan 13 21:24:01.828765 containerd[1457]: time="2025-01-13T21:24:01.828725055Z" level=info msg="StartContainer for \"a662ac8a67f44da5715936324e0c4ad4a404cd82d46d3fe48794e23a96a04e9e\" returns successfully"
Jan 13 21:24:01.847532 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a662ac8a67f44da5715936324e0c4ad4a404cd82d46d3fe48794e23a96a04e9e-rootfs.mount: Deactivated successfully.
Jan 13 21:24:01.864263 containerd[1457]: time="2025-01-13T21:24:01.864188453Z" level=info msg="shim disconnected" id=a662ac8a67f44da5715936324e0c4ad4a404cd82d46d3fe48794e23a96a04e9e namespace=k8s.io
Jan 13 21:24:01.864263 containerd[1457]: time="2025-01-13T21:24:01.864256931Z" level=warning msg="cleaning up after shim disconnected" id=a662ac8a67f44da5715936324e0c4ad4a404cd82d46d3fe48794e23a96a04e9e namespace=k8s.io
Jan 13 21:24:01.864263 containerd[1457]: time="2025-01-13T21:24:01.864272310Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:24:02.720702 kubelet[2598]: E0113 21:24:02.720669 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:24:02.720702 kubelet[2598]: E0113 21:24:02.720683 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:24:02.723798 containerd[1457]: time="2025-01-13T21:24:02.723759549Z" level=info msg="CreateContainer within sandbox \"d5d35e2d57d7d77c4ceb9e6cf86fdd78ebf7328a70e6353db9737687c1624758\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 21:24:02.741338 containerd[1457]: time="2025-01-13T21:24:02.741292262Z" level=info msg="CreateContainer within sandbox \"d5d35e2d57d7d77c4ceb9e6cf86fdd78ebf7328a70e6353db9737687c1624758\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f70999ce1c2c182a42a90e0942d0734932f50924be24dd1e51e091f1d6074fe1\""
Jan 13 21:24:02.741751 containerd[1457]: time="2025-01-13T21:24:02.741720879Z" level=info msg="StartContainer for \"f70999ce1c2c182a42a90e0942d0734932f50924be24dd1e51e091f1d6074fe1\""
Jan 13 21:24:02.770250 systemd[1]: Started cri-containerd-f70999ce1c2c182a42a90e0942d0734932f50924be24dd1e51e091f1d6074fe1.scope - libcontainer container f70999ce1c2c182a42a90e0942d0734932f50924be24dd1e51e091f1d6074fe1.
Jan 13 21:24:02.805530 containerd[1457]: time="2025-01-13T21:24:02.805391956Z" level=info msg="StartContainer for \"f70999ce1c2c182a42a90e0942d0734932f50924be24dd1e51e091f1d6074fe1\" returns successfully"
Jan 13 21:24:02.923280 kubelet[2598]: I0113 21:24:02.923249 2598 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 13 21:24:02.944926 kubelet[2598]: I0113 21:24:02.944878 2598 topology_manager.go:215] "Topology Admit Handler" podUID="7a4e7c22-09d3-4995-99cc-2ace4a02e774" podNamespace="kube-system" podName="coredns-76f75df574-hclcn"
Jan 13 21:24:02.946443 kubelet[2598]: I0113 21:24:02.946392 2598 topology_manager.go:215] "Topology Admit Handler" podUID="4ebbaaac-c7fd-4dac-89c6-d63be1b6b92a" podNamespace="kube-system" podName="coredns-76f75df574-mh5g7"
Jan 13 21:24:02.954240 systemd[1]: Created slice kubepods-burstable-pod7a4e7c22_09d3_4995_99cc_2ace4a02e774.slice - libcontainer container kubepods-burstable-pod7a4e7c22_09d3_4995_99cc_2ace4a02e774.slice.
Jan 13 21:24:02.960953 systemd[1]: Created slice kubepods-burstable-pod4ebbaaac_c7fd_4dac_89c6_d63be1b6b92a.slice - libcontainer container kubepods-burstable-pod4ebbaaac_c7fd_4dac_89c6_d63be1b6b92a.slice.
Jan 13 21:24:03.017690 kubelet[2598]: I0113 21:24:03.017656 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk59d\" (UniqueName: \"kubernetes.io/projected/7a4e7c22-09d3-4995-99cc-2ace4a02e774-kube-api-access-vk59d\") pod \"coredns-76f75df574-hclcn\" (UID: \"7a4e7c22-09d3-4995-99cc-2ace4a02e774\") " pod="kube-system/coredns-76f75df574-hclcn"
Jan 13 21:24:03.017690 kubelet[2598]: I0113 21:24:03.017698 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z7g8\" (UniqueName: \"kubernetes.io/projected/4ebbaaac-c7fd-4dac-89c6-d63be1b6b92a-kube-api-access-9z7g8\") pod \"coredns-76f75df574-mh5g7\" (UID: \"4ebbaaac-c7fd-4dac-89c6-d63be1b6b92a\") " pod="kube-system/coredns-76f75df574-mh5g7"
Jan 13 21:24:03.017862 kubelet[2598]: I0113 21:24:03.017724 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a4e7c22-09d3-4995-99cc-2ace4a02e774-config-volume\") pod \"coredns-76f75df574-hclcn\" (UID: \"7a4e7c22-09d3-4995-99cc-2ace4a02e774\") " pod="kube-system/coredns-76f75df574-hclcn"
Jan 13 21:24:03.017862 kubelet[2598]: I0113 21:24:03.017743 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ebbaaac-c7fd-4dac-89c6-d63be1b6b92a-config-volume\") pod \"coredns-76f75df574-mh5g7\" (UID: \"4ebbaaac-c7fd-4dac-89c6-d63be1b6b92a\") " pod="kube-system/coredns-76f75df574-mh5g7"
Jan 13 21:24:03.258871 kubelet[2598]: E0113 21:24:03.258841 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:24:03.259435 containerd[1457]: time="2025-01-13T21:24:03.259407545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hclcn,Uid:7a4e7c22-09d3-4995-99cc-2ace4a02e774,Namespace:kube-system,Attempt:0,}"
Jan 13 21:24:03.262884 kubelet[2598]: E0113 21:24:03.262817 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:24:03.263899 containerd[1457]: time="2025-01-13T21:24:03.263854780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mh5g7,Uid:4ebbaaac-c7fd-4dac-89c6-d63be1b6b92a,Namespace:kube-system,Attempt:0,}"
Jan 13 21:24:03.725211 kubelet[2598]: E0113 21:24:03.725181 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:24:03.736255 kubelet[2598]: I0113 21:24:03.735889 2598 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-mwhsg" podStartSLOduration=6.054639207 podStartE2EDuration="17.735855585s" podCreationTimestamp="2025-01-13 21:23:46 +0000 UTC" firstStartedPulling="2025-01-13 21:23:47.072876243 +0000 UTC m=+14.072892239" lastFinishedPulling="2025-01-13 21:23:58.754092621 +0000 UTC m=+25.754108617" observedRunningTime="2025-01-13 21:24:03.735425154 +0000 UTC m=+30.735441151" watchObservedRunningTime="2025-01-13 21:24:03.735855585 +0000 UTC m=+30.735871581"
Jan 13 21:24:04.726897 kubelet[2598]: E0113 21:24:04.726862 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:24:04.985668 systemd-networkd[1394]: cilium_host: Link UP
Jan 13 21:24:04.986372 systemd-networkd[1394]: cilium_net: Link UP
Jan 13 21:24:04.986382 systemd-networkd[1394]: cilium_net: Gained carrier
Jan 13 21:24:04.986669 systemd-networkd[1394]: cilium_host: Gained carrier
Jan 13 21:24:05.092033 systemd-networkd[1394]: cilium_vxlan: Link UP
Jan 13 21:24:05.092229 systemd-networkd[1394]: cilium_vxlan: Gained carrier
Jan 13 21:24:05.297149 kernel: NET: Registered PF_ALG protocol family
Jan 13 21:24:05.460699 systemd-networkd[1394]: cilium_net: Gained IPv6LL
Jan 13 21:24:05.566286 systemd-networkd[1394]: cilium_host: Gained IPv6LL
Jan 13 21:24:05.728049 kubelet[2598]: E0113 21:24:05.728010 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:24:05.758085 systemd[1]: Started sshd@10-10.0.0.82:22-10.0.0.1:54650.service - OpenSSH per-connection server daemon (10.0.0.1:54650).
Jan 13 21:24:05.799950 sshd[3700]: Accepted publickey for core from 10.0.0.1 port 54650 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:24:05.801534 sshd[3700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:24:05.805579 systemd-logind[1437]: New session 11 of user core.
Jan 13 21:24:05.810250 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 13 21:24:05.944042 sshd[3700]: pam_unix(sshd:session): session closed for user core
Jan 13 21:24:05.953614 systemd[1]: sshd@10-10.0.0.82:22-10.0.0.1:54650.service: Deactivated successfully.
Jan 13 21:24:05.956647 systemd[1]: session-11.scope: Deactivated successfully.
Jan 13 21:24:05.957626 systemd-networkd[1394]: lxc_health: Link UP
Jan 13 21:24:05.965179 systemd-logind[1437]: Session 11 logged out. Waiting for processes to exit.
Jan 13 21:24:05.977516 systemd-logind[1437]: Removed session 11.
Jan 13 21:24:05.985062 systemd-networkd[1394]: lxc_health: Gained carrier
Jan 13 21:24:06.143202 systemd-networkd[1394]: cilium_vxlan: Gained IPv6LL
Jan 13 21:24:06.343348 systemd-networkd[1394]: lxc9e21d0fd57f3: Link UP
Jan 13 21:24:06.353149 kernel: eth0: renamed from tmpae446
Jan 13 21:24:06.370138 kernel: eth0: renamed from tmp7d697
Jan 13 21:24:06.376642 systemd-networkd[1394]: lxc2a767ab12511: Link UP
Jan 13 21:24:06.380783 systemd-networkd[1394]: lxc2a767ab12511: Gained carrier
Jan 13 21:24:06.383224 systemd-networkd[1394]: lxc9e21d0fd57f3: Gained carrier
Jan 13 21:24:07.000950 kubelet[2598]: E0113 21:24:07.000909 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:24:07.101463 systemd-networkd[1394]: lxc_health: Gained IPv6LL
Jan 13 21:24:07.548250 systemd-networkd[1394]: lxc2a767ab12511: Gained IPv6LL
Jan 13 21:24:07.548574 systemd-networkd[1394]: lxc9e21d0fd57f3: Gained IPv6LL
Jan 13 21:24:07.731856 kubelet[2598]: E0113 21:24:07.731811 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:24:09.920445 containerd[1457]: time="2025-01-13T21:24:09.919543349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:24:09.920445 containerd[1457]: time="2025-01-13T21:24:09.920208539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:24:09.920445 containerd[1457]: time="2025-01-13T21:24:09.920233095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:24:09.920445 containerd[1457]: time="2025-01-13T21:24:09.920348382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:24:09.943260 systemd[1]: Started cri-containerd-ae4463c6372f725a540b7324db17171d8d4fa10fbf52c7a847db36728883d9c5.scope - libcontainer container ae4463c6372f725a540b7324db17171d8d4fa10fbf52c7a847db36728883d9c5.
Jan 13 21:24:09.944864 containerd[1457]: time="2025-01-13T21:24:09.944762467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:24:09.944864 containerd[1457]: time="2025-01-13T21:24:09.944806169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:24:09.944864 containerd[1457]: time="2025-01-13T21:24:09.944821999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:24:09.945034 containerd[1457]: time="2025-01-13T21:24:09.944894025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:24:09.959502 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 21:24:09.965240 systemd[1]: Started cri-containerd-7d697ab8e85e2ecddc5577942523f8e97f0834992eee549d314abe46597ea57c.scope - libcontainer container 7d697ab8e85e2ecddc5577942523f8e97f0834992eee549d314abe46597ea57c.
Jan 13 21:24:09.978240 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 21:24:09.994508 containerd[1457]: time="2025-01-13T21:24:09.994374783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mh5g7,Uid:4ebbaaac-c7fd-4dac-89c6-d63be1b6b92a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae4463c6372f725a540b7324db17171d8d4fa10fbf52c7a847db36728883d9c5\""
Jan 13 21:24:09.995122 kubelet[2598]: E0113 21:24:09.995069 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:24:09.998141 containerd[1457]: time="2025-01-13T21:24:09.997952455Z" level=info msg="CreateContainer within sandbox \"ae4463c6372f725a540b7324db17171d8d4fa10fbf52c7a847db36728883d9c5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 21:24:10.001908 containerd[1457]: time="2025-01-13T21:24:10.001840972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hclcn,Uid:7a4e7c22-09d3-4995-99cc-2ace4a02e774,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d697ab8e85e2ecddc5577942523f8e97f0834992eee549d314abe46597ea57c\""
Jan 13 21:24:10.002550 kubelet[2598]: E0113 21:24:10.002529 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:24:10.003855 containerd[1457]: time="2025-01-13T21:24:10.003824398Z" level=info msg="CreateContainer within sandbox \"7d697ab8e85e2ecddc5577942523f8e97f0834992eee549d314abe46597ea57c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 21:24:10.200072 containerd[1457]: time="2025-01-13T21:24:10.199952001Z" level=info msg="CreateContainer within sandbox \"7d697ab8e85e2ecddc5577942523f8e97f0834992eee549d314abe46597ea57c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f7d14bddb556e1ea88d8b9448ea2d68115362b416a946527fa22a8b6ff215feb\""
Jan 13 21:24:10.200601 containerd[1457]: time="2025-01-13T21:24:10.200397348Z" level=info msg="StartContainer for \"f7d14bddb556e1ea88d8b9448ea2d68115362b416a946527fa22a8b6ff215feb\""
Jan 13 21:24:10.233242 systemd[1]: Started cri-containerd-f7d14bddb556e1ea88d8b9448ea2d68115362b416a946527fa22a8b6ff215feb.scope - libcontainer container f7d14bddb556e1ea88d8b9448ea2d68115362b416a946527fa22a8b6ff215feb.
Jan 13 21:24:10.252788 containerd[1457]: time="2025-01-13T21:24:10.252702822Z" level=info msg="CreateContainer within sandbox \"ae4463c6372f725a540b7324db17171d8d4fa10fbf52c7a847db36728883d9c5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c724e64b810032ef629c8503ae4c2780a74a0e6feb7ae011ea25f8915d57befa\""
Jan 13 21:24:10.253290 containerd[1457]: time="2025-01-13T21:24:10.253259818Z" level=info msg="StartContainer for \"c724e64b810032ef629c8503ae4c2780a74a0e6feb7ae011ea25f8915d57befa\""
Jan 13 21:24:10.288235 systemd[1]: Started cri-containerd-c724e64b810032ef629c8503ae4c2780a74a0e6feb7ae011ea25f8915d57befa.scope - libcontainer container c724e64b810032ef629c8503ae4c2780a74a0e6feb7ae011ea25f8915d57befa.
Jan 13 21:24:10.288809 containerd[1457]: time="2025-01-13T21:24:10.288768904Z" level=info msg="StartContainer for \"f7d14bddb556e1ea88d8b9448ea2d68115362b416a946527fa22a8b6ff215feb\" returns successfully"
Jan 13 21:24:10.323293 containerd[1457]: time="2025-01-13T21:24:10.323253917Z" level=info msg="StartContainer for \"c724e64b810032ef629c8503ae4c2780a74a0e6feb7ae011ea25f8915d57befa\" returns successfully"
Jan 13 21:24:10.740676 kubelet[2598]: E0113 21:24:10.740448 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:24:10.742425 kubelet[2598]: E0113 21:24:10.742318 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:24:10.751049 kubelet[2598]: I0113 21:24:10.751009 2598 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-hclcn" podStartSLOduration=24.750942166 podStartE2EDuration="24.750942166s" podCreationTimestamp="2025-01-13 21:23:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:24:10.749838392 +0000 UTC m=+37.749854388" watchObservedRunningTime="2025-01-13 21:24:10.750942166 +0000 UTC m=+37.750958162"
Jan 13 21:24:10.955931 systemd[1]: Started sshd@11-10.0.0.82:22-10.0.0.1:45828.service - OpenSSH per-connection server daemon (10.0.0.1:45828).
Jan 13 21:24:10.990606 sshd[4018]: Accepted publickey for core from 10.0.0.1 port 45828 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:24:10.991969 sshd[4018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:24:10.996044 systemd-logind[1437]: New session 12 of user core.
Jan 13 21:24:11.005314 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 13 21:24:11.115853 sshd[4018]: pam_unix(sshd:session): session closed for user core
Jan 13 21:24:11.119703 systemd[1]: sshd@11-10.0.0.82:22-10.0.0.1:45828.service: Deactivated successfully.
Jan 13 21:24:11.121645 systemd[1]: session-12.scope: Deactivated successfully.
Jan 13 21:24:11.122375 systemd-logind[1437]: Session 12 logged out. Waiting for processes to exit.
Jan 13 21:24:11.123218 systemd-logind[1437]: Removed session 12.
Jan 13 21:24:11.743093 kubelet[2598]: E0113 21:24:11.743056 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:24:11.743093 kubelet[2598]: E0113 21:24:11.743100 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:24:12.744693 kubelet[2598]: E0113 21:24:12.744650 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:24:12.745157 kubelet[2598]: E0113 21:24:12.744729 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:24:16.134714 systemd[1]: Started sshd@12-10.0.0.82:22-10.0.0.1:45836.service - OpenSSH per-connection server daemon (10.0.0.1:45836).
Jan 13 21:24:16.165521 sshd[4035]: Accepted publickey for core from 10.0.0.1 port 45836 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:24:16.166814 sshd[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:24:16.170444 systemd-logind[1437]: New session 13 of user core.
Jan 13 21:24:16.184243 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 13 21:24:16.289012 sshd[4035]: pam_unix(sshd:session): session closed for user core
Jan 13 21:24:16.300520 systemd[1]: sshd@12-10.0.0.82:22-10.0.0.1:45836.service: Deactivated successfully.
Jan 13 21:24:16.302688 systemd[1]: session-13.scope: Deactivated successfully.
Jan 13 21:24:16.304439 systemd-logind[1437]: Session 13 logged out. Waiting for processes to exit.
Jan 13 21:24:16.319406 systemd[1]: Started sshd@13-10.0.0.82:22-10.0.0.1:45840.service - OpenSSH per-connection server daemon (10.0.0.1:45840).
Jan 13 21:24:16.320353 systemd-logind[1437]: Removed session 13.
Jan 13 21:24:16.348004 sshd[4050]: Accepted publickey for core from 10.0.0.1 port 45840 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:24:16.349518 sshd[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:24:16.353363 systemd-logind[1437]: New session 14 of user core.
Jan 13 21:24:16.363242 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 13 21:24:16.499767 sshd[4050]: pam_unix(sshd:session): session closed for user core
Jan 13 21:24:16.509914 systemd[1]: sshd@13-10.0.0.82:22-10.0.0.1:45840.service: Deactivated successfully.
Jan 13 21:24:16.514495 systemd[1]: session-14.scope: Deactivated successfully.
Jan 13 21:24:16.518389 systemd-logind[1437]: Session 14 logged out. Waiting for processes to exit.
Jan 13 21:24:16.532467 systemd[1]: Started sshd@14-10.0.0.82:22-10.0.0.1:45856.service - OpenSSH per-connection server daemon (10.0.0.1:45856).
Jan 13 21:24:16.533502 systemd-logind[1437]: Removed session 14.
Jan 13 21:24:16.563585 sshd[4062]: Accepted publickey for core from 10.0.0.1 port 45856 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:24:16.565082 sshd[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:24:16.569262 systemd-logind[1437]: New session 15 of user core.
Jan 13 21:24:16.578264 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 13 21:24:16.679208 sshd[4062]: pam_unix(sshd:session): session closed for user core
Jan 13 21:24:16.682648 systemd[1]: sshd@14-10.0.0.82:22-10.0.0.1:45856.service: Deactivated successfully.
Jan 13 21:24:16.684599 systemd[1]: session-15.scope: Deactivated successfully.
Jan 13 21:24:16.685143 systemd-logind[1437]: Session 15 logged out. Waiting for processes to exit.
Jan 13 21:24:16.685917 systemd-logind[1437]: Removed session 15.
Jan 13 21:24:21.693379 systemd[1]: Started sshd@15-10.0.0.82:22-10.0.0.1:54390.service - OpenSSH per-connection server daemon (10.0.0.1:54390).
Jan 13 21:24:21.724363 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 54390 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:24:21.725948 sshd[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:24:21.729464 systemd-logind[1437]: New session 16 of user core.
Jan 13 21:24:21.739247 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 13 21:24:21.844261 sshd[4079]: pam_unix(sshd:session): session closed for user core
Jan 13 21:24:21.847964 systemd[1]: sshd@15-10.0.0.82:22-10.0.0.1:54390.service: Deactivated successfully.
Jan 13 21:24:21.849773 systemd[1]: session-16.scope: Deactivated successfully.
Jan 13 21:24:21.850556 systemd-logind[1437]: Session 16 logged out. Waiting for processes to exit.
Jan 13 21:24:21.851620 systemd-logind[1437]: Removed session 16.
Jan 13 21:24:26.854851 systemd[1]: Started sshd@16-10.0.0.82:22-10.0.0.1:54392.service - OpenSSH per-connection server daemon (10.0.0.1:54392).
Jan 13 21:24:26.885657 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 54392 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:24:26.887154 sshd[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:24:26.890643 systemd-logind[1437]: New session 17 of user core.
Jan 13 21:24:26.897235 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 13 21:24:26.998362 sshd[4094]: pam_unix(sshd:session): session closed for user core
Jan 13 21:24:27.008770 systemd[1]: sshd@16-10.0.0.82:22-10.0.0.1:54392.service: Deactivated successfully.
Jan 13 21:24:27.010517 systemd[1]: session-17.scope: Deactivated successfully.
Jan 13 21:24:27.011986 systemd-logind[1437]: Session 17 logged out. Waiting for processes to exit.
Jan 13 21:24:27.013339 systemd[1]: Started sshd@17-10.0.0.82:22-10.0.0.1:54404.service - OpenSSH per-connection server daemon (10.0.0.1:54404).
Jan 13 21:24:27.014157 systemd-logind[1437]: Removed session 17.
Jan 13 21:24:27.044104 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 54404 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:24:27.045511 sshd[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:24:27.048899 systemd-logind[1437]: New session 18 of user core.
Jan 13 21:24:27.056219 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 13 21:24:27.221260 sshd[4109]: pam_unix(sshd:session): session closed for user core
Jan 13 21:24:27.231901 systemd[1]: sshd@17-10.0.0.82:22-10.0.0.1:54404.service: Deactivated successfully.
Jan 13 21:24:27.233640 systemd[1]: session-18.scope: Deactivated successfully.
Jan 13 21:24:27.235094 systemd-logind[1437]: Session 18 logged out. Waiting for processes to exit.
Jan 13 21:24:27.243337 systemd[1]: Started sshd@18-10.0.0.82:22-10.0.0.1:54406.service - OpenSSH per-connection server daemon (10.0.0.1:54406).
Jan 13 21:24:27.244241 systemd-logind[1437]: Removed session 18.
Jan 13 21:24:27.273794 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 54406 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:24:27.275160 sshd[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:24:27.278834 systemd-logind[1437]: New session 19 of user core.
Jan 13 21:24:27.291209 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 21:24:28.613933 sshd[4121]: pam_unix(sshd:session): session closed for user core
Jan 13 21:24:28.625130 systemd[1]: sshd@18-10.0.0.82:22-10.0.0.1:54406.service: Deactivated successfully.
Jan 13 21:24:28.626849 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 21:24:28.628419 systemd-logind[1437]: Session 19 logged out. Waiting for processes to exit.
Jan 13 21:24:28.637004 systemd[1]: Started sshd@19-10.0.0.82:22-10.0.0.1:35096.service - OpenSSH per-connection server daemon (10.0.0.1:35096).
Jan 13 21:24:28.637954 systemd-logind[1437]: Removed session 19. Jan 13 21:24:28.664560 sshd[4142]: Accepted publickey for core from 10.0.0.1 port 35096 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:24:28.666155 sshd[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:28.669880 systemd-logind[1437]: New session 20 of user core. Jan 13 21:24:28.679232 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 21:24:28.884057 sshd[4142]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:28.893171 systemd[1]: sshd@19-10.0.0.82:22-10.0.0.1:35096.service: Deactivated successfully. Jan 13 21:24:28.895019 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 21:24:28.896702 systemd-logind[1437]: Session 20 logged out. Waiting for processes to exit. Jan 13 21:24:28.897967 systemd[1]: Started sshd@20-10.0.0.82:22-10.0.0.1:35108.service - OpenSSH per-connection server daemon (10.0.0.1:35108). Jan 13 21:24:28.898904 systemd-logind[1437]: Removed session 20. Jan 13 21:24:28.929389 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 35108 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:24:28.930714 sshd[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:28.934373 systemd-logind[1437]: New session 21 of user core. Jan 13 21:24:28.945226 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 21:24:29.049032 sshd[4155]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:29.053744 systemd[1]: sshd@20-10.0.0.82:22-10.0.0.1:35108.service: Deactivated successfully. Jan 13 21:24:29.055792 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 21:24:29.056458 systemd-logind[1437]: Session 21 logged out. Waiting for processes to exit. Jan 13 21:24:29.057314 systemd-logind[1437]: Removed session 21. Jan 13 21:24:34.061281 systemd[1]: Started sshd@21-10.0.0.82:22-10.0.0.1:35124.service - OpenSSH per-connection server daemon (10.0.0.1:35124). Jan 13 21:24:34.097172 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 35124 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:24:34.099322 sshd[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:34.105969 systemd-logind[1437]: New session 22 of user core. Jan 13 21:24:34.115247 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 21:24:34.220021 sshd[4171]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:34.223790 systemd[1]: sshd@21-10.0.0.82:22-10.0.0.1:35124.service: Deactivated successfully. Jan 13 21:24:34.225618 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 21:24:34.226261 systemd-logind[1437]: Session 22 logged out. Waiting for processes to exit. Jan 13 21:24:34.227250 systemd-logind[1437]: Removed session 22. Jan 13 21:24:39.233133 systemd[1]: Started sshd@22-10.0.0.82:22-10.0.0.1:36668.service - OpenSSH per-connection server daemon (10.0.0.1:36668). Jan 13 21:24:39.266172 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 36668 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:24:39.267616 sshd[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:39.271293 systemd-logind[1437]: New session 23 of user core. Jan 13 21:24:39.280276 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 13 21:24:39.382582 sshd[4188]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:39.386657 systemd[1]: sshd@22-10.0.0.82:22-10.0.0.1:36668.service: Deactivated successfully. Jan 13 21:24:39.388703 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 21:24:39.389297 systemd-logind[1437]: Session 23 logged out. Waiting for processes to exit. Jan 13 21:24:39.390139 systemd-logind[1437]: Removed session 23. Jan 13 21:24:44.398259 systemd[1]: Started sshd@23-10.0.0.82:22-10.0.0.1:36670.service - OpenSSH per-connection server daemon (10.0.0.1:36670). Jan 13 21:24:44.430368 sshd[4202]: Accepted publickey for core from 10.0.0.1 port 36670 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:24:44.431835 sshd[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:44.435952 systemd-logind[1437]: New session 24 of user core. Jan 13 21:24:44.445237 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 21:24:44.551594 sshd[4202]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:44.555790 systemd[1]: sshd@23-10.0.0.82:22-10.0.0.1:36670.service: Deactivated successfully. Jan 13 21:24:44.557908 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 21:24:44.558651 systemd-logind[1437]: Session 24 logged out. Waiting for processes to exit. Jan 13 21:24:44.559570 systemd-logind[1437]: Removed session 24. Jan 13 21:24:49.563589 systemd[1]: Started sshd@24-10.0.0.82:22-10.0.0.1:50760.service - OpenSSH per-connection server daemon (10.0.0.1:50760). Jan 13 21:24:49.595484 sshd[4218]: Accepted publickey for core from 10.0.0.1 port 50760 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:24:49.597327 sshd[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:49.601404 systemd-logind[1437]: New session 25 of user core. Jan 13 21:24:49.612299 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 13 21:24:49.722864 sshd[4218]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:49.738181 systemd[1]: sshd@24-10.0.0.82:22-10.0.0.1:50760.service: Deactivated successfully. Jan 13 21:24:49.740526 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 21:24:49.742314 systemd-logind[1437]: Session 25 logged out. Waiting for processes to exit. Jan 13 21:24:49.751484 systemd[1]: Started sshd@25-10.0.0.82:22-10.0.0.1:50770.service - OpenSSH per-connection server daemon (10.0.0.1:50770). Jan 13 21:24:49.752677 systemd-logind[1437]: Removed session 25. Jan 13 21:24:49.779634 sshd[4233]: Accepted publickey for core from 10.0.0.1 port 50770 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:24:49.781272 sshd[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:49.785823 systemd-logind[1437]: New session 26 of user core. Jan 13 21:24:49.797327 systemd[1]: Started session-26.scope - Session 26 of User core. 
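The SSH churn above follows one fixed shape per connection: sshd logs "Accepted publickey" with the client address under its per-connection PID, pam_unix opens the session, systemd-logind assigns a session number, and the later "session closed" record carries the same sshd PID. A minimal sketch of pairing opens with closes by that PID (regexes derived from the records above; the function name is illustrative):
```python
import re

# "Accepted publickey ..." and "session closed ..." share the sshd PID in
# brackets, which is what ties a login to its logout across the records.
ACCEPT = re.compile(r'sshd\[(\d+)\]: Accepted publickey for (\S+) from (\S+) port (\d+)')
CLOSE = re.compile(r'sshd\[(\d+)\]: pam_unix\(sshd:session\): session closed')

def pair_sessions(lines):
    open_by_pid = {}
    for line in lines:
        m = ACCEPT.search(line)
        if m:
            pid, user, addr, port = m.groups()
            open_by_pid[pid] = (user, addr, port)
            continue
        m = CLOSE.search(line)
        if m and m.group(1) in open_by_pid:
            user, addr, port = open_by_pid.pop(m.group(1))
            yield m.group(1), user, f"{addr}:{port}"
    # anything left in open_by_pid never logged a matching close
```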
Jan 13 21:24:51.086190 kubelet[2598]: E0113 21:24:51.086145 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:24:51.155042 kubelet[2598]: I0113 21:24:51.154259 2598 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-mh5g7" podStartSLOduration=65.154208824 podStartE2EDuration="1m5.154208824s" podCreationTimestamp="2025-01-13 21:23:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:24:10.77015824 +0000 UTC m=+37.770174236" watchObservedRunningTime="2025-01-13 21:24:51.154208824 +0000 UTC m=+78.154224820" Jan 13 21:24:51.161686 containerd[1457]: time="2025-01-13T21:24:51.161046962Z" level=info msg="StopContainer for \"b02fad0815b18915e51f9aba00e028b7dd0e1a1e70c3293b2238f8152a7f86a8\" with timeout 30 (s)" Jan 13 21:24:51.162268 containerd[1457]: time="2025-01-13T21:24:51.162241187Z" level=info msg="Stop container \"b02fad0815b18915e51f9aba00e028b7dd0e1a1e70c3293b2238f8152a7f86a8\" with signal terminated" Jan 13 21:24:51.192499 systemd[1]: cri-containerd-b02fad0815b18915e51f9aba00e028b7dd0e1a1e70c3293b2238f8152a7f86a8.scope: Deactivated successfully. Jan 13 21:24:51.206308 containerd[1457]: time="2025-01-13T21:24:51.206253577Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:24:51.212830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b02fad0815b18915e51f9aba00e028b7dd0e1a1e70c3293b2238f8152a7f86a8-rootfs.mount: Deactivated successfully. Jan 13 21:24:51.214819 containerd[1457]: time="2025-01-13T21:24:51.214778162Z" level=info msg="StopContainer for \"f70999ce1c2c182a42a90e0942d0734932f50924be24dd1e51e091f1d6074fe1\" with timeout 2 (s)" Jan 13 21:24:51.214992 containerd[1457]: time="2025-01-13T21:24:51.214973025Z" level=info msg="Stop container \"f70999ce1c2c182a42a90e0942d0734932f50924be24dd1e51e091f1d6074fe1\" with signal terminated" Jan 13 21:24:51.221251 systemd-networkd[1394]: lxc_health: Link DOWN Jan 13 21:24:51.221260 systemd-networkd[1394]: lxc_health: Lost carrier Jan 13 21:24:51.229806 containerd[1457]: time="2025-01-13T21:24:51.229744749Z" level=info msg="shim disconnected" id=b02fad0815b18915e51f9aba00e028b7dd0e1a1e70c3293b2238f8152a7f86a8 namespace=k8s.io Jan 13 21:24:51.229806 containerd[1457]: time="2025-01-13T21:24:51.229802710Z" level=warning msg="cleaning up after shim disconnected" id=b02fad0815b18915e51f9aba00e028b7dd0e1a1e70c3293b2238f8152a7f86a8 namespace=k8s.io Jan 13 21:24:51.229980 containerd[1457]: time="2025-01-13T21:24:51.229813300Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:24:51.241399 systemd[1]: cri-containerd-f70999ce1c2c182a42a90e0942d0734932f50924be24dd1e51e091f1d6074fe1.scope: Deactivated successfully. Jan 13 21:24:51.241735 systemd[1]: cri-containerd-f70999ce1c2c182a42a90e0942d0734932f50924be24dd1e51e091f1d6074fe1.scope: Consumed 6.785s CPU time. 
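The recurring kubelet dns.go warning above stems from the Linux resolver's three-nameserver ceiling: the node's resolv.conf evidently lists more than three servers, so only the first three (1.1.1.1 1.0.0.1 8.8.8.8) are applied and the rest are logged as omitted. A rough re-creation of that check, assuming the glibc MAXNS limit of 3 that kubelet mirrors (kubelet internals elided):
```python
MAX_NAMESERVERS = 3  # glibc MAXNS; kubelet enforces the same ceiling

def apply_nameserver_limit(resolv_conf_text):
    servers = []
    for line in resolv_conf_text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    if len(servers) <= MAX_NAMESERVERS:
        return servers
    kept = servers[:MAX_NAMESERVERS]
    print("Nameserver limits were exceeded, some nameservers have been "
          "omitted, the applied nameserver line is: " + " ".join(kept))
    return kept
```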
Jan 13 21:24:51.250949 containerd[1457]: time="2025-01-13T21:24:51.250907256Z" level=info msg="StopContainer for \"b02fad0815b18915e51f9aba00e028b7dd0e1a1e70c3293b2238f8152a7f86a8\" returns successfully" Jan 13 21:24:51.251588 containerd[1457]: time="2025-01-13T21:24:51.251562579Z" level=info msg="StopPodSandbox for \"7fb0f2868f36cb81c277032e7a968a7bb83ee8b3cbad58e6f340b6f93e666b67\"" Jan 13 21:24:51.251664 containerd[1457]: time="2025-01-13T21:24:51.251598719Z" level=info msg="Container to stop \"b02fad0815b18915e51f9aba00e028b7dd0e1a1e70c3293b2238f8152a7f86a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:24:51.254391 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7fb0f2868f36cb81c277032e7a968a7bb83ee8b3cbad58e6f340b6f93e666b67-shm.mount: Deactivated successfully. Jan 13 21:24:51.260381 systemd[1]: cri-containerd-7fb0f2868f36cb81c277032e7a968a7bb83ee8b3cbad58e6f340b6f93e666b67.scope: Deactivated successfully. Jan 13 21:24:51.264237 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f70999ce1c2c182a42a90e0942d0734932f50924be24dd1e51e091f1d6074fe1-rootfs.mount: Deactivated successfully. Jan 13 21:24:51.274702 containerd[1457]: time="2025-01-13T21:24:51.274620731Z" level=info msg="shim disconnected" id=f70999ce1c2c182a42a90e0942d0734932f50924be24dd1e51e091f1d6074fe1 namespace=k8s.io Jan 13 21:24:51.274702 containerd[1457]: time="2025-01-13T21:24:51.274694713Z" level=warning msg="cleaning up after shim disconnected" id=f70999ce1c2c182a42a90e0942d0734932f50924be24dd1e51e091f1d6074fe1 namespace=k8s.io Jan 13 21:24:51.274702 containerd[1457]: time="2025-01-13T21:24:51.274706355Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:24:51.283582 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7fb0f2868f36cb81c277032e7a968a7bb83ee8b3cbad58e6f340b6f93e666b67-rootfs.mount: Deactivated successfully. 
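The "must be in running or unknown state" info records above are containerd noting that StopPodSandbox found a container already exited, so no further signal is sent and teardown simply proceeds. A condensed sketch of that guard, with the state names and message text taken from the log (the timeout wait and SIGKILL escalation are elided):
```python
RUNNING, UNKNOWN, EXITED = "CONTAINER_RUNNING", "CONTAINER_UNKNOWN", "CONTAINER_EXITED"

def stop_container(container_id, state, timeout_s):
    if state not in (RUNNING, UNKNOWN):
        # Already dead: log the skip and fall through to sandbox teardown.
        print(f'Container to stop "{container_id}" must be in running or '
              f'unknown state, current state "{state}"')
        return
    print(f'StopContainer for "{container_id}" with timeout {timeout_s} (s)')
    print(f'Stop container "{container_id}" with signal terminated')
    # wait up to timeout_s for exit, then escalate to SIGKILL (elided)
```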
Jan 13 21:24:51.287728 containerd[1457]: time="2025-01-13T21:24:51.287554971Z" level=info msg="shim disconnected" id=7fb0f2868f36cb81c277032e7a968a7bb83ee8b3cbad58e6f340b6f93e666b67 namespace=k8s.io Jan 13 21:24:51.287728 containerd[1457]: time="2025-01-13T21:24:51.287615286Z" level=warning msg="cleaning up after shim disconnected" id=7fb0f2868f36cb81c277032e7a968a7bb83ee8b3cbad58e6f340b6f93e666b67 namespace=k8s.io Jan 13 21:24:51.287728 containerd[1457]: time="2025-01-13T21:24:51.287628521Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:24:51.294254 containerd[1457]: time="2025-01-13T21:24:51.294205300Z" level=info msg="StopContainer for \"f70999ce1c2c182a42a90e0942d0734932f50924be24dd1e51e091f1d6074fe1\" returns successfully" Jan 13 21:24:51.294830 containerd[1457]: time="2025-01-13T21:24:51.294806631Z" level=info msg="StopPodSandbox for \"d5d35e2d57d7d77c4ceb9e6cf86fdd78ebf7328a70e6353db9737687c1624758\"" Jan 13 21:24:51.294898 containerd[1457]: time="2025-01-13T21:24:51.294850073Z" level=info msg="Container to stop \"f70999ce1c2c182a42a90e0942d0734932f50924be24dd1e51e091f1d6074fe1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:24:51.294898 containerd[1457]: time="2025-01-13T21:24:51.294867206Z" level=info msg="Container to stop \"0555876a7f11289d6842a5c8b3538d1528d9e4465b5e60eff3eb381c2dad8005\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:24:51.294898 containerd[1457]: time="2025-01-13T21:24:51.294879940Z" level=info msg="Container to stop \"666be6f833fc5320260fed34744ac1870dd0f9a25fad00677d85f6b8f2d87635\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:24:51.294898 containerd[1457]: time="2025-01-13T21:24:51.294891383Z" level=info msg="Container to stop \"5b6ca373f8b9b9dbfc9bbb7d225d914ff7550b9f74c1741e1f9c8f63b3b0d2fd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:24:51.295042 containerd[1457]: time="2025-01-13T21:24:51.294903004Z" level=info msg="Container to stop \"a662ac8a67f44da5715936324e0c4ad4a404cd82d46d3fe48794e23a96a04e9e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:24:51.302382 systemd[1]: cri-containerd-d5d35e2d57d7d77c4ceb9e6cf86fdd78ebf7328a70e6353db9737687c1624758.scope: Deactivated successfully. 
Jan 13 21:24:51.304564 containerd[1457]: time="2025-01-13T21:24:51.304501002Z" level=warning msg="cleanup warnings time=\"2025-01-13T21:24:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 21:24:51.317492 containerd[1457]: time="2025-01-13T21:24:51.317422156Z" level=info msg="TearDown network for sandbox \"7fb0f2868f36cb81c277032e7a968a7bb83ee8b3cbad58e6f340b6f93e666b67\" successfully" Jan 13 21:24:51.317492 containerd[1457]: time="2025-01-13T21:24:51.317475819Z" level=info msg="StopPodSandbox for \"7fb0f2868f36cb81c277032e7a968a7bb83ee8b3cbad58e6f340b6f93e666b67\" returns successfully" Jan 13 21:24:51.329717 containerd[1457]: time="2025-01-13T21:24:51.329626850Z" level=info msg="shim disconnected" id=d5d35e2d57d7d77c4ceb9e6cf86fdd78ebf7328a70e6353db9737687c1624758 namespace=k8s.io Jan 13 21:24:51.329717 containerd[1457]: time="2025-01-13T21:24:51.329702324Z" level=warning msg="cleaning up after shim disconnected" id=d5d35e2d57d7d77c4ceb9e6cf86fdd78ebf7328a70e6353db9737687c1624758 namespace=k8s.io Jan 13 21:24:51.329717 containerd[1457]: time="2025-01-13T21:24:51.329713576Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:24:51.346822 containerd[1457]: time="2025-01-13T21:24:51.346696989Z" level=info msg="TearDown network for sandbox \"d5d35e2d57d7d77c4ceb9e6cf86fdd78ebf7328a70e6353db9737687c1624758\" successfully" Jan 13 21:24:51.346822 containerd[1457]: time="2025-01-13T21:24:51.346735383Z" level=info msg="StopPodSandbox for \"d5d35e2d57d7d77c4ceb9e6cf86fdd78ebf7328a70e6353db9737687c1624758\" returns successfully" Jan 13 21:24:51.472792 kubelet[2598]: I0113 21:24:51.472733 2598 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-bpf-maps\") pod \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\" (UID: \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\") " Jan 13 21:24:51.472792 kubelet[2598]: I0113 21:24:51.472803 2598 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f5bfa9f-6dfe-4d69-afbb-eb2a4f576204-cilium-config-path\") pod \"0f5bfa9f-6dfe-4d69-afbb-eb2a4f576204\" (UID: \"0f5bfa9f-6dfe-4d69-afbb-eb2a4f576204\") " Jan 13 21:24:51.473011 kubelet[2598]: I0113 21:24:51.472827 2598 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-xtables-lock\") pod \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\" (UID: \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\") " Jan 13 21:24:51.473011 kubelet[2598]: I0113 21:24:51.472848 2598 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-lib-modules\") pod \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\" (UID: \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\") " Jan 13 21:24:51.473011 kubelet[2598]: I0113 21:24:51.472868 2598 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-host-proc-sys-kernel\") pod \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\" (UID: \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\") " Jan 13 21:24:51.473011 kubelet[2598]: I0113 21:24:51.472891 2598 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-cdvk8\" (UniqueName: \"kubernetes.io/projected/3bb7ed01-29cb-47b4-b660-8fa1076ee161-kube-api-access-cdvk8\") pod \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\" (UID: \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\") " Jan 13 21:24:51.473011 kubelet[2598]: I0113 21:24:51.472911 2598 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-host-proc-sys-net\") pod \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\" (UID: \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\") " Jan 13 21:24:51.473011 kubelet[2598]: I0113 21:24:51.472913 2598 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3bb7ed01-29cb-47b4-b660-8fa1076ee161" (UID: "3bb7ed01-29cb-47b4-b660-8fa1076ee161"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:24:51.473243 kubelet[2598]: I0113 21:24:51.472933 2598 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wld2\" (UniqueName: \"kubernetes.io/projected/0f5bfa9f-6dfe-4d69-afbb-eb2a4f576204-kube-api-access-6wld2\") pod \"0f5bfa9f-6dfe-4d69-afbb-eb2a4f576204\" (UID: \"0f5bfa9f-6dfe-4d69-afbb-eb2a4f576204\") " Jan 13 21:24:51.473243 kubelet[2598]: I0113 21:24:51.472956 2598 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-hostproc\") pod \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\" (UID: \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\") " Jan 13 21:24:51.473243 kubelet[2598]: I0113 21:24:51.472965 2598 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3bb7ed01-29cb-47b4-b660-8fa1076ee161" (UID: "3bb7ed01-29cb-47b4-b660-8fa1076ee161"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:24:51.473243 kubelet[2598]: I0113 21:24:51.472742 2598 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3bb7ed01-29cb-47b4-b660-8fa1076ee161" (UID: "3bb7ed01-29cb-47b4-b660-8fa1076ee161"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:24:51.473243 kubelet[2598]: I0113 21:24:51.472976 2598 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-cilium-run\") pod \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\" (UID: \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\") " Jan 13 21:24:51.473404 kubelet[2598]: I0113 21:24:51.472994 2598 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3bb7ed01-29cb-47b4-b660-8fa1076ee161" (UID: "3bb7ed01-29cb-47b4-b660-8fa1076ee161"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:24:51.473404 kubelet[2598]: I0113 21:24:51.473013 2598 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-cilium-cgroup\") pod \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\" (UID: \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\") " Jan 13 21:24:51.473404 kubelet[2598]: I0113 21:24:51.473019 2598 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3bb7ed01-29cb-47b4-b660-8fa1076ee161" (UID: "3bb7ed01-29cb-47b4-b660-8fa1076ee161"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:24:51.473404 kubelet[2598]: I0113 21:24:51.473038 2598 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-cni-path\") pod \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\" (UID: \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\") " Jan 13 21:24:51.473404 kubelet[2598]: I0113 21:24:51.473063 2598 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3bb7ed01-29cb-47b4-b660-8fa1076ee161-cilium-config-path\") pod \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\" (UID: \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\") " Jan 13 21:24:51.473553 kubelet[2598]: I0113 21:24:51.473088 2598 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3bb7ed01-29cb-47b4-b660-8fa1076ee161-hubble-tls\") pod \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\" (UID: \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\") " Jan 13 21:24:51.473553 kubelet[2598]: I0113 21:24:51.473132 2598 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-etc-cni-netd\") pod \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\" (UID: \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\") " Jan 13 21:24:51.473553 kubelet[2598]: I0113 21:24:51.473158 2598 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3bb7ed01-29cb-47b4-b660-8fa1076ee161-clustermesh-secrets\") pod \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\" (UID: \"3bb7ed01-29cb-47b4-b660-8fa1076ee161\") " Jan 13 21:24:51.473553 kubelet[2598]: I0113 21:24:51.473190 2598 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 13 21:24:51.473553 kubelet[2598]: I0113 21:24:51.473204 2598 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 13 21:24:51.473553 kubelet[2598]: I0113 21:24:51.473217 2598 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 13 21:24:51.473553 kubelet[2598]: I0113 21:24:51.473231 2598 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 13 21:24:51.473788 kubelet[2598]: I0113 21:24:51.473243 2598 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 13 21:24:51.476124 kubelet[2598]: I0113 21:24:51.475021 2598 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3bb7ed01-29cb-47b4-b660-8fa1076ee161" (UID: "3bb7ed01-29cb-47b4-b660-8fa1076ee161"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:24:51.476962 kubelet[2598]: I0113 21:24:51.476931 2598 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f5bfa9f-6dfe-4d69-afbb-eb2a4f576204-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0f5bfa9f-6dfe-4d69-afbb-eb2a4f576204" (UID: "0f5bfa9f-6dfe-4d69-afbb-eb2a4f576204"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:24:51.477015 kubelet[2598]: I0113 21:24:51.476993 2598 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3bb7ed01-29cb-47b4-b660-8fa1076ee161" (UID: "3bb7ed01-29cb-47b4-b660-8fa1076ee161"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:24:51.477052 kubelet[2598]: I0113 21:24:51.477019 2598 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-hostproc" (OuterVolumeSpecName: "hostproc") pod "3bb7ed01-29cb-47b4-b660-8fa1076ee161" (UID: "3bb7ed01-29cb-47b4-b660-8fa1076ee161"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:24:51.477052 kubelet[2598]: I0113 21:24:51.477040 2598 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3bb7ed01-29cb-47b4-b660-8fa1076ee161" (UID: "3bb7ed01-29cb-47b4-b660-8fa1076ee161"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:24:51.477141 kubelet[2598]: I0113 21:24:51.477060 2598 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-cni-path" (OuterVolumeSpecName: "cni-path") pod "3bb7ed01-29cb-47b4-b660-8fa1076ee161" (UID: "3bb7ed01-29cb-47b4-b660-8fa1076ee161"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:24:51.477184 kubelet[2598]: I0113 21:24:51.477151 2598 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bb7ed01-29cb-47b4-b660-8fa1076ee161-kube-api-access-cdvk8" (OuterVolumeSpecName: "kube-api-access-cdvk8") pod "3bb7ed01-29cb-47b4-b660-8fa1076ee161" (UID: "3bb7ed01-29cb-47b4-b660-8fa1076ee161"). InnerVolumeSpecName "kube-api-access-cdvk8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:24:51.478506 kubelet[2598]: I0113 21:24:51.478471 2598 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f5bfa9f-6dfe-4d69-afbb-eb2a4f576204-kube-api-access-6wld2" (OuterVolumeSpecName: "kube-api-access-6wld2") pod "0f5bfa9f-6dfe-4d69-afbb-eb2a4f576204" (UID: "0f5bfa9f-6dfe-4d69-afbb-eb2a4f576204"). InnerVolumeSpecName "kube-api-access-6wld2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:24:51.478739 kubelet[2598]: I0113 21:24:51.478708 2598 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bb7ed01-29cb-47b4-b660-8fa1076ee161-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3bb7ed01-29cb-47b4-b660-8fa1076ee161" (UID: "3bb7ed01-29cb-47b4-b660-8fa1076ee161"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 21:24:51.478973 kubelet[2598]: I0113 21:24:51.478952 2598 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bb7ed01-29cb-47b4-b660-8fa1076ee161-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3bb7ed01-29cb-47b4-b660-8fa1076ee161" (UID: "3bb7ed01-29cb-47b4-b660-8fa1076ee161"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:24:51.479135 kubelet[2598]: I0113 21:24:51.479099 2598 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bb7ed01-29cb-47b4-b660-8fa1076ee161-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3bb7ed01-29cb-47b4-b660-8fa1076ee161" (UID: "3bb7ed01-29cb-47b4-b660-8fa1076ee161"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:24:51.573539 kubelet[2598]: I0113 21:24:51.573488 2598 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 13 21:24:51.573539 kubelet[2598]: I0113 21:24:51.573533 2598 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3bb7ed01-29cb-47b4-b660-8fa1076ee161-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 13 21:24:51.573539 kubelet[2598]: I0113 21:24:51.573548 2598 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3bb7ed01-29cb-47b4-b660-8fa1076ee161-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 13 21:24:51.573730 kubelet[2598]: I0113 21:24:51.573562 2598 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3bb7ed01-29cb-47b4-b660-8fa1076ee161-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 13 21:24:51.573730 kubelet[2598]: I0113 21:24:51.573575 2598 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f5bfa9f-6dfe-4d69-afbb-eb2a4f576204-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 13 21:24:51.573730 kubelet[2598]: I0113 21:24:51.573586 2598 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 13 21:24:51.573730 kubelet[2598]: I0113 21:24:51.573599 2598 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6wld2\" (UniqueName: \"kubernetes.io/projected/0f5bfa9f-6dfe-4d69-afbb-eb2a4f576204-kube-api-access-6wld2\") on node \"localhost\" DevicePath \"\"" Jan 13 21:24:51.573730 kubelet[2598]: I0113 21:24:51.573611 2598 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cdvk8\" (UniqueName: \"kubernetes.io/projected/3bb7ed01-29cb-47b4-b660-8fa1076ee161-kube-api-access-cdvk8\") on node \"localhost\" DevicePath \"\"" Jan 13 21:24:51.573730 kubelet[2598]: I0113 21:24:51.573622 2598 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 13 21:24:51.573730 kubelet[2598]: I0113 21:24:51.573635 2598 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 13 21:24:51.573730 kubelet[2598]: I0113 21:24:51.573646 2598 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3bb7ed01-29cb-47b4-b660-8fa1076ee161-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 13 21:24:51.811186 kubelet[2598]: I0113 21:24:51.811163 2598 scope.go:117] "RemoveContainer" containerID="f70999ce1c2c182a42a90e0942d0734932f50924be24dd1e51e091f1d6074fe1" Jan 13 21:24:51.812187 containerd[1457]: time="2025-01-13T21:24:51.812158275Z" level=info msg="RemoveContainer for \"f70999ce1c2c182a42a90e0942d0734932f50924be24dd1e51e091f1d6074fe1\"" Jan 13 21:24:51.817714 systemd[1]: Removed slice kubepods-burstable-pod3bb7ed01_29cb_47b4_b660_8fa1076ee161.slice - libcontainer 
container kubepods-burstable-pod3bb7ed01_29cb_47b4_b660_8fa1076ee161.slice. Jan 13 21:24:51.817804 systemd[1]: kubepods-burstable-pod3bb7ed01_29cb_47b4_b660_8fa1076ee161.slice: Consumed 6.883s CPU time. Jan 13 21:24:51.819003 systemd[1]: Removed slice kubepods-besteffort-pod0f5bfa9f_6dfe_4d69_afbb_eb2a4f576204.slice - libcontainer container kubepods-besteffort-pod0f5bfa9f_6dfe_4d69_afbb_eb2a4f576204.slice. Jan 13 21:24:51.896454 containerd[1457]: time="2025-01-13T21:24:51.896385784Z" level=info msg="RemoveContainer for \"f70999ce1c2c182a42a90e0942d0734932f50924be24dd1e51e091f1d6074fe1\" returns successfully" Jan 13 21:24:51.896774 kubelet[2598]: I0113 21:24:51.896735 2598 scope.go:117] "RemoveContainer" containerID="a662ac8a67f44da5715936324e0c4ad4a404cd82d46d3fe48794e23a96a04e9e" Jan 13 21:24:51.897927 containerd[1457]: time="2025-01-13T21:24:51.897884651Z" level=info msg="RemoveContainer for \"a662ac8a67f44da5715936324e0c4ad4a404cd82d46d3fe48794e23a96a04e9e\"" Jan 13 21:24:51.970634 containerd[1457]: time="2025-01-13T21:24:51.970566808Z" level=info msg="RemoveContainer for \"a662ac8a67f44da5715936324e0c4ad4a404cd82d46d3fe48794e23a96a04e9e\" returns successfully" Jan 13 21:24:51.970917 kubelet[2598]: I0113 21:24:51.970876 2598 scope.go:117] "RemoveContainer" containerID="5b6ca373f8b9b9dbfc9bbb7d225d914ff7550b9f74c1741e1f9c8f63b3b0d2fd" Jan 13 21:24:51.972138 containerd[1457]: time="2025-01-13T21:24:51.972090302Z" level=info msg="RemoveContainer for \"5b6ca373f8b9b9dbfc9bbb7d225d914ff7550b9f74c1741e1f9c8f63b3b0d2fd\"" Jan 13 21:24:52.085202 containerd[1457]: time="2025-01-13T21:24:52.085071556Z" level=info msg="RemoveContainer for \"5b6ca373f8b9b9dbfc9bbb7d225d914ff7550b9f74c1741e1f9c8f63b3b0d2fd\" returns successfully" Jan 13 21:24:52.085403 kubelet[2598]: I0113 21:24:52.085371 2598 scope.go:117] "RemoveContainer" containerID="666be6f833fc5320260fed34744ac1870dd0f9a25fad00677d85f6b8f2d87635" Jan 13 21:24:52.086641 containerd[1457]: time="2025-01-13T21:24:52.086593305Z" level=info msg="RemoveContainer for \"666be6f833fc5320260fed34744ac1870dd0f9a25fad00677d85f6b8f2d87635\"" Jan 13 21:24:52.186521 containerd[1457]: time="2025-01-13T21:24:52.186456635Z" level=info msg="RemoveContainer for \"666be6f833fc5320260fed34744ac1870dd0f9a25fad00677d85f6b8f2d87635\" returns successfully" Jan 13 21:24:52.187023 kubelet[2598]: I0113 21:24:52.186765 2598 scope.go:117] "RemoveContainer" containerID="0555876a7f11289d6842a5c8b3538d1528d9e4465b5e60eff3eb381c2dad8005" Jan 13 21:24:52.188044 containerd[1457]: time="2025-01-13T21:24:52.188008151Z" level=info msg="RemoveContainer for \"0555876a7f11289d6842a5c8b3538d1528d9e4465b5e60eff3eb381c2dad8005\"" Jan 13 21:24:52.191917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5d35e2d57d7d77c4ceb9e6cf86fdd78ebf7328a70e6353db9737687c1624758-rootfs.mount: Deactivated successfully. Jan 13 21:24:52.192047 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d5d35e2d57d7d77c4ceb9e6cf86fdd78ebf7328a70e6353db9737687c1624758-shm.mount: Deactivated successfully. Jan 13 21:24:52.192165 systemd[1]: var-lib-kubelet-pods-0f5bfa9f\x2d6dfe\x2d4d69\x2dafbb\x2deb2a4f576204-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6wld2.mount: Deactivated successfully. Jan 13 21:24:52.192279 systemd[1]: var-lib-kubelet-pods-3bb7ed01\x2d29cb\x2d47b4\x2db660\x2d8fa1076ee161-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcdvk8.mount: Deactivated successfully. 
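Each volume of the deleted pods above passes through the same three kubelet records: "UnmountVolume started" from the reconciler, "UnmountVolume.TearDown succeeded" from the operation generator, and finally "Volume detached ... DevicePath \"\"" once the actual state of world is updated; the systemd .mount deactivations are the backing per-pod mounts going away. A simplified, sequential sketch of that bookkeeping (the real reconciler runs volumes concurrently, which is why the three kinds of records interleave above):
```python
def teardown_pod_volumes(pod_uid, volumes):
    # volumes: mapping of volume name -> plugin, e.g. {"bpf-maps": "host-path"}
    for name, plugin in volumes.items():
        print(f'operationExecutor.UnmountVolume started for volume "{name}" pod "{pod_uid}"')
        unmount(name, plugin, pod_uid)
        print(f'UnmountVolume.TearDown succeeded for volume "{name}" pod "{pod_uid}"')
        print(f'Volume detached for volume "{name}" on node "localhost" DevicePath ""')

def unmount(name, plugin, pod_uid):
    # Placeholder: host-path teardown is a no-op on the host side, while
    # projected/secret volumes unmount a per-pod tmpfs (the systemd
    # var-lib-kubelet-pods-*.mount deactivations above).
    pass
```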
Jan 13 21:24:52.192376 systemd[1]: var-lib-kubelet-pods-3bb7ed01\x2d29cb\x2d47b4\x2db660\x2d8fa1076ee161-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 21:24:52.192471 systemd[1]: var-lib-kubelet-pods-3bb7ed01\x2d29cb\x2d47b4\x2db660\x2d8fa1076ee161-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 21:24:52.264797 containerd[1457]: time="2025-01-13T21:24:52.264736393Z" level=info msg="RemoveContainer for \"0555876a7f11289d6842a5c8b3538d1528d9e4465b5e60eff3eb381c2dad8005\" returns successfully" Jan 13 21:24:52.265049 kubelet[2598]: I0113 21:24:52.265014 2598 scope.go:117] "RemoveContainer" containerID="f70999ce1c2c182a42a90e0942d0734932f50924be24dd1e51e091f1d6074fe1" Jan 13 21:24:52.268986 containerd[1457]: time="2025-01-13T21:24:52.268932294Z" level=error msg="ContainerStatus for \"f70999ce1c2c182a42a90e0942d0734932f50924be24dd1e51e091f1d6074fe1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f70999ce1c2c182a42a90e0942d0734932f50924be24dd1e51e091f1d6074fe1\": not found" Jan 13 21:24:52.269166 kubelet[2598]: E0113 21:24:52.269149 2598 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f70999ce1c2c182a42a90e0942d0734932f50924be24dd1e51e091f1d6074fe1\": not found" containerID="f70999ce1c2c182a42a90e0942d0734932f50924be24dd1e51e091f1d6074fe1" Jan 13 21:24:52.269258 kubelet[2598]: I0113 21:24:52.269243 2598 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f70999ce1c2c182a42a90e0942d0734932f50924be24dd1e51e091f1d6074fe1"} err="failed to get container status \"f70999ce1c2c182a42a90e0942d0734932f50924be24dd1e51e091f1d6074fe1\": rpc error: code = NotFound desc = an error occurred when try to find container \"f70999ce1c2c182a42a90e0942d0734932f50924be24dd1e51e091f1d6074fe1\": not found" Jan 13 21:24:52.269310 kubelet[2598]: I0113 21:24:52.269260 2598 scope.go:117] "RemoveContainer" containerID="a662ac8a67f44da5715936324e0c4ad4a404cd82d46d3fe48794e23a96a04e9e" Jan 13 21:24:52.269542 containerd[1457]: time="2025-01-13T21:24:52.269498967Z" level=error msg="ContainerStatus for \"a662ac8a67f44da5715936324e0c4ad4a404cd82d46d3fe48794e23a96a04e9e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a662ac8a67f44da5715936324e0c4ad4a404cd82d46d3fe48794e23a96a04e9e\": not found" Jan 13 21:24:52.269706 kubelet[2598]: E0113 21:24:52.269683 2598 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a662ac8a67f44da5715936324e0c4ad4a404cd82d46d3fe48794e23a96a04e9e\": not found" containerID="a662ac8a67f44da5715936324e0c4ad4a404cd82d46d3fe48794e23a96a04e9e" Jan 13 21:24:52.269737 kubelet[2598]: I0113 21:24:52.269728 2598 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a662ac8a67f44da5715936324e0c4ad4a404cd82d46d3fe48794e23a96a04e9e"} err="failed to get container status \"a662ac8a67f44da5715936324e0c4ad4a404cd82d46d3fe48794e23a96a04e9e\": rpc error: code = NotFound desc = an error occurred when try to find container \"a662ac8a67f44da5715936324e0c4ad4a404cd82d46d3fe48794e23a96a04e9e\": not found" Jan 13 21:24:52.269778 kubelet[2598]: I0113 21:24:52.269745 2598 scope.go:117] "RemoveContainer" 
containerID="5b6ca373f8b9b9dbfc9bbb7d225d914ff7550b9f74c1741e1f9c8f63b3b0d2fd" Jan 13 21:24:52.270010 containerd[1457]: time="2025-01-13T21:24:52.269948457Z" level=error msg="ContainerStatus for \"5b6ca373f8b9b9dbfc9bbb7d225d914ff7550b9f74c1741e1f9c8f63b3b0d2fd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b6ca373f8b9b9dbfc9bbb7d225d914ff7550b9f74c1741e1f9c8f63b3b0d2fd\": not found" Jan 13 21:24:52.270087 kubelet[2598]: E0113 21:24:52.270068 2598 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b6ca373f8b9b9dbfc9bbb7d225d914ff7550b9f74c1741e1f9c8f63b3b0d2fd\": not found" containerID="5b6ca373f8b9b9dbfc9bbb7d225d914ff7550b9f74c1741e1f9c8f63b3b0d2fd" Jan 13 21:24:52.270157 kubelet[2598]: I0113 21:24:52.270105 2598 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b6ca373f8b9b9dbfc9bbb7d225d914ff7550b9f74c1741e1f9c8f63b3b0d2fd"} err="failed to get container status \"5b6ca373f8b9b9dbfc9bbb7d225d914ff7550b9f74c1741e1f9c8f63b3b0d2fd\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b6ca373f8b9b9dbfc9bbb7d225d914ff7550b9f74c1741e1f9c8f63b3b0d2fd\": not found" Jan 13 21:24:52.270157 kubelet[2598]: I0113 21:24:52.270146 2598 scope.go:117] "RemoveContainer" containerID="666be6f833fc5320260fed34744ac1870dd0f9a25fad00677d85f6b8f2d87635" Jan 13 21:24:52.270349 containerd[1457]: time="2025-01-13T21:24:52.270311240Z" level=error msg="ContainerStatus for \"666be6f833fc5320260fed34744ac1870dd0f9a25fad00677d85f6b8f2d87635\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"666be6f833fc5320260fed34744ac1870dd0f9a25fad00677d85f6b8f2d87635\": not found" Jan 13 21:24:52.270482 kubelet[2598]: E0113 21:24:52.270461 2598 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"666be6f833fc5320260fed34744ac1870dd0f9a25fad00677d85f6b8f2d87635\": not found" containerID="666be6f833fc5320260fed34744ac1870dd0f9a25fad00677d85f6b8f2d87635" Jan 13 21:24:52.270514 kubelet[2598]: I0113 21:24:52.270490 2598 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"666be6f833fc5320260fed34744ac1870dd0f9a25fad00677d85f6b8f2d87635"} err="failed to get container status \"666be6f833fc5320260fed34744ac1870dd0f9a25fad00677d85f6b8f2d87635\": rpc error: code = NotFound desc = an error occurred when try to find container \"666be6f833fc5320260fed34744ac1870dd0f9a25fad00677d85f6b8f2d87635\": not found" Jan 13 21:24:52.270514 kubelet[2598]: I0113 21:24:52.270503 2598 scope.go:117] "RemoveContainer" containerID="0555876a7f11289d6842a5c8b3538d1528d9e4465b5e60eff3eb381c2dad8005" Jan 13 21:24:52.270687 containerd[1457]: time="2025-01-13T21:24:52.270643636Z" level=error msg="ContainerStatus for \"0555876a7f11289d6842a5c8b3538d1528d9e4465b5e60eff3eb381c2dad8005\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0555876a7f11289d6842a5c8b3538d1528d9e4465b5e60eff3eb381c2dad8005\": not found" Jan 13 21:24:52.270791 kubelet[2598]: E0113 21:24:52.270776 2598 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0555876a7f11289d6842a5c8b3538d1528d9e4465b5e60eff3eb381c2dad8005\": 
not found" containerID="0555876a7f11289d6842a5c8b3538d1528d9e4465b5e60eff3eb381c2dad8005" Jan 13 21:24:52.270839 kubelet[2598]: I0113 21:24:52.270801 2598 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0555876a7f11289d6842a5c8b3538d1528d9e4465b5e60eff3eb381c2dad8005"} err="failed to get container status \"0555876a7f11289d6842a5c8b3538d1528d9e4465b5e60eff3eb381c2dad8005\": rpc error: code = NotFound desc = an error occurred when try to find container \"0555876a7f11289d6842a5c8b3538d1528d9e4465b5e60eff3eb381c2dad8005\": not found" Jan 13 21:24:52.270839 kubelet[2598]: I0113 21:24:52.270811 2598 scope.go:117] "RemoveContainer" containerID="b02fad0815b18915e51f9aba00e028b7dd0e1a1e70c3293b2238f8152a7f86a8" Jan 13 21:24:52.271778 containerd[1457]: time="2025-01-13T21:24:52.271751254Z" level=info msg="RemoveContainer for \"b02fad0815b18915e51f9aba00e028b7dd0e1a1e70c3293b2238f8152a7f86a8\"" Jan 13 21:24:52.339194 containerd[1457]: time="2025-01-13T21:24:52.339069776Z" level=info msg="RemoveContainer for \"b02fad0815b18915e51f9aba00e028b7dd0e1a1e70c3293b2238f8152a7f86a8\" returns successfully" Jan 13 21:24:52.339435 kubelet[2598]: I0113 21:24:52.339410 2598 scope.go:117] "RemoveContainer" containerID="b02fad0815b18915e51f9aba00e028b7dd0e1a1e70c3293b2238f8152a7f86a8" Jan 13 21:24:52.339773 containerd[1457]: time="2025-01-13T21:24:52.339703297Z" level=error msg="ContainerStatus for \"b02fad0815b18915e51f9aba00e028b7dd0e1a1e70c3293b2238f8152a7f86a8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b02fad0815b18915e51f9aba00e028b7dd0e1a1e70c3293b2238f8152a7f86a8\": not found" Jan 13 21:24:52.339982 kubelet[2598]: E0113 21:24:52.339937 2598 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b02fad0815b18915e51f9aba00e028b7dd0e1a1e70c3293b2238f8152a7f86a8\": not found" containerID="b02fad0815b18915e51f9aba00e028b7dd0e1a1e70c3293b2238f8152a7f86a8" Jan 13 21:24:52.340031 kubelet[2598]: I0113 21:24:52.339999 2598 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b02fad0815b18915e51f9aba00e028b7dd0e1a1e70c3293b2238f8152a7f86a8"} err="failed to get container status \"b02fad0815b18915e51f9aba00e028b7dd0e1a1e70c3293b2238f8152a7f86a8\": rpc error: code = NotFound desc = an error occurred when try to find container \"b02fad0815b18915e51f9aba00e028b7dd0e1a1e70c3293b2238f8152a7f86a8\": not found" Jan 13 21:24:53.088914 kubelet[2598]: I0113 21:24:53.088854 2598 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0f5bfa9f-6dfe-4d69-afbb-eb2a4f576204" path="/var/lib/kubelet/pods/0f5bfa9f-6dfe-4d69-afbb-eb2a4f576204/volumes" Jan 13 21:24:53.089566 kubelet[2598]: I0113 21:24:53.089540 2598 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3bb7ed01-29cb-47b4-b660-8fa1076ee161" path="/var/lib/kubelet/pods/3bb7ed01-29cb-47b4-b660-8fa1076ee161/volumes" Jan 13 21:24:53.103436 sshd[4233]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:53.110784 systemd[1]: sshd@25-10.0.0.82:22-10.0.0.1:50770.service: Deactivated successfully. Jan 13 21:24:53.112954 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 21:24:53.114627 systemd-logind[1437]: Session 26 logged out. Waiting for processes to exit. 
Jan 13 21:24:53.124445 systemd[1]: Started sshd@26-10.0.0.82:22-10.0.0.1:50784.service - OpenSSH per-connection server daemon (10.0.0.1:50784). Jan 13 21:24:53.125570 systemd-logind[1437]: Removed session 26. Jan 13 21:24:53.155213 sshd[4396]: Accepted publickey for core from 10.0.0.1 port 50784 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:24:53.157104 sshd[4396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:53.161484 systemd-logind[1437]: New session 27 of user core. Jan 13 21:24:53.171259 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 13 21:24:53.698434 kubelet[2598]: E0113 21:24:53.698397 2598 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 21:24:53.951305 sshd[4396]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:53.959796 systemd[1]: sshd@26-10.0.0.82:22-10.0.0.1:50784.service: Deactivated successfully. Jan 13 21:24:53.962423 systemd[1]: session-27.scope: Deactivated successfully. Jan 13 21:24:53.964546 systemd-logind[1437]: Session 27 logged out. Waiting for processes to exit. Jan 13 21:24:53.979437 systemd[1]: Started sshd@27-10.0.0.82:22-10.0.0.1:50786.service - OpenSSH per-connection server daemon (10.0.0.1:50786). Jan 13 21:24:53.980652 systemd-logind[1437]: Removed session 27. Jan 13 21:24:54.012072 sshd[4410]: Accepted publickey for core from 10.0.0.1 port 50786 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:24:54.013205 sshd[4410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:54.018574 kubelet[2598]: I0113 21:24:54.018539 2598 topology_manager.go:215] "Topology Admit Handler" podUID="330a13f7-950b-4f19-b138-c2e4281c609c" podNamespace="kube-system" podName="cilium-pzld5" Jan 13 21:24:54.018800 kubelet[2598]: E0113 21:24:54.018783 2598 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0f5bfa9f-6dfe-4d69-afbb-eb2a4f576204" containerName="cilium-operator" Jan 13 21:24:54.019023 kubelet[2598]: E0113 21:24:54.019006 2598 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3bb7ed01-29cb-47b4-b660-8fa1076ee161" containerName="cilium-agent" Jan 13 21:24:54.019144 kubelet[2598]: E0113 21:24:54.019128 2598 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3bb7ed01-29cb-47b4-b660-8fa1076ee161" containerName="mount-cgroup" Jan 13 21:24:54.019222 kubelet[2598]: E0113 21:24:54.019209 2598 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3bb7ed01-29cb-47b4-b660-8fa1076ee161" containerName="mount-bpf-fs" Jan 13 21:24:54.019362 kubelet[2598]: E0113 21:24:54.019275 2598 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3bb7ed01-29cb-47b4-b660-8fa1076ee161" containerName="apply-sysctl-overwrites" Jan 13 21:24:54.019439 kubelet[2598]: E0113 21:24:54.019426 2598 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3bb7ed01-29cb-47b4-b660-8fa1076ee161" containerName="clean-cilium-state" Jan 13 21:24:54.019521 kubelet[2598]: I0113 21:24:54.019508 2598 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5bfa9f-6dfe-4d69-afbb-eb2a4f576204" containerName="cilium-operator" Jan 13 21:24:54.019584 kubelet[2598]: I0113 21:24:54.019573 2598 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bb7ed01-29cb-47b4-b660-8fa1076ee161" containerName="cilium-agent" Jan 13 
21:24:54.022618 systemd-logind[1437]: New session 28 of user core. Jan 13 21:24:54.034119 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 13 21:24:54.045547 systemd[1]: Created slice kubepods-burstable-pod330a13f7_950b_4f19_b138_c2e4281c609c.slice - libcontainer container kubepods-burstable-pod330a13f7_950b_4f19_b138_c2e4281c609c.slice. Jan 13 21:24:54.090827 sshd[4410]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:54.106436 systemd[1]: sshd@27-10.0.0.82:22-10.0.0.1:50786.service: Deactivated successfully. Jan 13 21:24:54.108444 systemd[1]: session-28.scope: Deactivated successfully. Jan 13 21:24:54.110163 systemd-logind[1437]: Session 28 logged out. Waiting for processes to exit. Jan 13 21:24:54.126518 systemd[1]: Started sshd@28-10.0.0.82:22-10.0.0.1:50802.service - OpenSSH per-connection server daemon (10.0.0.1:50802). Jan 13 21:24:54.127578 systemd-logind[1437]: Removed session 28. Jan 13 21:24:54.154742 sshd[4418]: Accepted publickey for core from 10.0.0.1 port 50802 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:24:54.156529 sshd[4418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:54.162027 systemd-logind[1437]: New session 29 of user core. Jan 13 21:24:54.173336 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 13 21:24:54.184236 kubelet[2598]: I0113 21:24:54.184186 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/330a13f7-950b-4f19-b138-c2e4281c609c-cilium-cgroup\") pod \"cilium-pzld5\" (UID: \"330a13f7-950b-4f19-b138-c2e4281c609c\") " pod="kube-system/cilium-pzld5" Jan 13 21:24:54.184236 kubelet[2598]: I0113 21:24:54.184240 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/330a13f7-950b-4f19-b138-c2e4281c609c-cilium-ipsec-secrets\") pod \"cilium-pzld5\" (UID: \"330a13f7-950b-4f19-b138-c2e4281c609c\") " pod="kube-system/cilium-pzld5" Jan 13 21:24:54.184420 kubelet[2598]: I0113 21:24:54.184308 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/330a13f7-950b-4f19-b138-c2e4281c609c-bpf-maps\") pod \"cilium-pzld5\" (UID: \"330a13f7-950b-4f19-b138-c2e4281c609c\") " pod="kube-system/cilium-pzld5" Jan 13 21:24:54.184420 kubelet[2598]: I0113 21:24:54.184357 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/330a13f7-950b-4f19-b138-c2e4281c609c-cilium-run\") pod \"cilium-pzld5\" (UID: \"330a13f7-950b-4f19-b138-c2e4281c609c\") " pod="kube-system/cilium-pzld5" Jan 13 21:24:54.184420 kubelet[2598]: I0113 21:24:54.184382 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/330a13f7-950b-4f19-b138-c2e4281c609c-host-proc-sys-net\") pod \"cilium-pzld5\" (UID: \"330a13f7-950b-4f19-b138-c2e4281c609c\") " pod="kube-system/cilium-pzld5" Jan 13 21:24:54.184420 kubelet[2598]: I0113 21:24:54.184422 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/330a13f7-950b-4f19-b138-c2e4281c609c-etc-cni-netd\") pod \"cilium-pzld5\" (UID: 
\"330a13f7-950b-4f19-b138-c2e4281c609c\") " pod="kube-system/cilium-pzld5" Jan 13 21:24:54.184526 kubelet[2598]: I0113 21:24:54.184470 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/330a13f7-950b-4f19-b138-c2e4281c609c-lib-modules\") pod \"cilium-pzld5\" (UID: \"330a13f7-950b-4f19-b138-c2e4281c609c\") " pod="kube-system/cilium-pzld5" Jan 13 21:24:54.184526 kubelet[2598]: I0113 21:24:54.184521 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/330a13f7-950b-4f19-b138-c2e4281c609c-cilium-config-path\") pod \"cilium-pzld5\" (UID: \"330a13f7-950b-4f19-b138-c2e4281c609c\") " pod="kube-system/cilium-pzld5" Jan 13 21:24:54.184567 kubelet[2598]: I0113 21:24:54.184546 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/330a13f7-950b-4f19-b138-c2e4281c609c-host-proc-sys-kernel\") pod \"cilium-pzld5\" (UID: \"330a13f7-950b-4f19-b138-c2e4281c609c\") " pod="kube-system/cilium-pzld5" Jan 13 21:24:54.184591 kubelet[2598]: I0113 21:24:54.184568 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/330a13f7-950b-4f19-b138-c2e4281c609c-xtables-lock\") pod \"cilium-pzld5\" (UID: \"330a13f7-950b-4f19-b138-c2e4281c609c\") " pod="kube-system/cilium-pzld5" Jan 13 21:24:54.184629 kubelet[2598]: I0113 21:24:54.184622 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/330a13f7-950b-4f19-b138-c2e4281c609c-clustermesh-secrets\") pod \"cilium-pzld5\" (UID: \"330a13f7-950b-4f19-b138-c2e4281c609c\") " pod="kube-system/cilium-pzld5" Jan 13 21:24:54.184718 kubelet[2598]: I0113 21:24:54.184661 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcdjv\" (UniqueName: \"kubernetes.io/projected/330a13f7-950b-4f19-b138-c2e4281c609c-kube-api-access-jcdjv\") pod \"cilium-pzld5\" (UID: \"330a13f7-950b-4f19-b138-c2e4281c609c\") " pod="kube-system/cilium-pzld5" Jan 13 21:24:54.184758 kubelet[2598]: I0113 21:24:54.184728 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/330a13f7-950b-4f19-b138-c2e4281c609c-cni-path\") pod \"cilium-pzld5\" (UID: \"330a13f7-950b-4f19-b138-c2e4281c609c\") " pod="kube-system/cilium-pzld5" Jan 13 21:24:54.184794 kubelet[2598]: I0113 21:24:54.184770 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/330a13f7-950b-4f19-b138-c2e4281c609c-hubble-tls\") pod \"cilium-pzld5\" (UID: \"330a13f7-950b-4f19-b138-c2e4281c609c\") " pod="kube-system/cilium-pzld5" Jan 13 21:24:54.184823 kubelet[2598]: I0113 21:24:54.184816 2598 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/330a13f7-950b-4f19-b138-c2e4281c609c-hostproc\") pod \"cilium-pzld5\" (UID: \"330a13f7-950b-4f19-b138-c2e4281c609c\") " pod="kube-system/cilium-pzld5" Jan 13 21:24:54.348430 kubelet[2598]: E0113 21:24:54.348390 2598 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:24:54.349216 containerd[1457]: time="2025-01-13T21:24:54.349143520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pzld5,Uid:330a13f7-950b-4f19-b138-c2e4281c609c,Namespace:kube-system,Attempt:0,}" Jan 13 21:24:54.372795 containerd[1457]: time="2025-01-13T21:24:54.372676455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:24:54.372795 containerd[1457]: time="2025-01-13T21:24:54.372769092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:24:54.372795 containerd[1457]: time="2025-01-13T21:24:54.372783449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:24:54.372933 containerd[1457]: time="2025-01-13T21:24:54.372878821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:24:54.394269 systemd[1]: Started cri-containerd-b3bff16f9da9457038a618ebe07b340d76df0bbf3c21e535f4bbaa7f1c6d3dd9.scope - libcontainer container b3bff16f9da9457038a618ebe07b340d76df0bbf3c21e535f4bbaa7f1c6d3dd9. Jan 13 21:24:54.417783 containerd[1457]: time="2025-01-13T21:24:54.417739088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pzld5,Uid:330a13f7-950b-4f19-b138-c2e4281c609c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3bff16f9da9457038a618ebe07b340d76df0bbf3c21e535f4bbaa7f1c6d3dd9\"" Jan 13 21:24:54.418622 kubelet[2598]: E0113 21:24:54.418596 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:24:54.421733 containerd[1457]: time="2025-01-13T21:24:54.421686297Z" level=info msg="CreateContainer within sandbox \"b3bff16f9da9457038a618ebe07b340d76df0bbf3c21e535f4bbaa7f1c6d3dd9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 21:24:54.435485 containerd[1457]: time="2025-01-13T21:24:54.435437139Z" level=info msg="CreateContainer within sandbox \"b3bff16f9da9457038a618ebe07b340d76df0bbf3c21e535f4bbaa7f1c6d3dd9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"16a869d8393f10cd8cbb388312c6cc95e85516fb40606197274484095958f79b\"" Jan 13 21:24:54.435976 containerd[1457]: time="2025-01-13T21:24:54.435930050Z" level=info msg="StartContainer for \"16a869d8393f10cd8cbb388312c6cc95e85516fb40606197274484095958f79b\"" Jan 13 21:24:54.463239 systemd[1]: Started cri-containerd-16a869d8393f10cd8cbb388312c6cc95e85516fb40606197274484095958f79b.scope - libcontainer container 16a869d8393f10cd8cbb388312c6cc95e85516fb40606197274484095958f79b. Jan 13 21:24:54.489250 containerd[1457]: time="2025-01-13T21:24:54.489201952Z" level=info msg="StartContainer for \"16a869d8393f10cd8cbb388312c6cc95e85516fb40606197274484095958f79b\" returns successfully" Jan 13 21:24:54.498413 systemd[1]: cri-containerd-16a869d8393f10cd8cbb388312c6cc95e85516fb40606197274484095958f79b.scope: Deactivated successfully. 
Jan 13 21:24:54.533156 containerd[1457]: time="2025-01-13T21:24:54.533054454Z" level=info msg="shim disconnected" id=16a869d8393f10cd8cbb388312c6cc95e85516fb40606197274484095958f79b namespace=k8s.io
Jan 13 21:24:54.533156 containerd[1457]: time="2025-01-13T21:24:54.533147643Z" level=warning msg="cleaning up after shim disconnected" id=16a869d8393f10cd8cbb388312c6cc95e85516fb40606197274484095958f79b namespace=k8s.io
Jan 13 21:24:54.533156 containerd[1457]: time="2025-01-13T21:24:54.533162050Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:24:54.821363 kubelet[2598]: E0113 21:24:54.821336 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:24:54.825155 containerd[1457]: time="2025-01-13T21:24:54.823128758Z" level=info msg="CreateContainer within sandbox \"b3bff16f9da9457038a618ebe07b340d76df0bbf3c21e535f4bbaa7f1c6d3dd9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 21:24:54.837668 containerd[1457]: time="2025-01-13T21:24:54.837612620Z" level=info msg="CreateContainer within sandbox \"b3bff16f9da9457038a618ebe07b340d76df0bbf3c21e535f4bbaa7f1c6d3dd9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c95b3d5881566190ced7d0296d76ebbba75ba7d13f659f7eefe03101b6a1a959\""
Jan 13 21:24:54.838262 containerd[1457]: time="2025-01-13T21:24:54.838217075Z" level=info msg="StartContainer for \"c95b3d5881566190ced7d0296d76ebbba75ba7d13f659f7eefe03101b6a1a959\""
Jan 13 21:24:54.873266 systemd[1]: Started cri-containerd-c95b3d5881566190ced7d0296d76ebbba75ba7d13f659f7eefe03101b6a1a959.scope - libcontainer container c95b3d5881566190ced7d0296d76ebbba75ba7d13f659f7eefe03101b6a1a959.
Jan 13 21:24:54.898252 containerd[1457]: time="2025-01-13T21:24:54.898203761Z" level=info msg="StartContainer for \"c95b3d5881566190ced7d0296d76ebbba75ba7d13f659f7eefe03101b6a1a959\" returns successfully"
Jan 13 21:24:54.906710 systemd[1]: cri-containerd-c95b3d5881566190ced7d0296d76ebbba75ba7d13f659f7eefe03101b6a1a959.scope: Deactivated successfully.
Jan 13 21:24:54.929746 containerd[1457]: time="2025-01-13T21:24:54.929671961Z" level=info msg="shim disconnected" id=c95b3d5881566190ced7d0296d76ebbba75ba7d13f659f7eefe03101b6a1a959 namespace=k8s.io
Jan 13 21:24:54.929746 containerd[1457]: time="2025-01-13T21:24:54.929741734Z" level=warning msg="cleaning up after shim disconnected" id=c95b3d5881566190ced7d0296d76ebbba75ba7d13f659f7eefe03101b6a1a959 namespace=k8s.io
Jan 13 21:24:54.929746 containerd[1457]: time="2025-01-13T21:24:54.929752464Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:24:55.289394 kubelet[2598]: I0113 21:24:55.289364 2598 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T21:24:55Z","lastTransitionTime":"2025-01-13T21:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 21:24:55.825093 kubelet[2598]: E0113 21:24:55.825062 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:24:55.827367 containerd[1457]: time="2025-01-13T21:24:55.827309784Z" level=info msg="CreateContainer within sandbox \"b3bff16f9da9457038a618ebe07b340d76df0bbf3c21e535f4bbaa7f1c6d3dd9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 21:24:55.847739 containerd[1457]: time="2025-01-13T21:24:55.847672035Z" level=info msg="CreateContainer within sandbox \"b3bff16f9da9457038a618ebe07b340d76df0bbf3c21e535f4bbaa7f1c6d3dd9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c77b9afbe7e9953e9df59fa4978f9a5247d18b0e957103e7873c7e720e3793eb\""
Jan 13 21:24:55.848346 containerd[1457]: time="2025-01-13T21:24:55.848321605Z" level=info msg="StartContainer for \"c77b9afbe7e9953e9df59fa4978f9a5247d18b0e957103e7873c7e720e3793eb\""
Jan 13 21:24:55.888321 systemd[1]: Started cri-containerd-c77b9afbe7e9953e9df59fa4978f9a5247d18b0e957103e7873c7e720e3793eb.scope - libcontainer container c77b9afbe7e9953e9df59fa4978f9a5247d18b0e957103e7873c7e720e3793eb.
Jan 13 21:24:55.918944 systemd[1]: cri-containerd-c77b9afbe7e9953e9df59fa4978f9a5247d18b0e957103e7873c7e720e3793eb.scope: Deactivated successfully.
Jan 13 21:24:55.997894 containerd[1457]: time="2025-01-13T21:24:55.997783917Z" level=info msg="StartContainer for \"c77b9afbe7e9953e9df59fa4978f9a5247d18b0e957103e7873c7e720e3793eb\" returns successfully"
Jan 13 21:24:56.145802 containerd[1457]: time="2025-01-13T21:24:56.145663834Z" level=info msg="shim disconnected" id=c77b9afbe7e9953e9df59fa4978f9a5247d18b0e957103e7873c7e720e3793eb namespace=k8s.io
Jan 13 21:24:56.145802 containerd[1457]: time="2025-01-13T21:24:56.145722186Z" level=warning msg="cleaning up after shim disconnected" id=c77b9afbe7e9953e9df59fa4978f9a5247d18b0e957103e7873c7e720e3793eb namespace=k8s.io
Jan 13 21:24:56.145802 containerd[1457]: time="2025-01-13T21:24:56.145730271Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:24:56.290952 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c77b9afbe7e9953e9df59fa4978f9a5247d18b0e957103e7873c7e720e3793eb-rootfs.mount: Deactivated successfully.
Jan 13 21:24:56.828964 kubelet[2598]: E0113 21:24:56.828928 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:24:56.830641 containerd[1457]: time="2025-01-13T21:24:56.830593153Z" level=info msg="CreateContainer within sandbox \"b3bff16f9da9457038a618ebe07b340d76df0bbf3c21e535f4bbaa7f1c6d3dd9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 21:24:56.958821 containerd[1457]: time="2025-01-13T21:24:56.958778569Z" level=info msg="CreateContainer within sandbox \"b3bff16f9da9457038a618ebe07b340d76df0bbf3c21e535f4bbaa7f1c6d3dd9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c8f34acf1223a62e17f043634d2dea6a1212700f0f82704b229cdaa3a9dbe21e\""
Jan 13 21:24:56.959291 containerd[1457]: time="2025-01-13T21:24:56.959271810Z" level=info msg="StartContainer for \"c8f34acf1223a62e17f043634d2dea6a1212700f0f82704b229cdaa3a9dbe21e\""
Jan 13 21:24:56.984256 systemd[1]: Started cri-containerd-c8f34acf1223a62e17f043634d2dea6a1212700f0f82704b229cdaa3a9dbe21e.scope - libcontainer container c8f34acf1223a62e17f043634d2dea6a1212700f0f82704b229cdaa3a9dbe21e.
Jan 13 21:24:57.007377 systemd[1]: cri-containerd-c8f34acf1223a62e17f043634d2dea6a1212700f0f82704b229cdaa3a9dbe21e.scope: Deactivated successfully.
Jan 13 21:24:57.038849 containerd[1457]: time="2025-01-13T21:24:57.038779961Z" level=info msg="StartContainer for \"c8f34acf1223a62e17f043634d2dea6a1212700f0f82704b229cdaa3a9dbe21e\" returns successfully"
Jan 13 21:24:57.060482 containerd[1457]: time="2025-01-13T21:24:57.060419344Z" level=info msg="shim disconnected" id=c8f34acf1223a62e17f043634d2dea6a1212700f0f82704b229cdaa3a9dbe21e namespace=k8s.io
Jan 13 21:24:57.060482 containerd[1457]: time="2025-01-13T21:24:57.060465433Z" level=warning msg="cleaning up after shim disconnected" id=c8f34acf1223a62e17f043634d2dea6a1212700f0f82704b229cdaa3a9dbe21e namespace=k8s.io
Jan 13 21:24:57.060482 containerd[1457]: time="2025-01-13T21:24:57.060473798Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:24:57.086505 kubelet[2598]: E0113 21:24:57.086387 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:24:57.291022 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8f34acf1223a62e17f043634d2dea6a1212700f0f82704b229cdaa3a9dbe21e-rootfs.mount: Deactivated successfully.
Jan 13 21:24:57.832504 kubelet[2598]: E0113 21:24:57.832475 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:24:57.835151 containerd[1457]: time="2025-01-13T21:24:57.835098730Z" level=info msg="CreateContainer within sandbox \"b3bff16f9da9457038a618ebe07b340d76df0bbf3c21e535f4bbaa7f1c6d3dd9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 21:24:57.849297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3455609244.mount: Deactivated successfully.
Jan 13 21:24:57.851379 containerd[1457]: time="2025-01-13T21:24:57.851329277Z" level=info msg="CreateContainer within sandbox \"b3bff16f9da9457038a618ebe07b340d76df0bbf3c21e535f4bbaa7f1c6d3dd9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4376229d65471c58ef02750a4c5ede599d5dab42de879a256dcc070c67617e25\""
Jan 13 21:24:57.851923 containerd[1457]: time="2025-01-13T21:24:57.851888173Z" level=info msg="StartContainer for \"4376229d65471c58ef02750a4c5ede599d5dab42de879a256dcc070c67617e25\""
Jan 13 21:24:57.879259 systemd[1]: Started cri-containerd-4376229d65471c58ef02750a4c5ede599d5dab42de879a256dcc070c67617e25.scope - libcontainer container 4376229d65471c58ef02750a4c5ede599d5dab42de879a256dcc070c67617e25.
Jan 13 21:24:57.908359 containerd[1457]: time="2025-01-13T21:24:57.908318794Z" level=info msg="StartContainer for \"4376229d65471c58ef02750a4c5ede599d5dab42de879a256dcc070c67617e25\" returns successfully"
Jan 13 21:24:58.330146 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 13 21:24:58.849385 kubelet[2598]: E0113 21:24:58.849343 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:24:58.861126 kubelet[2598]: I0113 21:24:58.860840 2598 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-pzld5" podStartSLOduration=4.860805358 podStartE2EDuration="4.860805358s" podCreationTimestamp="2025-01-13 21:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:24:58.860645904 +0000 UTC m=+85.860661920" watchObservedRunningTime="2025-01-13 21:24:58.860805358 +0000 UTC m=+85.860821354"
Jan 13 21:25:00.351217 kubelet[2598]: E0113 21:25:00.350377 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:00.471979 systemd[1]: run-containerd-runc-k8s.io-4376229d65471c58ef02750a4c5ede599d5dab42de879a256dcc070c67617e25-runc.QHuYax.mount: Deactivated successfully.
Jan 13 21:25:01.356350 systemd-networkd[1394]: lxc_health: Link UP
Jan 13 21:25:01.362375 systemd-networkd[1394]: lxc_health: Gained carrier
Jan 13 21:25:02.350905 kubelet[2598]: E0113 21:25:02.350675 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:02.575702 systemd[1]: run-containerd-runc-k8s.io-4376229d65471c58ef02750a4c5ede599d5dab42de879a256dcc070c67617e25-runc.uW17Sx.mount: Deactivated successfully.
Jan 13 21:25:02.624387 kubelet[2598]: E0113 21:25:02.624281 2598 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:33424->127.0.0.1:44099: write tcp 127.0.0.1:33424->127.0.0.1:44099: write: broken pipe
Jan 13 21:25:02.855617 kubelet[2598]: E0113 21:25:02.855582 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:03.228439 systemd-networkd[1394]: lxc_health: Gained IPv6LL
Jan 13 21:25:04.086661 kubelet[2598]: E0113 21:25:04.086595 2598 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:08.962872 sshd[4418]: pam_unix(sshd:session): session closed for user core
Jan 13 21:25:08.966598 systemd[1]: sshd@28-10.0.0.82:22-10.0.0.1:50802.service: Deactivated successfully.
Jan 13 21:25:08.968629 systemd[1]: session-29.scope: Deactivated successfully.
Jan 13 21:25:08.969283 systemd-logind[1437]: Session 29 logged out. Waiting for processes to exit.
Jan 13 21:25:08.970220 systemd-logind[1437]: Removed session 29.