Mar 17 17:45:26.910295 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Mon Mar 17 16:09:25 -00 2025
Mar 17 17:45:26.910318 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 17:45:26.910330 kernel: BIOS-provided physical RAM map:
Mar 17 17:45:26.910337 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 17 17:45:26.910343 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 17 17:45:26.910350 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 17 17:45:26.910357 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 17 17:45:26.910375 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 17 17:45:26.910391 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 17 17:45:26.910415 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 17 17:45:26.910431 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 17:45:26.910441 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 17 17:45:26.910462 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 17 17:45:26.910470 kernel: NX (Execute Disable) protection: active
Mar 17 17:45:26.910478 kernel: APIC: Static calls initialized
Mar 17 17:45:26.910491 kernel: SMBIOS 2.8 present.
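The verity.usr and verity.usrhash parameters recorded above are what the Flatcar initrd later uses to assemble the dm-verity-protected /usr device. An illustrative sketch (not part of the log; plain Python, with the cmdline string copied verbatim from the entry above) of how such key=value parameters can be pulled apart:

import shlex

# Kernel command line exactly as logged above.
cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
           "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
           "console=ttyS0,115200 flatcar.first_boot=detected "
           "verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c")

params, flags = {}, []
for token in shlex.split(cmdline):
    key, sep, value = token.partition("=")  # split on the first '=' only
    if sep:
        params[key] = value  # a later duplicate key overrides an earlier one
    else:
        flags.append(token)

print(params["root"])            # LABEL=ROOT
print(params["verity.usrhash"])  # 2a4a0f64c016...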
Mar 17 17:45:26.910498 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 17 17:45:26.910505 kernel: Hypervisor detected: KVM
Mar 17 17:45:26.910521 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 17:45:26.910529 kernel: kvm-clock: using sched offset of 2930661371 cycles
Mar 17 17:45:26.910537 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 17:45:26.910544 kernel: tsc: Detected 2794.750 MHz processor
Mar 17 17:45:26.910552 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 17:45:26.910559 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 17:45:26.910567 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 17 17:45:26.910577 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 17 17:45:26.910584 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 17:45:26.910591 kernel: Using GB pages for direct mapping
Mar 17 17:45:26.910599 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:45:26.910606 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 17 17:45:26.910613 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:45:26.910621 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:45:26.910628 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:45:26.910635 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 17 17:45:26.910645 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:45:26.910652 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:45:26.910659 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:45:26.910667 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:45:26.910674 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Mar 17 17:45:26.910681 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Mar 17 17:45:26.910785 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 17 17:45:26.910797 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Mar 17 17:45:26.910807 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Mar 17 17:45:26.910814 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Mar 17 17:45:26.910822 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Mar 17 17:45:26.910829 kernel: No NUMA configuration found
Mar 17 17:45:26.910837 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 17 17:45:26.910844 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 17 17:45:26.910855 kernel: Zone ranges:
Mar 17 17:45:26.910862 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 17:45:26.910870 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 17 17:45:26.910877 kernel: Normal empty
Mar 17 17:45:26.910884 kernel: Movable zone start for each node
Mar 17 17:45:26.910892 kernel: Early memory node ranges
Mar 17 17:45:26.910899 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 17 17:45:26.910907 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 17 17:45:26.910914 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 17 17:45:26.910924 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 17:45:26.910934 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 17 17:45:26.910942 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 17 17:45:26.910949 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 17 17:45:26.910956 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 17:45:26.910964 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 17:45:26.910971 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 17 17:45:26.910979 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 17:45:26.910986 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 17:45:26.910994 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 17:45:26.911004 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 17:45:26.911011 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 17:45:26.911018 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 17 17:45:26.911026 kernel: TSC deadline timer available
Mar 17 17:45:26.911033 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 17 17:45:26.911041 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 17 17:45:26.911048 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 17 17:45:26.911058 kernel: kvm-guest: setup PV sched yield
Mar 17 17:45:26.911065 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 17 17:45:26.911075 kernel: Booting paravirtualized kernel on KVM
Mar 17 17:45:26.911083 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 17:45:26.911091 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 17 17:45:26.911098 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Mar 17 17:45:26.911106 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Mar 17 17:45:26.911113 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 17 17:45:26.911120 kernel: kvm-guest: PV spinlocks enabled
Mar 17 17:45:26.911128 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 17 17:45:26.911136 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 17:45:26.911147 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:45:26.911154 kernel: random: crng init done
Mar 17 17:45:26.911162 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 17:45:26.911169 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:45:26.911177 kernel: Fallback order for Node 0: 0
Mar 17 17:45:26.911184 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 17 17:45:26.911192 kernel: Policy zone: DMA32
Mar 17 17:45:26.911199 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:45:26.911209 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2303K rwdata, 22860K rodata, 43476K init, 1596K bss, 138948K reserved, 0K cma-reserved)
Mar 17 17:45:26.911217 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 17 17:45:26.911225 kernel: ftrace: allocating 37910 entries in 149 pages
Mar 17 17:45:26.911232 kernel: ftrace: allocated 149 pages with 4 groups
Mar 17 17:45:26.911239 kernel: Dynamic Preempt: voluntary
Mar 17 17:45:26.911247 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:45:26.911255 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:45:26.911263 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 17 17:45:26.911270 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:45:26.911281 kernel: Rude variant of Tasks RCU enabled.
Mar 17 17:45:26.911288 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:45:26.911298 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:45:26.911305 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 17 17:45:26.911313 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 17 17:45:26.911320 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:45:26.911328 kernel: Console: colour VGA+ 80x25
Mar 17 17:45:26.911335 kernel: printk: console [ttyS0] enabled
Mar 17 17:45:26.911342 kernel: ACPI: Core revision 20230628
Mar 17 17:45:26.911352 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 17 17:45:26.911360 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 17:45:26.911367 kernel: x2apic enabled
Mar 17 17:45:26.911375 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 17 17:45:26.911382 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 17 17:45:26.911390 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 17 17:45:26.911398 kernel: kvm-guest: setup PV IPIs
Mar 17 17:45:26.911415 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 17 17:45:26.911423 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 17 17:45:26.911431 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Mar 17 17:45:26.911439 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 17 17:45:26.911447 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 17 17:45:26.911457 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 17 17:45:26.911465 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 17:45:26.911473 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 17:45:26.911481 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 17:45:26.911488 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 17:45:26.911499 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Mar 17 17:45:26.911508 kernel: RETBleed: Mitigation: untrained return thunk
Mar 17 17:45:26.911524 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 17:45:26.911535 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 17 17:45:26.911549 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 17 17:45:26.911566 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 17 17:45:26.911593 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 17 17:45:26.911615 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 17:45:26.911658 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 17:45:26.911687 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 17:45:26.911715 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 17:45:26.911723 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 17 17:45:26.911731 kernel: Freeing SMP alternatives memory: 32K
Mar 17 17:45:26.911739 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:45:26.911747 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:45:26.911755 kernel: landlock: Up and running.
Mar 17 17:45:26.911762 kernel: SELinux: Initializing.
Mar 17 17:45:26.911775 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:45:26.911783 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:45:26.911791 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Mar 17 17:45:26.911799 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:45:26.911807 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:45:26.911815 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:45:26.911827 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 17 17:45:26.911835 kernel: ... version: 0
Mar 17 17:45:26.911842 kernel: ... bit width: 48
Mar 17 17:45:26.911853 kernel: ... generic registers: 6
Mar 17 17:45:26.911860 kernel: ... value mask: 0000ffffffffffff
Mar 17 17:45:26.911868 kernel: ... max period: 00007fffffffffff
Mar 17 17:45:26.911876 kernel: ... fixed-purpose events: 0
Mar 17 17:45:26.911884 kernel: ... event mask: 000000000000003f
Mar 17 17:45:26.911891 kernel: signal: max sigframe size: 1776
Mar 17 17:45:26.911900 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:45:26.911915 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:45:26.911942 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:45:26.911969 kernel: smpboot: x86: Booting SMP configuration:
Mar 17 17:45:26.911980 kernel: .... node #0, CPUs: #1 #2 #3
Mar 17 17:45:26.911990 kernel: smp: Brought up 1 node, 4 CPUs
Mar 17 17:45:26.912000 kernel: smpboot: Max logical packages: 1
Mar 17 17:45:26.912010 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Mar 17 17:45:26.912020 kernel: devtmpfs: initialized
Mar 17 17:45:26.912028 kernel: x86/mm: Memory block size: 128MB
Mar 17 17:45:26.912036 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:45:26.912044 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 17 17:45:26.912056 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:45:26.912064 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:45:26.912072 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:45:26.912080 kernel: audit: type=2000 audit(1742233525.626:1): state=initialized audit_enabled=0 res=1
Mar 17 17:45:26.912087 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:45:26.912095 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 17:45:26.912103 kernel: cpuidle: using governor menu
Mar 17 17:45:26.912111 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:45:26.912119 kernel: dca service started, version 1.12.1
Mar 17 17:45:26.912129 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 17 17:45:26.912137 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 17 17:45:26.912145 kernel: PCI: Using configuration type 1 for base access
Mar 17 17:45:26.912153 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
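The per-CPU BogoMIPS figure above follows directly from the preset lpj value, and the SMP summary is simply four times it. A quick arithmetic check (illustrative, not part of the log; assumes HZ=1000, which is what the logged numbers imply):

lpj = 2794750                   # loops_per_jiffy preset logged during calibration
hz = 1000                       # assumed timer tick rate
bogomips = lpj * hz / 500_000   # kernel's BogoMIPS formula: lpj / (500000 / HZ)
print(f"{bogomips:.2f}")        # 5589.50, the per-CPU value above
print(f"{4 * bogomips:.2f}")    # 22358.00, the 4-CPU total above

Equivalently, 5589.50 is twice the 2794.750 MHz TSC frequency detected earlier.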
Mar 17 17:45:26.912161 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:45:26.912169 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:45:26.912177 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:45:26.912184 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:45:26.912192 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:45:26.912203 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:45:26.912210 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:45:26.912218 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:45:26.912226 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:45:26.912234 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 17 17:45:26.912242 kernel: ACPI: Interpreter enabled
Mar 17 17:45:26.912249 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 17 17:45:26.912257 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 17:45:26.912265 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 17:45:26.912275 kernel: PCI: Using E820 reservations for host bridge windows
Mar 17 17:45:26.912283 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 17 17:45:26.912291 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 17:45:26.912505 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:45:26.912670 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 17 17:45:26.912820 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 17 17:45:26.912831 kernel: PCI host bridge to bus 0000:00
Mar 17 17:45:26.912991 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 17:45:26.913132 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 17:45:26.913333 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 17:45:26.913463 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 17 17:45:26.913595 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 17 17:45:26.913748 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 17 17:45:26.913872 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 17:45:26.914027 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 17 17:45:26.914206 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 17 17:45:26.914352 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 17 17:45:26.914510 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 17 17:45:26.914656 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 17 17:45:26.914817 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 17:45:26.914985 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 17 17:45:26.915126 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 17 17:45:26.915257 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 17 17:45:26.915387 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 17 17:45:26.915536 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 17 17:45:26.915668 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 17 17:45:26.915822 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 17 17:45:26.915953 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 17 17:45:26.916098 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 17 17:45:26.916229 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 17 17:45:26.916360 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 17 17:45:26.916494 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 17 17:45:26.916634 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 17 17:45:26.916842 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 17 17:45:26.916987 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 17 17:45:26.917116 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0x180 took 16601 usecs
Mar 17 17:45:26.917253 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 17 17:45:26.917381 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 17 17:45:26.917510 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 17 17:45:26.917659 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 17 17:45:26.917853 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 17 17:45:26.917875 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 17:45:26.917883 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 17:45:26.917891 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 17:45:26.917899 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 17:45:26.917907 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 17 17:45:26.917915 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 17 17:45:26.917923 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 17 17:45:26.917931 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 17 17:45:26.917939 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 17 17:45:26.917949 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 17 17:45:26.917957 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 17 17:45:26.917965 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 17 17:45:26.917973 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 17 17:45:26.917981 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 17 17:45:26.917989 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 17 17:45:26.917997 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 17 17:45:26.918005 kernel: iommu: Default domain type: Translated
Mar 17 17:45:26.918012 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 17:45:26.918023 kernel: PCI: Using ACPI for IRQ routing
Mar 17 17:45:26.918031 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 17:45:26.918038 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 17 17:45:26.918046 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 17 17:45:26.918180 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 17 17:45:26.918310 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 17 17:45:26.918439 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 17:45:26.918449 kernel: vgaarb: loaded
Mar 17 17:45:26.918461 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 17 17:45:26.918469 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 17 17:45:26.918477 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 17:45:26.918486 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:45:26.918494 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:45:26.918504 kernel: pnp: PnP ACPI init
Mar 17 17:45:26.918669 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 17 17:45:26.918681 kernel: pnp: PnP ACPI: found 6 devices
Mar 17 17:45:26.918767 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 17:45:26.918775 kernel: NET: Registered PF_INET protocol family
Mar 17 17:45:26.918783 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 17:45:26.918791 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 17:45:26.918799 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:45:26.918808 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:45:26.918816 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 17 17:45:26.918824 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 17:45:26.918832 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:45:26.918843 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:45:26.918851 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:45:26.918859 kernel: NET: Registered PF_XDP protocol family
Mar 17 17:45:26.918983 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 17:45:26.919101 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 17:45:26.919217 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 17:45:26.919333 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 17 17:45:26.919449 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 17 17:45:26.919588 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 17 17:45:26.919599 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:45:26.919607 kernel: Initialise system trusted keyrings
Mar 17 17:45:26.919615 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 17:45:26.919623 kernel: Key type asymmetric registered
Mar 17 17:45:26.919631 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:45:26.919639 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 17 17:45:26.919647 kernel: io scheduler mq-deadline registered
Mar 17 17:45:26.919654 kernel: io scheduler kyber registered
Mar 17 17:45:26.919666 kernel: io scheduler bfq registered
Mar 17 17:45:26.919674 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 17:45:26.919682 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 17 17:45:26.919702 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 17 17:45:26.919710 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 17 17:45:26.919718 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:45:26.919728 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 17:45:26.919739 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 17:45:26.919749 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 17:45:26.919762 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 17:45:26.919923 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 17 17:45:26.919937 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 17 17:45:26.920056 kernel: rtc_cmos 00:04: registered as rtc0
Mar 17 17:45:26.920177 kernel: rtc_cmos 00:04: setting system clock to 2025-03-17T17:45:26 UTC (1742233526)
Mar 17 17:45:26.920298 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 17 17:45:26.920309 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 17 17:45:26.920316 kernel: NET: Registered PF_INET6 protocol family
Mar 17 17:45:26.920328 kernel: Segment Routing with IPv6
Mar 17 17:45:26.920337 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 17:45:26.920344 kernel: NET: Registered PF_PACKET protocol family
Mar 17 17:45:26.920352 kernel: Key type dns_resolver registered
Mar 17 17:45:26.920360 kernel: IPI shorthand broadcast: enabled
Mar 17 17:45:26.920368 kernel: sched_clock: Marking stable (874003216, 108872581)->(1217985502, -235109705)
Mar 17 17:45:26.920376 kernel: registered taskstats version 1
Mar 17 17:45:26.920384 kernel: Loading compiled-in X.509 certificates
Mar 17 17:45:26.920392 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 2d438fc13e28f87f3f580874887bade2e2b0c7dd'
Mar 17 17:45:26.920402 kernel: Key type .fscrypt registered
Mar 17 17:45:26.920410 kernel: Key type fscrypt-provisioning registered
Mar 17 17:45:26.920419 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 17:45:26.920427 kernel: ima: Allocated hash algorithm: sha1
Mar 17 17:45:26.920435 kernel: ima: No architecture policies found
Mar 17 17:45:26.920443 kernel: clk: Disabling unused clocks
Mar 17 17:45:26.920451 kernel: Freeing unused kernel image (initmem) memory: 43476K
Mar 17 17:45:26.920461 kernel: Write protecting the kernel read-only data: 38912k
Mar 17 17:45:26.920469 kernel: Freeing unused kernel image (rodata/data gap) memory: 1716K
Mar 17 17:45:26.920482 kernel: Run /init as init process
Mar 17 17:45:26.920490 kernel: with arguments:
Mar 17 17:45:26.920498 kernel: /init
Mar 17 17:45:26.920506 kernel: with environment:
Mar 17 17:45:26.920525 kernel: HOME=/
Mar 17 17:45:26.920533 kernel: TERM=linux
Mar 17 17:45:26.920542 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 17:45:26.920552 systemd[1]: Successfully made /usr/ read-only.
Mar 17 17:45:26.920563 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 17:45:26.920575 systemd[1]: Detected virtualization kvm.
Mar 17 17:45:26.920584 systemd[1]: Detected architecture x86-64.
Mar 17 17:45:26.920592 systemd[1]: Running in initrd.
Mar 17 17:45:26.920600 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:45:26.920609 systemd[1]: Hostname set to <linux>.
Mar 17 17:45:26.920618 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:45:26.920626 systemd[1]: Queued start job for default target initrd.target.
Mar 17 17:45:26.920637 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:45:26.920658 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:45:26.920670 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
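The rtc_cmos entry above pairs the wall-clock time with its Unix epoch value (the audit entry earlier, audit(1742233525.626:1), is the same clock roughly one second before). A one-line check of the conversion (illustrative, not part of the log):

from datetime import datetime, timezone

# Epoch seconds printed by rtc_cmos when it set the system clock.
print(datetime.fromtimestamp(1742233526, tz=timezone.utc).isoformat())
# -> 2025-03-17T17:45:26+00:00, exactly as logged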
Mar 17 17:45:26.920678 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:45:26.920687 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:45:26.920712 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:45:26.920723 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:45:26.920731 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:45:26.920740 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:45:26.920749 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:45:26.920758 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:45:26.920766 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:45:26.920775 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:45:26.920786 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:45:26.920795 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:45:26.920804 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:45:26.920813 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:45:26.920822 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 17 17:45:26.920830 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:45:26.920839 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:45:26.920848 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:45:26.920857 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:45:26.920868 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:45:26.920877 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:45:26.920886 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:45:26.920894 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:45:26.920903 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:45:26.920911 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:45:26.920920 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:45:26.920929 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 17:45:26.920940 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:45:26.920952 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 17:45:26.920996 systemd-journald[192]: Collecting audit messages is disabled.
Mar 17 17:45:26.921018 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:45:26.921027 systemd-journald[192]: Journal started
Mar 17 17:45:26.921049 systemd-journald[192]: Runtime Journal (/run/log/journal/250b58e091ec49f2be743761c80878a4) is 6M, max 48.4M, 42.3M free.
Mar 17 17:45:26.911983 systemd-modules-load[195]: Inserted module 'overlay'
Mar 17 17:45:26.949138 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 17:45:26.949159 kernel: Bridge firewalling registered
Mar 17 17:45:26.940080 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 17 17:45:26.964927 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:45:26.965578 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:45:26.968478 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:45:26.971052 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:45:26.988974 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:45:26.992425 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:45:26.995444 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:45:26.999392 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:45:27.007790 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:45:27.011638 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:45:27.013576 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:45:27.016643 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:45:27.032840 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 17:45:27.035553 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:45:27.045269 dracut-cmdline[230]: dracut-dracut-053
Mar 17 17:45:27.048358 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 17:45:27.088408 systemd-resolved[231]: Positive Trust Anchors:
Mar 17 17:45:27.088424 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:45:27.088454 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:45:27.100500 systemd-resolved[231]: Defaulting to hostname 'linux'.
Mar 17 17:45:27.102644 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:45:27.102828 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:45:27.135735 kernel: SCSI subsystem initialized
Mar 17 17:45:27.145724 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 17:45:27.155718 kernel: iscsi: registered transport (tcp)
Mar 17 17:45:27.177725 kernel: iscsi: registered transport (qla4xxx)
Mar 17 17:45:27.177761 kernel: QLogic iSCSI HBA Driver
Mar 17 17:45:27.224760 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:45:27.237880 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 17:45:27.264957 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 17:45:27.265029 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:45:27.266040 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:45:27.309736 kernel: raid6: avx2x4 gen() 23100 MB/s
Mar 17 17:45:27.326751 kernel: raid6: avx2x2 gen() 30304 MB/s
Mar 17 17:45:27.343857 kernel: raid6: avx2x1 gen() 25719 MB/s
Mar 17 17:45:27.343936 kernel: raid6: using algorithm avx2x2 gen() 30304 MB/s
Mar 17 17:45:27.374182 kernel: raid6: .... xor() 19489 MB/s, rmw enabled
Mar 17 17:45:27.374265 kernel: raid6: using avx2x2 recovery algorithm
Mar 17 17:45:27.396776 kernel: xor: automatically using best checksumming function avx
Mar 17 17:45:27.546729 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:45:27.559940 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:45:27.576882 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:45:27.592911 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Mar 17 17:45:27.598717 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:45:27.608882 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:45:27.624083 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation
Mar 17 17:45:27.662534 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:45:27.675856 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:45:27.751285 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:45:27.761962 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:45:27.779532 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:45:27.782275 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:45:27.787093 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:45:27.789619 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:45:27.795123 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 17 17:45:27.806785 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 17 17:45:27.807313 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 17:45:27.807341 kernel: GPT:9289727 != 19775487
Mar 17 17:45:27.807358 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 17:45:27.807373 kernel: GPT:9289727 != 19775487
Mar 17 17:45:27.807386 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 17:45:27.807401 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:45:27.798892 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:45:27.814898 kernel: libata version 3.00 loaded.
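The GPT complaints above are pure geometry: the backup GPT header was written for a smaller image and now sits at LBA 9289727 instead of at the last LBA of the 19775488-sector disk. A small check of the logged numbers (illustrative, not part of the log):

SECTOR = 512
blocks = 19775488                         # virtio_blk: [vda] logical blocks
print(round(blocks * SECTOR / 1e9, 1))    # 10.1 (GB), as logged
print(round(blocks * SECTOR / 2**30, 2))  # 9.43 (GiB), as logged

backup_found = 9289727                    # where the backup header actually is
backup_expected = blocks - 1              # 19775487, where GPT requires it
assert (backup_found, backup_expected) == (9289727, 19775487)

The disk-uuid entries further down show the headers being rewritten, after which the kernel re-reads the vda partition table.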
Mar 17 17:45:27.817714 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 17:45:27.817904 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:45:27.825993 kernel: ahci 0000:00:1f.2: version 3.0
Mar 17 17:45:27.849511 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 17 17:45:27.849536 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 17 17:45:27.849792 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 17 17:45:27.849999 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 17 17:45:27.850015 kernel: AES CTR mode by8 optimization enabled
Mar 17 17:45:27.850029 kernel: scsi host0: ahci
Mar 17 17:45:27.850271 kernel: scsi host1: ahci
Mar 17 17:45:27.850500 kernel: scsi host2: ahci
Mar 17 17:45:27.850736 kernel: scsi host3: ahci
Mar 17 17:45:27.850953 kernel: scsi host4: ahci
Mar 17 17:45:27.851167 kernel: scsi host5: ahci
Mar 17 17:45:27.851430 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 31
Mar 17 17:45:27.851451 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 31
Mar 17 17:45:27.851468 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 31
Mar 17 17:45:27.851482 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 31
Mar 17 17:45:27.851510 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 31
Mar 17 17:45:27.851524 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 31
Mar 17 17:45:27.857091 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:45:27.863386 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (474)
Mar 17 17:45:27.857174 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:45:27.859389 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:45:27.869559 kernel: BTRFS: device fsid 16b3954e-2e86-4c7f-a948-d3d3817b1bdc devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (473)
Mar 17 17:45:27.865819 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:45:27.865908 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:45:27.870874 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:45:27.878839 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:45:27.902882 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 17 17:45:27.926852 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:45:27.938387 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 17 17:45:27.955810 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 17 17:45:27.965299 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 17 17:45:27.968660 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 17 17:45:27.985031 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:45:27.987785 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:45:28.000189 disk-uuid[557]: Primary Header is updated.
Mar 17 17:45:28.000189 disk-uuid[557]: Secondary Entries is updated.
Mar 17 17:45:28.000189 disk-uuid[557]: Secondary Header is updated.
Mar 17 17:45:28.004733 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:45:28.007908 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:45:28.011880 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:45:28.162738 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 17 17:45:28.162828 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 17 17:45:28.163759 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 17 17:45:28.164731 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 17 17:45:28.165742 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 17 17:45:28.165773 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 17 17:45:28.166734 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 17 17:45:28.167737 kernel: ata3.00: applying bridge limits
Mar 17 17:45:28.168728 kernel: ata3.00: configured for UDMA/100
Mar 17 17:45:28.170732 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 17 17:45:28.217772 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 17 17:45:28.230771 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 17 17:45:28.230802 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 17 17:45:29.010768 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:45:29.011839 disk-uuid[563]: The operation has completed successfully.
Mar 17 17:45:29.048414 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:45:29.048587 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:45:29.107041 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:45:29.111624 sh[594]: Success
Mar 17 17:45:29.126806 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 17 17:45:29.171773 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:45:29.187058 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:45:29.189953 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:45:29.204679 kernel: BTRFS info (device dm-0): first mount of filesystem 16b3954e-2e86-4c7f-a948-d3d3817b1bdc
Mar 17 17:45:29.204767 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:45:29.204799 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:45:29.205737 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:45:29.207162 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:45:29.212268 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:45:29.214890 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:45:29.227965 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:45:29.230960 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:45:29.243907 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:45:29.243946 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:45:29.244078 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:45:29.247733 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:45:29.259603 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 17:45:29.261418 kernel: BTRFS info (device vda6): last unmount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:45:29.272539 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:45:29.281970 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:45:29.341594 ignition[695]: Ignition 2.20.0
Mar 17 17:45:29.341608 ignition[695]: Stage: fetch-offline
Mar 17 17:45:29.341648 ignition[695]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:45:29.341659 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:45:29.341804 ignition[695]: parsed url from cmdline: ""
Mar 17 17:45:29.341808 ignition[695]: no config URL provided
Mar 17 17:45:29.341813 ignition[695]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:45:29.341823 ignition[695]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:45:29.341849 ignition[695]: op(1): [started] loading QEMU firmware config module
Mar 17 17:45:29.341855 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 17 17:45:29.352438 ignition[695]: op(1): [finished] loading QEMU firmware config module
Mar 17 17:45:29.369233 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:45:29.380886 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:45:29.399270 ignition[695]: parsing config with SHA512: 0ae058c2013f26f782609e2b8009bc663eff4dd7629f2dce93d99a74e70b9a981f695b43b4e2506420839a04d2720552c6ac085700236c75717b2f3cf5eb55a3
Mar 17 17:45:29.403974 unknown[695]: fetched base config from "system"
Mar 17 17:45:29.404210 unknown[695]: fetched user config from "qemu"
Mar 17 17:45:29.404798 ignition[695]: fetch-offline: fetch-offline passed
Mar 17 17:45:29.404894 ignition[695]: Ignition finished successfully
Mar 17 17:45:29.407077 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:45:29.424806 systemd-networkd[784]: lo: Link UP
Mar 17 17:45:29.424818 systemd-networkd[784]: lo: Gained carrier
Mar 17 17:45:29.433223 systemd-networkd[784]: Enumeration completed
Mar 17 17:45:29.433495 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:45:29.436189 systemd[1]: Reached target network.target - Network.
Mar 17 17:45:29.436285 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 17 17:45:29.437899 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:45:29.439671 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:45:29.441889 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
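Ignition logs a SHA512 digest for the config it is about to parse. How such a digest line can be reproduced for an arbitrary config blob (illustrative sketch, not part of the log; the bytes below are a placeholder, since the actual config contents are not shown in the log):

import hashlib

config = b'{"ignition": {"version": "3.4.0"}}'  # placeholder config bytes
print("parsing config with SHA512:", hashlib.sha512(config).hexdigest())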
Mar 17 17:45:29.464489 systemd-networkd[784]: eth0: Link UP
Mar 17 17:45:29.464501 systemd-networkd[784]: eth0: Gained carrier
Mar 17 17:45:29.464512 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:45:29.479848 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 17:45:29.482940 ignition[789]: Ignition 2.20.0
Mar 17 17:45:29.482952 ignition[789]: Stage: kargs
Mar 17 17:45:29.483134 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:45:29.483146 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:45:29.487224 ignition[789]: kargs: kargs passed
Mar 17 17:45:29.487278 ignition[789]: Ignition finished successfully
Mar 17 17:45:29.491976 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:45:29.500988 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:45:29.515798 ignition[799]: Ignition 2.20.0
Mar 17 17:45:29.515811 ignition[799]: Stage: disks
Mar 17 17:45:29.515982 ignition[799]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:45:29.515994 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:45:29.530216 ignition[799]: disks: disks passed
Mar 17 17:45:29.530283 ignition[799]: Ignition finished successfully
Mar 17 17:45:29.533686 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:45:29.534037 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:45:29.535744 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:45:29.537833 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:45:29.541180 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:45:29.543175 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:45:29.554951 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:45:29.568844 systemd-resolved[231]: Detected conflict on linux IN A 10.0.0.15
Mar 17 17:45:29.568861 systemd-resolved[231]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
Mar 17 17:45:29.569951 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 17 17:45:29.622411 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:45:30.211836 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:45:30.304726 kernel: EXT4-fs (vda9): mounted filesystem 21764504-a65e-45eb-84e1-376b55b62aba r/w with ordered data mode. Quota mode: none.
Mar 17 17:45:30.305685 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:45:30.306365 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:45:30.318805 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:45:30.320824 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:45:30.321236 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 17 17:45:30.321291 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:45:30.334191 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (818)
Mar 17 17:45:30.334212 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:45:30.334224 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:45:30.334235 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:45:30.334246 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:45:30.321322 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:45:30.329185 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:45:30.335321 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:45:30.338241 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:45:30.374083 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:45:30.383608 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:45:30.389148 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:45:30.393246 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:45:30.487161 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:45:30.499912 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:45:30.502349 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:45:30.509716 kernel: BTRFS info (device vda6): last unmount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:45:30.528457 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:45:30.532541 ignition[930]: INFO : Ignition 2.20.0
Mar 17 17:45:30.532541 ignition[930]: INFO : Stage: mount
Mar 17 17:45:30.534302 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:45:30.534302 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:45:30.534302 ignition[930]: INFO : mount: mount passed
Mar 17 17:45:30.534302 ignition[930]: INFO : Ignition finished successfully
Mar 17 17:45:30.540038 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:45:30.551812 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:45:31.204376 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:45:31.216991 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:45:31.228742 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (944)
Mar 17 17:45:31.233356 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:45:31.233400 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:45:31.233422 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:45:31.236731 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:45:31.238786 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:45:31.269670 ignition[961]: INFO : Ignition 2.20.0
Mar 17 17:45:31.269670 ignition[961]: INFO : Stage: files
Mar 17 17:45:31.271821 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:45:31.271821 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:45:31.271821 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:45:31.275892 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:45:31.275892 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:45:31.275892 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:45:31.275892 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:45:31.281823 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:45:31.276258 unknown[961]: wrote ssh authorized keys file for user: core
Mar 17 17:45:31.284400 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Mar 17 17:45:31.284400 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Mar 17 17:45:31.284410 systemd-networkd[784]: eth0: Gained IPv6LL
Mar 17 17:45:31.364689 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 17:45:31.504604 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Mar 17 17:45:31.504604 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:45:31.508680 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 17 17:45:31.971989 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 17:45:32.055373 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:45:32.055373 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:45:32.059433 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:45:32.059433 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:45:32.059433 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:45:32.059433 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:45:32.059433 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:45:32.059433 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:45:32.059433 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:45:32.059433 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:45:32.059433 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:45:32.059433 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 17 17:45:32.059433 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 17 17:45:32.059433 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 17 17:45:32.059433 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Mar 17 17:45:32.454131 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 17 17:45:32.964849 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 17 17:45:32.964849 ignition[961]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 17 17:45:32.968753 ignition[961]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:45:32.968753 ignition[961]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:45:32.968753 ignition[961]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 17 17:45:32.968753 ignition[961]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 17 17:45:32.968753 ignition[961]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 17:45:32.968753 ignition[961]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 17:45:32.968753 ignition[961]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 17 17:45:32.968753 ignition[961]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 17 17:45:32.994202 ignition[961]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 17:45:32.998768 ignition[961]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 17:45:33.000389 ignition[961]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 17 17:45:33.000389 ignition[961]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:45:33.000389 ignition[961]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:45:33.000389 ignition[961]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:45:33.000389 ignition[961]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:45:33.000389 ignition[961]: INFO : files: files passed
Mar 17 17:45:33.000389 ignition[961]: INFO : Ignition finished successfully
Mar 17 17:45:33.002180 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:45:33.011869 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:45:33.013720 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:45:33.017304 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:45:33.017465 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:45:33.024578 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 17 17:45:33.027905 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:45:33.027905 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:45:33.031269 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:45:33.035275 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:45:33.038290 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:45:33.046001 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:45:33.072426 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:45:33.072575 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:45:33.075101 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:45:33.077242 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:45:33.079314 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:45:33.091038 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:45:33.104841 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:45:33.115891 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 17:45:33.128936 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:45:33.131716 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:45:33.134347 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 17:45:33.136517 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 17:45:33.137758 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:45:33.140657 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 17:45:33.142969 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 17:45:33.145089 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 17:45:33.147664 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:45:33.150356 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 17:45:33.152986 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 17:45:33.155743 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:45:33.158801 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 17 17:45:33.161193 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 17 17:45:33.163535 systemd[1]: Stopped target swap.target - Swaps.
Mar 17 17:45:33.165415 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 17:45:33.166665 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:45:33.169252 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:45:33.171635 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:45:33.174268 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 17 17:45:33.175521 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:45:33.178479 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 17:45:33.179703 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:45:33.182273 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 17:45:33.183486 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:45:33.186241 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 17:45:33.188166 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 17:45:33.189545 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:45:33.192349 systemd[1]: Stopped target slices.target - Slice Units.
Mar 17 17:45:33.194401 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 17 17:45:33.196295 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 17:45:33.197264 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:45:33.199264 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 17:45:33.200264 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:45:33.202490 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 17:45:33.203882 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:45:33.206528 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 17:45:33.207575 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 17:45:33.221886 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 17:45:33.224778 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 17:45:33.227045 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 17:45:33.228470 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:45:33.231401 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 17:45:33.231606 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:45:33.235977 ignition[1015]: INFO : Ignition 2.20.0
Mar 17 17:45:33.235977 ignition[1015]: INFO : Stage: umount
Mar 17 17:45:33.235977 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:45:33.235977 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:45:33.240785 ignition[1015]: INFO : umount: umount passed
Mar 17 17:45:33.240785 ignition[1015]: INFO : Ignition finished successfully
Mar 17 17:45:33.244787 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 17:45:33.245965 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 17:45:33.251776 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 17:45:33.253038 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 17:45:33.256959 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 17:45:33.258681 systemd[1]: Stopped target network.target - Network.
Mar 17 17:45:33.260415 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 17:45:33.261343 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 17:45:33.263754 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 17:45:33.264688 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 17:45:33.267030 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 17:45:33.267957 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 17 17:45:33.270046 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 17 17:45:33.270104 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 17 17:45:33.273447 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 17 17:45:33.275635 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 17 17:45:33.284645 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 17:45:33.284834 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 17 17:45:33.289867 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 17 17:45:33.290210 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 17:45:33.290348 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 17 17:45:33.294358 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 17 17:45:33.295219 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 17:45:33.295290 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:45:33.307793 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 17 17:45:33.309719 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 17:45:33.309782 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:45:33.312027 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:45:33.312080 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:45:33.315422 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 17:45:33.315473 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:45:33.317502 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 17 17:45:33.317553 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:45:33.319799 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:45:33.320986 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 17:45:33.321066 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:45:33.339954 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 17:45:33.340143 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:45:33.342570 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 17:45:33.342636 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:45:33.344637 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 17:45:33.344707 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:45:33.345599 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 17:45:33.345671 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:45:33.350549 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 17:45:33.350604 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:45:33.352026 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:45:33.352079 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:45:33.366849 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 17 17:45:33.367964 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 17:45:33.368034 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:45:33.371473 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 17 17:45:33.371538 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:45:33.373986 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 17:45:33.374047 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:45:33.376314 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:45:33.376385 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:45:33.379940 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 17 17:45:33.380015 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:45:33.380444 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 17:45:33.380568 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 17 17:45:33.382268 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 17:45:33.382385 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 17 17:45:33.579792 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 17:45:33.579959 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 17 17:45:33.582718 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 17 17:45:33.584753 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 17:45:33.584828 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 17 17:45:33.595858 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 17 17:45:33.606679 systemd[1]: Switching root.
Mar 17 17:45:33.643658 systemd-journald[192]: Journal stopped
Mar 17 17:45:35.221084 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Mar 17 17:45:35.221157 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 17:45:35.221177 kernel: SELinux: policy capability open_perms=1
Mar 17 17:45:35.221192 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 17:45:35.221206 kernel: SELinux: policy capability always_check_network=0
Mar 17 17:45:35.221218 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 17:45:35.221234 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 17:45:35.221246 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 17:45:35.221258 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 17:45:35.221270 kernel: audit: type=1403 audit(1742233534.205:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 17:45:35.221299 systemd[1]: Successfully loaded SELinux policy in 43.559ms.
Mar 17 17:45:35.221321 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 17.753ms.
Mar 17 17:45:35.221340 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 17:45:35.221353 systemd[1]: Detected virtualization kvm.
Mar 17 17:45:35.221366 systemd[1]: Detected architecture x86-64.
Mar 17 17:45:35.221379 systemd[1]: Detected first boot.
Mar 17 17:45:35.221392 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:45:35.221405 zram_generator::config[1063]: No configuration found.
Mar 17 17:45:35.221423 kernel: Guest personality initialized and is inactive
Mar 17 17:45:35.221435 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Mar 17 17:45:35.221449 kernel: Initialized host personality
Mar 17 17:45:35.221461 kernel: NET: Registered PF_VSOCK protocol family
Mar 17 17:45:35.221473 systemd[1]: Populated /etc with preset unit settings.
Mar 17 17:45:35.221486 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 17 17:45:35.221499 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 17:45:35.221512 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 17 17:45:35.221525 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:45:35.221538 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 17 17:45:35.221551 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 17 17:45:35.221566 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 17 17:45:35.221579 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 17 17:45:35.221604 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 17 17:45:35.221617 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 17 17:45:35.221630 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 17 17:45:35.221643 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 17 17:45:35.221656 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:45:35.221669 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:45:35.221681 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 17 17:45:35.221710 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 17 17:45:35.221724 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 17 17:45:35.221737 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:45:35.221750 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 17 17:45:35.221763 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:45:35.221775 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 17 17:45:35.221788 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 17 17:45:35.221804 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:45:35.221817 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 17 17:45:35.221830 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:45:35.221848 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:45:35.221860 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:45:35.221873 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:45:35.221886 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 17 17:45:35.221900 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 17 17:45:35.221913 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 17 17:45:35.221929 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:45:35.221941 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:45:35.221955 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:45:35.221967 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 17 17:45:35.221980 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 17 17:45:35.221997 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 17 17:45:35.222010 systemd[1]: Mounting media.mount - External Media Directory...
Mar 17 17:45:35.222023 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:45:35.222036 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 17 17:45:35.222052 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 17 17:45:35.222065 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 17 17:45:35.222080 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 17:45:35.222096 systemd[1]: Reached target machines.target - Containers.
Mar 17 17:45:35.222112 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 17 17:45:35.222128 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:45:35.222144 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:45:35.222159 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 17 17:45:35.222175 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:45:35.222194 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:45:35.222211 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:45:35.222224 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 17 17:45:35.222237 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:45:35.222250 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 17:45:35.222262 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 17:45:35.222284 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 17 17:45:35.222298 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 17:45:35.222313 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 17:45:35.222326 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:45:35.222339 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:45:35.222351 kernel: loop: module loaded
Mar 17 17:45:35.222363 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:45:35.222375 kernel: fuse: init (API version 7.39)
Mar 17 17:45:35.222387 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 17 17:45:35.222400 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 17 17:45:35.222412 kernel: ACPI: bus type drm_connector registered
Mar 17 17:45:35.222427 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 17 17:45:35.222440 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:45:35.222452 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 17 17:45:35.222465 systemd[1]: Stopped verity-setup.service.
Mar 17 17:45:35.222478 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:45:35.222495 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 17 17:45:35.222507 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 17 17:45:35.222520 systemd[1]: Mounted media.mount - External Media Directory.
Mar 17 17:45:35.222533 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 17 17:45:35.222564 systemd-journald[1135]: Collecting audit messages is disabled.
Mar 17 17:45:35.222588 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 17 17:45:35.223073 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 17 17:45:35.223095 systemd-journald[1135]: Journal started
Mar 17 17:45:35.223121 systemd-journald[1135]: Runtime Journal (/run/log/journal/250b58e091ec49f2be743761c80878a4) is 6M, max 48.4M, 42.3M free.
Mar 17 17:45:34.865936 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 17:45:34.878236 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 17 17:45:34.878935 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 17 17:45:35.230369 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:45:35.231680 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 17 17:45:35.233761 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:45:35.235901 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 17:45:35.236226 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 17 17:45:35.238174 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:45:35.238482 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:45:35.240384 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:45:35.241523 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:45:35.243266 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:45:35.243518 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:45:35.245127 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 17:45:35.245378 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 17 17:45:35.246845 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:45:35.247062 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:45:35.248874 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:45:35.250820 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 17 17:45:35.252603 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 17 17:45:35.254356 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 17 17:45:35.272188 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 17 17:45:35.278786 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 17 17:45:35.281441 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 17 17:45:35.282764 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 17:45:35.282799 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:45:35.284908 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 17 17:45:35.287360 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 17 17:45:35.292039 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 17 17:45:35.293405 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:45:35.295642 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 17 17:45:35.300498 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 17 17:45:35.301775 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:45:35.303359 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 17 17:45:35.304682 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:45:35.308461 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:45:35.315087 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 17 17:45:35.318882 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:45:35.328193 systemd-journald[1135]: Time spent on flushing to /var/log/journal/250b58e091ec49f2be743761c80878a4 is 35.816ms for 974 entries.
Mar 17 17:45:35.328193 systemd-journald[1135]: System Journal (/var/log/journal/250b58e091ec49f2be743761c80878a4) is 8M, max 195.6M, 187.6M free.
Mar 17 17:45:35.390446 systemd-journald[1135]: Received client request to flush runtime journal.
Mar 17 17:45:35.390509 kernel: loop0: detected capacity change from 0 to 218376
Mar 17 17:45:35.339738 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 17 17:45:35.345279 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 17 17:45:35.347410 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 17 17:45:35.349558 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 17 17:45:35.354686 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 17 17:45:35.373061 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 17 17:45:35.380164 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:45:35.382201 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:45:35.392125 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 17 17:45:35.393304 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Mar 17 17:45:35.393322 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Mar 17 17:45:35.394302 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 17 17:45:35.400808 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:45:35.416046 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 17 17:45:35.420459 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 17 17:45:35.421732 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 17:45:35.425844 udevadm[1197]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 17 17:45:35.454718 kernel: loop1: detected capacity change from 0 to 138176
Mar 17 17:45:35.455728 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 17 17:45:35.464868 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:45:35.489895 systemd-tmpfiles[1206]: ACLs are not supported, ignoring.
Mar 17 17:45:35.489925 systemd-tmpfiles[1206]: ACLs are not supported, ignoring.
Mar 17 17:45:35.498061 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:45:35.531730 kernel: loop2: detected capacity change from 0 to 147912
Mar 17 17:45:35.576736 kernel: loop3: detected capacity change from 0 to 218376
Mar 17 17:45:35.595748 kernel: loop4: detected capacity change from 0 to 138176
Mar 17 17:45:35.622721 kernel: loop5: detected capacity change from 0 to 147912
Mar 17 17:45:35.642798 (sd-merge)[1212]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 17 17:45:35.643561 (sd-merge)[1212]: Merged extensions into '/usr'.
Mar 17 17:45:35.648658 systemd[1]: Reload requested from client PID 1184 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 17 17:45:35.648829 systemd[1]: Reloading...
Mar 17 17:45:35.751810 zram_generator::config[1243]: No configuration found.
Mar 17 17:45:35.781782 ldconfig[1179]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 17:45:35.906725 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:45:36.000655 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 17:45:36.001512 systemd[1]: Reloading finished in 352 ms.
Mar 17 17:45:36.023148 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 17 17:45:36.025206 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 17 17:45:36.046578 systemd[1]: Starting ensure-sysext.service...
Mar 17 17:45:36.048928 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:45:36.061169 systemd[1]: Reload requested from client PID 1277 ('systemctl') (unit ensure-sysext.service)...
Mar 17 17:45:36.061185 systemd[1]: Reloading...
Mar 17 17:45:36.081562 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 17:45:36.082442 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 17 17:45:36.083955 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 17:45:36.084464 systemd-tmpfiles[1278]: ACLs are not supported, ignoring.
Mar 17 17:45:36.084633 systemd-tmpfiles[1278]: ACLs are not supported, ignoring.
Mar 17 17:45:36.089752 systemd-tmpfiles[1278]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:45:36.089851 systemd-tmpfiles[1278]: Skipping /boot
Mar 17 17:45:36.110130 systemd-tmpfiles[1278]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:45:36.110150 systemd-tmpfiles[1278]: Skipping /boot
Mar 17 17:45:36.128772 zram_generator::config[1307]: No configuration found.
Mar 17 17:45:36.257670 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:45:36.325594 systemd[1]: Reloading finished in 264 ms.
Mar 17 17:45:36.338971 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 17 17:45:36.360738 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:45:36.390091 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:45:36.393116 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 17 17:45:36.395670 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 17 17:45:36.403785 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:45:36.407329 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:45:36.411159 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 17 17:45:36.417539 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:45:36.417800 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:45:36.419987 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:45:36.425047 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:45:36.430034 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:45:36.431758 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:45:36.431915 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:45:36.444965 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 17 17:45:36.446544 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:45:36.448972 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 17 17:45:36.451553 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:45:36.452601 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:45:36.453251 augenrules[1375]: No rules
Mar 17 17:45:36.455223 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:45:36.455602 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:45:36.457897 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:45:36.458258 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:45:36.458609 systemd-udevd[1356]: Using default interface naming scheme 'v255'.
Mar 17 17:45:36.461682 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:45:36.462014 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:45:36.477255 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 17 17:45:36.489820 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:45:36.491840 systemd[1]: Finished ensure-sysext.service.
Mar 17 17:45:36.493176 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 17 17:45:36.497129 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:45:36.503916 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:45:36.507178 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:45:36.511814 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:45:36.516368 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:45:36.522245 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:45:36.526325 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:45:36.527907 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:45:36.527960 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:45:36.531918 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:45:36.539219 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 17 17:45:36.544575 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 17 17:45:36.545987 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 17:45:36.546030 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:45:36.548581 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 17 17:45:36.552819 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:45:36.553135 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:45:36.555298 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:45:36.555557 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:45:36.557171 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:45:36.557445 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:45:36.561127 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:45:36.561445 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:45:36.565719 augenrules[1395]: /sbin/augenrules: No change
Mar 17 17:45:36.588784 augenrules[1441]: No rules
Mar 17 17:45:36.590807 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:45:36.591165 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:45:36.594071 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:45:36.594171 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:45:36.610301 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 17 17:45:36.632211 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 17 17:45:36.644721 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1405)
Mar 17 17:45:36.655103 systemd-resolved[1351]: Positive Trust Anchors:
Mar 17 17:45:36.655541 systemd-resolved[1351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:45:36.655649 systemd-resolved[1351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:45:36.660677 systemd-resolved[1351]: Defaulting to hostname 'linux'.
Mar 17 17:45:36.668731 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:45:36.673788 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:45:36.705718 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 17 17:45:36.709520 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 17 17:45:36.711312 systemd[1]: Reached target time-set.target - System Time Set.
Mar 17 17:45:36.721294 systemd-networkd[1422]: lo: Link UP
Mar 17 17:45:36.721307 systemd-networkd[1422]: lo: Gained carrier
Mar 17 17:45:36.732535 systemd-networkd[1422]: Enumeration completed
Mar 17 17:45:36.733785 systemd-networkd[1422]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:45:36.733795 systemd-networkd[1422]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:45:36.734844 kernel: ACPI: button: Power Button [PWRF]
Mar 17 17:45:36.736375 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 17 17:45:36.736507 systemd-networkd[1422]: eth0: Link UP
Mar 17 17:45:36.736514 systemd-networkd[1422]: eth0: Gained carrier
Mar 17 17:45:36.736532 systemd-networkd[1422]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:45:36.736716 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 17 17:45:36.745016 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 17 17:45:36.745275 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 17 17:45:36.738960 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:45:36.753757 systemd-networkd[1422]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 17:45:36.755551 systemd-timesyncd[1424]: Network configuration changed, trying to establish connection.
Mar 17 17:45:36.756403 systemd[1]: Reached target network.target - Network.
Mar 17 17:45:36.757619 systemd-timesyncd[1424]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 17 17:45:36.757674 systemd-timesyncd[1424]: Initial clock synchronization to Mon 2025-03-17 17:45:36.968767 UTC.
Mar 17 17:45:36.765997 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 17 17:45:36.778971 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Mar 17 17:45:36.774462 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 17 17:45:36.778466 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 17 17:45:36.803305 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 17 17:45:36.814772 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 17 17:45:36.856912 kernel: mousedev: PS/2 mouse device common for all mice
Mar 17 17:45:36.857410 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:45:36.895118 kernel: kvm_amd: TSC scaling supported
Mar 17 17:45:36.895186 kernel: kvm_amd: Nested Virtualization enabled
Mar 17 17:45:36.895201 kernel: kvm_amd: Nested Paging enabled
Mar 17 17:45:36.896191 kernel: kvm_amd: LBR virtualization supported
Mar 17 17:45:36.896216 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 17 17:45:36.896844 kernel: kvm_amd: Virtual GIF supported
Mar 17 17:45:36.916787 kernel: EDAC MC: Ver: 3.0.0
Mar 17 17:45:36.943541 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 17 17:45:36.981931 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 17 17:45:36.983916 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:45:36.990900 lvm[1474]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:45:37.027111 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 17 17:45:37.028823 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:45:37.030093 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:45:37.031326 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 17 17:45:37.032623 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 17 17:45:37.034106 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 17 17:45:37.035372 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 17 17:45:37.036655 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 17 17:45:37.037984 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 17:45:37.038017 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:45:37.038965 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:45:37.040865 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 17 17:45:37.043734 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 17 17:45:37.047565 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 17 17:45:37.049124 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 17 17:45:37.050481 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 17 17:45:37.059426 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 17 17:45:37.061644 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 17 17:45:37.064849 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 17 17:45:37.066906 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 17 17:45:37.068491 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:45:37.069742 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:45:37.069868 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:45:37.069898 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:45:37.071211 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 17 17:45:37.073734 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 17 17:45:37.078743 lvm[1479]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:45:37.079196 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 17 17:45:37.088014 jq[1482]: false
Mar 17 17:45:37.088925 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 17 17:45:37.090640 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 17 17:45:37.094208 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 17 17:45:37.097935 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 17 17:45:37.102942 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 17 17:45:37.105557 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 17 17:45:37.112781 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 17 17:45:37.114837 extend-filesystems[1483]: Found loop3
Mar 17 17:45:37.114837 extend-filesystems[1483]: Found loop4
Mar 17 17:45:37.114837 extend-filesystems[1483]: Found loop5
Mar 17 17:45:37.114837 extend-filesystems[1483]: Found sr0
Mar 17 17:45:37.114837 extend-filesystems[1483]: Found vda
Mar 17 17:45:37.114837 extend-filesystems[1483]: Found vda1
Mar 17 17:45:37.114837 extend-filesystems[1483]: Found vda2
Mar 17 17:45:37.119590 extend-filesystems[1483]: Found vda3
Mar 17 17:45:37.119590 extend-filesystems[1483]: Found usr
Mar 17 17:45:37.119590 extend-filesystems[1483]: Found vda4
Mar 17 17:45:37.119590 extend-filesystems[1483]: Found vda6
Mar 17 17:45:37.119590 extend-filesystems[1483]: Found vda7
Mar 17 17:45:37.119590 extend-filesystems[1483]: Found vda9
Mar 17 17:45:37.119590 extend-filesystems[1483]: Checking size of /dev/vda9
Mar 17 17:45:37.115159 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 17:45:37.121911 dbus-daemon[1481]: [system] SELinux support is enabled
Mar 17 17:45:37.115826 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 17:45:37.124626 systemd[1]: Starting update-engine.service - Update Engine...
Mar 17 17:45:37.130090 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 17 17:45:37.132937 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 17 17:45:37.140465 extend-filesystems[1483]: Resized partition /dev/vda9 Mar 17 17:45:37.147206 extend-filesystems[1504]: resize2fs 1.47.1 (20-May-2024) Mar 17 17:45:37.143478 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 17:45:37.148043 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 17:45:37.154488 jq[1498]: true Mar 17 17:45:37.148392 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 17:45:37.156811 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 17 17:45:37.148939 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 17:45:37.149276 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 17 17:45:37.160546 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 17:45:37.160934 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 17 17:45:37.173652 update_engine[1497]: I20250317 17:45:37.173540 1497 main.cc:92] Flatcar Update Engine starting Mar 17 17:45:37.188809 update_engine[1497]: I20250317 17:45:37.187243 1497 update_check_scheduler.cc:74] Next update check in 8m34s Mar 17 17:45:37.189553 (ntainerd)[1511]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 17:45:37.190817 jq[1507]: true Mar 17 17:45:37.200766 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1423) Mar 17 17:45:37.230139 tar[1506]: linux-amd64/LICENSE Mar 17 17:45:37.234434 tar[1506]: linux-amd64/helm Mar 17 17:45:37.233985 systemd-logind[1491]: Watching system buttons on /dev/input/event1 (Power Button) Mar 17 17:45:37.237395 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 17 17:45:37.234018 systemd-logind[1491]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 17 17:45:37.234600 systemd-logind[1491]: New seat seat0. Mar 17 17:45:37.246658 systemd[1]: Started systemd-logind.service - User Login Management. Mar 17 17:45:37.251396 systemd[1]: Started update-engine.service - Update Engine. Mar 17 17:45:37.274225 extend-filesystems[1504]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 17:45:37.274225 extend-filesystems[1504]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 17:45:37.274225 extend-filesystems[1504]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 17 17:45:37.259038 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 17:45:37.281196 extend-filesystems[1483]: Resized filesystem in /dev/vda9 Mar 17 17:45:37.259233 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 17 17:45:37.261067 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 17:45:37.261176 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 17 17:45:37.274190 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 17 17:45:37.285509 systemd[1]: extend-filesystems.service: Deactivated successfully. 
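The resize2fs numbers above are easier to read in bytes: the EXT4 filesystem on /dev/vda9 grows from 553472 to 1864699 blocks of 4 KiB each. A quick worked conversion (not part of the log; Python used purely for illustration):

    # Convert the block counts reported by resize2fs into human-readable sizes.
    BLOCK = 4096  # the log says the filesystem uses "(4k) blocks"
    for label, blocks in (("before", 553472), ("after", 1864699)):
        print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
    # before: 2.11 GiB
    # after: 7.11 GiB

So the root filesystem is grown on-line from about 2.1 GiB to about 7.1 GiB, filling the enlarged /dev/vda9 partition.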
Mar 17 17:45:37.286416 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 17:45:37.291891 bash[1534]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:45:37.292797 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 17:45:37.297329 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 17 17:45:37.321618 locksmithd[1535]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 17:45:37.504613 containerd[1511]: time="2025-03-17T17:45:37.504506483Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 17:45:37.537969 containerd[1511]: time="2025-03-17T17:45:37.537817188Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:45:37.539992 containerd[1511]: time="2025-03-17T17:45:37.539948636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:45:37.539992 containerd[1511]: time="2025-03-17T17:45:37.539979399Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 17:45:37.540092 containerd[1511]: time="2025-03-17T17:45:37.539999464Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 17:45:37.540267 containerd[1511]: time="2025-03-17T17:45:37.540234082Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 17 17:45:37.540267 containerd[1511]: time="2025-03-17T17:45:37.540259587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 17:45:37.540386 containerd[1511]: time="2025-03-17T17:45:37.540360409Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:45:37.540431 containerd[1511]: time="2025-03-17T17:45:37.540384918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:45:37.540734 containerd[1511]: time="2025-03-17T17:45:37.540688002Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:45:37.540734 containerd[1511]: time="2025-03-17T17:45:37.540710876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 17:45:37.540814 containerd[1511]: time="2025-03-17T17:45:37.540765395Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:45:37.540814 containerd[1511]: time="2025-03-17T17:45:37.540781203Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 17:45:37.540960 containerd[1511]: time="2025-03-17T17:45:37.540927885Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Mar 17 17:45:37.541279 containerd[1511]: time="2025-03-17T17:45:37.541243825Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:45:37.542533 containerd[1511]: time="2025-03-17T17:45:37.542486633Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:45:37.542533 containerd[1511]: time="2025-03-17T17:45:37.542516458Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 17:45:37.542685 containerd[1511]: time="2025-03-17T17:45:37.542655685Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 17:45:37.542801 containerd[1511]: time="2025-03-17T17:45:37.542772592Z" level=info msg="metadata content store policy set" policy=shared Mar 17 17:45:37.548730 containerd[1511]: time="2025-03-17T17:45:37.548665498Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 17:45:37.548798 containerd[1511]: time="2025-03-17T17:45:37.548738715Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 17:45:37.548798 containerd[1511]: time="2025-03-17T17:45:37.548761517Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 17 17:45:37.548798 containerd[1511]: time="2025-03-17T17:45:37.548782622Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 17 17:45:37.548897 containerd[1511]: time="2025-03-17T17:45:37.548800455Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 17:45:37.548999 containerd[1511]: time="2025-03-17T17:45:37.548974845Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 17:45:37.549273 containerd[1511]: time="2025-03-17T17:45:37.549249316Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 17:45:37.549434 containerd[1511]: time="2025-03-17T17:45:37.549392502Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 17 17:45:37.549434 containerd[1511]: time="2025-03-17T17:45:37.549431172Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 17 17:45:37.549547 containerd[1511]: time="2025-03-17T17:45:37.549450056Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 17 17:45:37.549547 containerd[1511]: time="2025-03-17T17:45:37.549467776Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 17:45:37.549547 containerd[1511]: time="2025-03-17T17:45:37.549484633Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 17:45:37.549547 containerd[1511]: time="2025-03-17T17:45:37.549515600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Mar 17 17:45:37.549547 containerd[1511]: time="2025-03-17T17:45:37.549538752Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 17:45:37.549707 containerd[1511]: time="2025-03-17T17:45:37.549557285Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 17:45:37.549707 containerd[1511]: time="2025-03-17T17:45:37.549574584Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 17:45:37.549707 containerd[1511]: time="2025-03-17T17:45:37.549590062Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 17:45:37.549707 containerd[1511]: time="2025-03-17T17:45:37.549605048Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 17:45:37.549707 containerd[1511]: time="2025-03-17T17:45:37.549630750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 17:45:37.549707 containerd[1511]: time="2025-03-17T17:45:37.549648183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 17:45:37.549707 containerd[1511]: time="2025-03-17T17:45:37.549665069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 17:45:37.549707 containerd[1511]: time="2025-03-17T17:45:37.549682369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 17:45:37.549707 containerd[1511]: time="2025-03-17T17:45:37.549698547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 17:45:37.549980 containerd[1511]: time="2025-03-17T17:45:37.549733453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 17:45:37.549980 containerd[1511]: time="2025-03-17T17:45:37.549753241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 17:45:37.549980 containerd[1511]: time="2025-03-17T17:45:37.549771744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 17:45:37.549980 containerd[1511]: time="2025-03-17T17:45:37.549790226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 17 17:45:37.549980 containerd[1511]: time="2025-03-17T17:45:37.549811649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 17:45:37.549980 containerd[1511]: time="2025-03-17T17:45:37.549831324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 17:45:37.549980 containerd[1511]: time="2025-03-17T17:45:37.549849652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 17:45:37.549980 containerd[1511]: time="2025-03-17T17:45:37.549869152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 17:45:37.549980 containerd[1511]: time="2025-03-17T17:45:37.549896232Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Mar 17 17:45:37.549980 containerd[1511]: time="2025-03-17T17:45:37.549922695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 17:45:37.549980 containerd[1511]: time="2025-03-17T17:45:37.549941011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 17:45:37.549980 containerd[1511]: time="2025-03-17T17:45:37.549957909Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 17:45:37.551056 containerd[1511]: time="2025-03-17T17:45:37.550878525Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 17:45:37.551056 containerd[1511]: time="2025-03-17T17:45:37.550909586Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 17:45:37.551056 containerd[1511]: time="2025-03-17T17:45:37.551001850Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 17:45:37.551056 containerd[1511]: time="2025-03-17T17:45:37.551020497Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 17:45:37.551056 containerd[1511]: time="2025-03-17T17:45:37.551033229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 17:45:37.551056 containerd[1511]: time="2025-03-17T17:45:37.551059332Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 17 17:45:37.551056 containerd[1511]: time="2025-03-17T17:45:37.551080015Z" level=info msg="NRI interface is disabled by configuration." Mar 17 17:45:37.551056 containerd[1511]: time="2025-03-17T17:45:37.551098311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 17 17:45:37.551741 containerd[1511]: time="2025-03-17T17:45:37.551423672Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:45:37.551741 containerd[1511]: time="2025-03-17T17:45:37.551500870Z" level=info msg="Connect containerd service" Mar 17 17:45:37.551741 containerd[1511]: time="2025-03-17T17:45:37.551534090Z" level=info msg="using legacy CRI server" Mar 17 17:45:37.551741 containerd[1511]: time="2025-03-17T17:45:37.551544426Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:45:37.551741 containerd[1511]: time="2025-03-17T17:45:37.551673954Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:45:37.554423 containerd[1511]: time="2025-03-17T17:45:37.554371570Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:45:37.554987 
containerd[1511]: time="2025-03-17T17:45:37.554935652Z" level=info msg="Start subscribing containerd event" Mar 17 17:45:37.555113 containerd[1511]: time="2025-03-17T17:45:37.554988217Z" level=info msg="Start recovering state" Mar 17 17:45:37.555113 containerd[1511]: time="2025-03-17T17:45:37.555065107Z" level=info msg="Start event monitor" Mar 17 17:45:37.555113 containerd[1511]: time="2025-03-17T17:45:37.555078652Z" level=info msg="Start snapshots syncer" Mar 17 17:45:37.555113 containerd[1511]: time="2025-03-17T17:45:37.555088629Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:45:37.555113 containerd[1511]: time="2025-03-17T17:45:37.555100168Z" level=info msg="Start streaming server" Mar 17 17:45:37.555455 containerd[1511]: time="2025-03-17T17:45:37.555345471Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:45:37.555529 containerd[1511]: time="2025-03-17T17:45:37.555469917Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:45:37.555683 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:45:37.557283 containerd[1511]: time="2025-03-17T17:45:37.557224159Z" level=info msg="containerd successfully booted in 0.054087s" Mar 17 17:45:37.568737 sshd_keygen[1503]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 17:45:37.604004 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:45:37.615011 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 17:45:37.624524 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 17:45:37.624849 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:45:37.649986 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 17:45:37.672686 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 17:45:37.681085 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 17:45:37.683835 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 17 17:45:37.685234 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 17:45:37.794371 tar[1506]: linux-amd64/README.md Mar 17 17:45:37.812072 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 17 17:45:38.516025 systemd-networkd[1422]: eth0: Gained IPv6LL Mar 17 17:45:38.520739 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 17:45:38.522737 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 17:45:38.533021 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 17 17:45:38.536097 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:45:38.538866 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 17:45:38.563749 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 17 17:45:38.564095 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 17 17:45:38.565902 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:45:38.569216 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 17:45:39.935125 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:45:39.935855 (kubelet)[1594]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:45:39.937941 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:45:39.940025 systemd[1]: Startup finished in 1.018s (kernel) + 7.488s (initrd) + 5.775s (userspace) = 14.283s. Mar 17 17:45:40.668779 kubelet[1594]: E0317 17:45:40.668696 1594 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:45:40.673263 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:45:40.673472 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:45:40.673926 systemd[1]: kubelet.service: Consumed 1.931s CPU time, 253.2M memory peak. Mar 17 17:45:42.463657 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:45:42.475042 systemd[1]: Started sshd@0-10.0.0.15:22-10.0.0.1:59014.service - OpenSSH per-connection server daemon (10.0.0.1:59014). Mar 17 17:45:42.524072 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 59014 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM Mar 17 17:45:42.526780 sshd-session[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:45:42.542774 systemd-logind[1491]: New session 1 of user core. Mar 17 17:45:42.544539 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:45:42.558433 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:45:42.649107 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:45:42.662986 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:45:42.666162 (systemd)[1612]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:45:42.669407 systemd-logind[1491]: New session c1 of user core. Mar 17 17:45:43.015744 systemd[1612]: Queued start job for default target default.target. Mar 17 17:45:43.030530 systemd[1612]: Created slice app.slice - User Application Slice. Mar 17 17:45:43.030564 systemd[1612]: Reached target paths.target - Paths. Mar 17 17:45:43.030613 systemd[1612]: Reached target timers.target - Timers. Mar 17 17:45:43.032563 systemd[1612]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:45:43.048511 systemd[1612]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:45:43.048687 systemd[1612]: Reached target sockets.target - Sockets. Mar 17 17:45:43.048765 systemd[1612]: Reached target basic.target - Basic System. Mar 17 17:45:43.048813 systemd[1612]: Reached target default.target - Main User Target. Mar 17 17:45:43.048850 systemd[1612]: Startup finished in 368ms. Mar 17 17:45:43.049355 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:45:43.051425 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:45:43.124032 systemd[1]: Started sshd@1-10.0.0.15:22-10.0.0.1:59022.service - OpenSSH per-connection server daemon (10.0.0.1:59022). 
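The kubelet exit above is the first round of a crash loop that recurs through the rest of this boot: the service starts, cannot open /var/lib/kubelet/config.yaml (that file is only written once the node is bootstrapped, for example by kubeadm init or kubeadm join), exits with status 1, and systemd records the failure. A minimal pre-flight check in the spirit of the error message (a sketch; the path comes from the log, everything else is illustrative):

    from pathlib import Path

    # The config file the kubelet tried to load, per the run.go error above.
    cfg = Path("/var/lib/kubelet/config.yaml")
    if not cfg.is_file():
        # Mirrors the failure mode in the log: kubelet exits before serving anything.
        raise SystemExit(f"kubelet config missing: {cfg}; bootstrap the node first")

As an aside, the reported startup total of 14.283s is 2 ms more than the printed sum 1.018 + 7.488 + 5.775 = 14.281 s; systemd rounds each component separately, so millisecond-scale drift like this is normal.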
Mar 17 17:45:43.161506 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 59022 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM Mar 17 17:45:43.163652 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:45:43.168578 systemd-logind[1491]: New session 2 of user core. Mar 17 17:45:43.183950 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 17:45:43.241776 sshd[1625]: Connection closed by 10.0.0.1 port 59022 Mar 17 17:45:43.242226 sshd-session[1623]: pam_unix(sshd:session): session closed for user core Mar 17 17:45:43.256239 systemd[1]: sshd@1-10.0.0.15:22-10.0.0.1:59022.service: Deactivated successfully. Mar 17 17:45:43.258332 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:45:43.260183 systemd-logind[1491]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:45:43.270170 systemd[1]: Started sshd@2-10.0.0.15:22-10.0.0.1:59032.service - OpenSSH per-connection server daemon (10.0.0.1:59032). Mar 17 17:45:43.271792 systemd-logind[1491]: Removed session 2. Mar 17 17:45:43.313596 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 59032 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM Mar 17 17:45:43.315602 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:45:43.321210 systemd-logind[1491]: New session 3 of user core. Mar 17 17:45:43.331903 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:45:43.386828 sshd[1633]: Connection closed by 10.0.0.1 port 59032 Mar 17 17:45:43.387299 sshd-session[1630]: pam_unix(sshd:session): session closed for user core Mar 17 17:45:43.406259 systemd[1]: sshd@2-10.0.0.15:22-10.0.0.1:59032.service: Deactivated successfully. Mar 17 17:45:43.408838 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:45:43.410938 systemd-logind[1491]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:45:43.412525 systemd[1]: Started sshd@3-10.0.0.15:22-10.0.0.1:59046.service - OpenSSH per-connection server daemon (10.0.0.1:59046). Mar 17 17:45:43.413931 systemd-logind[1491]: Removed session 3. Mar 17 17:45:43.455858 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 59046 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM Mar 17 17:45:43.458021 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:45:43.465377 systemd-logind[1491]: New session 4 of user core. Mar 17 17:45:43.475282 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:45:43.534462 sshd[1641]: Connection closed by 10.0.0.1 port 59046 Mar 17 17:45:43.534870 sshd-session[1638]: pam_unix(sshd:session): session closed for user core Mar 17 17:45:43.548803 systemd[1]: sshd@3-10.0.0.15:22-10.0.0.1:59046.service: Deactivated successfully. Mar 17 17:45:43.551332 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:45:43.552972 systemd-logind[1491]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:45:43.562575 systemd[1]: Started sshd@4-10.0.0.15:22-10.0.0.1:59062.service - OpenSSH per-connection server daemon (10.0.0.1:59062). Mar 17 17:45:43.564406 systemd-logind[1491]: Removed session 4. 
Mar 17 17:45:43.601772 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 59062 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM Mar 17 17:45:43.604111 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:45:43.610650 systemd-logind[1491]: New session 5 of user core. Mar 17 17:45:43.627058 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 17 17:45:43.694065 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:45:43.694519 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:45:43.720733 sudo[1650]: pam_unix(sudo:session): session closed for user root Mar 17 17:45:43.722788 sshd[1649]: Connection closed by 10.0.0.1 port 59062 Mar 17 17:45:43.723410 sshd-session[1646]: pam_unix(sshd:session): session closed for user core Mar 17 17:45:43.737402 systemd[1]: sshd@4-10.0.0.15:22-10.0.0.1:59062.service: Deactivated successfully. Mar 17 17:45:43.739505 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:45:43.740568 systemd-logind[1491]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:45:43.749014 systemd[1]: Started sshd@5-10.0.0.15:22-10.0.0.1:59070.service - OpenSSH per-connection server daemon (10.0.0.1:59070). Mar 17 17:45:43.750327 systemd-logind[1491]: Removed session 5. Mar 17 17:45:43.790195 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 59070 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM Mar 17 17:45:43.792290 sshd-session[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:45:43.797740 systemd-logind[1491]: New session 6 of user core. Mar 17 17:45:43.819975 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 17 17:45:43.877916 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:45:43.878300 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:45:43.883357 sudo[1660]: pam_unix(sudo:session): session closed for user root Mar 17 17:45:43.892559 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:45:43.893036 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:45:43.913159 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:45:43.949962 augenrules[1682]: No rules Mar 17 17:45:43.951873 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:45:43.952193 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:45:43.953595 sudo[1659]: pam_unix(sudo:session): session closed for user root Mar 17 17:45:43.955557 sshd[1658]: Connection closed by 10.0.0.1 port 59070 Mar 17 17:45:43.956071 sshd-session[1655]: pam_unix(sshd:session): session closed for user core Mar 17 17:45:43.965034 systemd[1]: sshd@5-10.0.0.15:22-10.0.0.1:59070.service: Deactivated successfully. Mar 17 17:45:43.967255 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:45:43.969137 systemd-logind[1491]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:45:43.978016 systemd[1]: Started sshd@6-10.0.0.15:22-10.0.0.1:59076.service - OpenSSH per-connection server daemon (10.0.0.1:59076). Mar 17 17:45:43.979116 systemd-logind[1491]: Removed session 6. 
Mar 17 17:45:44.018909 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 59076 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM Mar 17 17:45:44.021308 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:45:44.026611 systemd-logind[1491]: New session 7 of user core. Mar 17 17:45:44.035856 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:45:44.091506 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:45:44.091893 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:45:44.779167 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 17 17:45:44.779174 (dockerd)[1714]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 17:45:45.044374 dockerd[1714]: time="2025-03-17T17:45:45.043762591Z" level=info msg="Starting up" Mar 17 17:45:46.071473 dockerd[1714]: time="2025-03-17T17:45:46.071386274Z" level=info msg="Loading containers: start." Mar 17 17:45:46.554734 kernel: Initializing XFRM netlink socket Mar 17 17:45:46.657886 systemd-networkd[1422]: docker0: Link UP Mar 17 17:45:46.927453 dockerd[1714]: time="2025-03-17T17:45:46.927313525Z" level=info msg="Loading containers: done." Mar 17 17:45:46.941365 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4082140751-merged.mount: Deactivated successfully. Mar 17 17:45:47.268490 dockerd[1714]: time="2025-03-17T17:45:47.268335245Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 17:45:47.268490 dockerd[1714]: time="2025-03-17T17:45:47.268468124Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Mar 17 17:45:47.269015 dockerd[1714]: time="2025-03-17T17:45:47.268644990Z" level=info msg="Daemon has completed initialization" Mar 17 17:45:47.806931 dockerd[1714]: time="2025-03-17T17:45:47.806839368Z" level=info msg="API listen on /run/docker.sock" Mar 17 17:45:47.807083 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 17 17:45:48.397950 containerd[1511]: time="2025-03-17T17:45:48.397887517Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\"" Mar 17 17:45:49.224309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2883893046.mount: Deactivated successfully. 
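The dockerd overlay2 warning above names a concrete kernel option, CONFIG_OVERLAY_FS_REDIRECT_DIR. Roughly, the copy-up optimizations that redirect_dir permits mean the upper layer is no longer safe to diff directly, so dockerd falls back to its slower generic diff when building images. One way to confirm the option on a running machine (a sketch that assumes the kernel was built with IKCONFIG_PROC so that /proc/config.gz exists; not every build ships it):

    import gzip

    # Look up the option dockerd warned about in the running kernel's config.
    OPTION = b"CONFIG_OVERLAY_FS_REDIRECT_DIR="
    with gzip.open("/proc/config.gz") as f:
        for line in f:
            if line.startswith(OPTION):
                print(line.decode().strip())  # expected here: CONFIG_OVERLAY_FS_REDIRECT_DIR=y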
Mar 17 17:45:50.512884 containerd[1511]: time="2025-03-17T17:45:50.512807815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:45:50.513720 containerd[1511]: time="2025-03-17T17:45:50.513619421Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.3: active requests=0, bytes read=28682430" Mar 17 17:45:50.515309 containerd[1511]: time="2025-03-17T17:45:50.515248661Z" level=info msg="ImageCreate event name:\"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:45:50.518177 containerd[1511]: time="2025-03-17T17:45:50.518148118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:45:50.519275 containerd[1511]: time="2025-03-17T17:45:50.519241924Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.3\" with image id \"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\", size \"28679230\" in 2.12130148s" Mar 17 17:45:50.519337 containerd[1511]: time="2025-03-17T17:45:50.519280526Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\" returns image reference \"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\"" Mar 17 17:45:50.519984 containerd[1511]: time="2025-03-17T17:45:50.519950960Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\"" Mar 17 17:45:50.924225 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 17:45:50.933010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:45:51.145218 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:45:51.151546 (kubelet)[1975]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:45:51.574488 kubelet[1975]: E0317 17:45:51.574401 1975 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:45:51.584150 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:45:51.584522 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:45:51.585111 systemd[1]: kubelet.service: Consumed 293ms CPU time, 106.8M memory peak. 
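containerd's reported pull duration can be cross-checked against its own timestamps: the PullImage request for kube-apiserver was logged at 2025-03-17T17:45:48.397887517Z and the Pulled line at 2025-03-17T17:45:50.519241924Z. A worked check (timestamps copied from the log, truncated to microseconds for datetime):

    from datetime import datetime

    start = datetime.fromisoformat("2025-03-17T17:45:48.397887")
    end = datetime.fromisoformat("2025-03-17T17:45:50.519241")
    # Prints 2.121354, within about 50 microseconds of the logged "2.12130148s".
    print((end - start).total_seconds())

The later pulls (controller-manager, scheduler, proxy, coredns, pause, etcd) pass the same sanity check: each reported duration matches the gap between its PullImage and Pulled entries.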
Mar 17 17:45:53.203140 containerd[1511]: time="2025-03-17T17:45:53.203044942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:45:53.206882 containerd[1511]: time="2025-03-17T17:45:53.206788379Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.3: active requests=0, bytes read=24779684" Mar 17 17:45:53.217557 containerd[1511]: time="2025-03-17T17:45:53.217478812Z" level=info msg="ImageCreate event name:\"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:45:53.226705 containerd[1511]: time="2025-03-17T17:45:53.226642370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:45:53.228243 containerd[1511]: time="2025-03-17T17:45:53.228101337Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.3\" with image id \"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\", size \"26267292\" in 2.708118s" Mar 17 17:45:53.228310 containerd[1511]: time="2025-03-17T17:45:53.228247054Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\" returns image reference \"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\"" Mar 17 17:45:53.228875 containerd[1511]: time="2025-03-17T17:45:53.228776579Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\"" Mar 17 17:45:55.516024 containerd[1511]: time="2025-03-17T17:45:55.515930182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:45:55.517296 containerd[1511]: time="2025-03-17T17:45:55.517229233Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.3: active requests=0, bytes read=19171419" Mar 17 17:45:55.518736 containerd[1511]: time="2025-03-17T17:45:55.518671263Z" level=info msg="ImageCreate event name:\"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:45:55.530158 containerd[1511]: time="2025-03-17T17:45:55.529816992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:45:55.531589 containerd[1511]: time="2025-03-17T17:45:55.531485116Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.3\" with image id \"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\", size \"20659045\" in 2.302601257s" Mar 17 17:45:55.531589 containerd[1511]: time="2025-03-17T17:45:55.531572499Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\" returns image reference \"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\"" Mar 17 17:45:55.532248 
containerd[1511]: time="2025-03-17T17:45:55.532205600Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\"" Mar 17 17:45:56.788357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1180427976.mount: Deactivated successfully. Mar 17 17:45:57.992286 containerd[1511]: time="2025-03-17T17:45:57.992198695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:45:57.994972 containerd[1511]: time="2025-03-17T17:45:57.994933370Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.3: active requests=0, bytes read=30918185" Mar 17 17:45:57.997031 containerd[1511]: time="2025-03-17T17:45:57.996998131Z" level=info msg="ImageCreate event name:\"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:45:58.004971 containerd[1511]: time="2025-03-17T17:45:58.004920094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:45:58.005569 containerd[1511]: time="2025-03-17T17:45:58.005526162Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.3\" with image id \"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\", size \"30917204\" in 2.47327823s" Mar 17 17:45:58.005639 containerd[1511]: time="2025-03-17T17:45:58.005570205Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\"" Mar 17 17:45:58.006112 containerd[1511]: time="2025-03-17T17:45:58.006083340Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Mar 17 17:45:58.625262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount987647192.mount: Deactivated successfully. 
Mar 17 17:46:00.154055 containerd[1511]: time="2025-03-17T17:46:00.153958574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:00.175680 containerd[1511]: time="2025-03-17T17:46:00.175585408Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Mar 17 17:46:00.490755 containerd[1511]: time="2025-03-17T17:46:00.490548769Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:00.621606 containerd[1511]: time="2025-03-17T17:46:00.621531175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:00.622585 containerd[1511]: time="2025-03-17T17:46:00.622539885Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.616392135s" Mar 17 17:46:00.622585 containerd[1511]: time="2025-03-17T17:46:00.622581614Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Mar 17 17:46:00.623137 containerd[1511]: time="2025-03-17T17:46:00.623090403Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 17 17:46:01.827228 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 17:46:01.843981 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:46:02.021050 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:46:02.025266 (kubelet)[2060]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:46:02.710567 kubelet[2060]: E0317 17:46:02.710435 2060 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:46:02.715520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:46:02.715771 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:46:02.716227 systemd[1]: kubelet.service: Consumed 421ms CPU time, 106.7M memory peak. Mar 17 17:46:05.469724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1516583351.mount: Deactivated successfully. 
Mar 17 17:46:05.759890 containerd[1511]: time="2025-03-17T17:46:05.759730416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:05.791730 containerd[1511]: time="2025-03-17T17:46:05.791594282Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 17 17:46:05.825309 containerd[1511]: time="2025-03-17T17:46:05.825242797Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:05.846235 containerd[1511]: time="2025-03-17T17:46:05.846181747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:05.847203 containerd[1511]: time="2025-03-17T17:46:05.847167245Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 5.224048702s" Mar 17 17:46:05.847264 containerd[1511]: time="2025-03-17T17:46:05.847201281Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 17 17:46:05.847724 containerd[1511]: time="2025-03-17T17:46:05.847651158Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Mar 17 17:46:06.604875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount717840416.mount: Deactivated successfully. Mar 17 17:46:10.607625 containerd[1511]: time="2025-03-17T17:46:10.605870661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:10.610358 containerd[1511]: time="2025-03-17T17:46:10.609469381Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Mar 17 17:46:10.616972 containerd[1511]: time="2025-03-17T17:46:10.614094670Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:10.630252 containerd[1511]: time="2025-03-17T17:46:10.629581111Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.781883609s" Mar 17 17:46:10.630252 containerd[1511]: time="2025-03-17T17:46:10.629648076Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Mar 17 17:46:10.632045 containerd[1511]: time="2025-03-17T17:46:10.630593236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:12.827400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
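The kubelet restart cadence is visible in the "Scheduled restart job" lines: counters 1, 2 and 3 are logged at 17:45:50.924, 17:46:01.827 and 17:46:12.827. The spacing, computed below, is consistent with a restart delay of roughly 10 s plus the second or so the failing kubelet takes to start and exit; the unit file itself is not in the log, so the exact RestartSec value is an assumption:

    # Gaps between the three "Scheduled restart job" timestamps in the log.
    times = [50.924225, 61.827228, 72.827400]  # seconds past 17:45:00
    gaps = [round(b - a, 3) for a, b in zip(times, times[1:])]
    print(gaps)  # [10.903, 11.0]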
Mar 17 17:46:12.844137 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:46:13.166010 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:46:13.175271 (kubelet)[2158]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:46:13.315524 kubelet[2158]: E0317 17:46:13.315449 2158 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:46:13.326190 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:46:13.326455 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:46:13.326968 systemd[1]: kubelet.service: Consumed 316ms CPU time, 106.5M memory peak. Mar 17 17:46:14.579394 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:46:14.584960 systemd[1]: kubelet.service: Consumed 316ms CPU time, 106.5M memory peak. Mar 17 17:46:14.625012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:46:14.748707 systemd[1]: Reload requested from client PID 2176 ('systemctl') (unit session-7.scope)... Mar 17 17:46:14.749875 systemd[1]: Reloading... Mar 17 17:46:15.056944 zram_generator::config[2229]: No configuration found. Mar 17 17:46:16.773688 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:46:16.943656 systemd[1]: Reloading finished in 2192 ms. Mar 17 17:46:17.130527 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 17 17:46:17.130714 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 17 17:46:17.131269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:46:17.131421 systemd[1]: kubelet.service: Consumed 201ms CPU time, 90.8M memory peak. Mar 17 17:46:17.150301 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:46:17.374774 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:46:17.379319 (kubelet)[2266]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:46:17.519968 kubelet[2266]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:46:17.519968 kubelet[2266]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 17 17:46:17.519968 kubelet[2266]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 17:46:17.519968 kubelet[2266]: I0317 17:46:17.519922 2266 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:46:17.984073 kubelet[2266]: I0317 17:46:17.984015 2266 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 17 17:46:17.984073 kubelet[2266]: I0317 17:46:17.984053 2266 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:46:17.984383 kubelet[2266]: I0317 17:46:17.984360 2266 server.go:954] "Client rotation is on, will bootstrap in background" Mar 17 17:46:18.064369 kubelet[2266]: E0317 17:46:18.064281 2266 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:46:18.075290 kubelet[2266]: I0317 17:46:18.075212 2266 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:46:18.087752 kubelet[2266]: E0317 17:46:18.087667 2266 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:46:18.087752 kubelet[2266]: I0317 17:46:18.087737 2266 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 17:46:18.094496 kubelet[2266]: I0317 17:46:18.094452 2266 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:46:18.097376 kubelet[2266]: I0317 17:46:18.097291 2266 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:46:18.097583 kubelet[2266]: I0317 17:46:18.097346 2266 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 17:46:18.097583 kubelet[2266]: I0317 17:46:18.097574 2266 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:46:18.097583 kubelet[2266]: I0317 17:46:18.097584 2266 container_manager_linux.go:304] "Creating device plugin manager" Mar 17 17:46:18.097814 kubelet[2266]: I0317 17:46:18.097763 2266 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:46:18.101904 kubelet[2266]: I0317 17:46:18.101850 2266 kubelet.go:446] "Attempting to sync node with API server" Mar 17 17:46:18.101904 kubelet[2266]: I0317 17:46:18.101880 2266 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:46:18.101904 kubelet[2266]: I0317 17:46:18.101908 2266 kubelet.go:352] "Adding apiserver pod source" Mar 17 17:46:18.102079 kubelet[2266]: I0317 17:46:18.101927 2266 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:46:18.113541 kubelet[2266]: I0317 17:46:18.113469 2266 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:46:18.114072 kubelet[2266]: I0317 17:46:18.114028 2266 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:46:18.115278 kubelet[2266]: W0317 17:46:18.115221 2266 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
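The nodeConfig blob in the container_manager_linux line is plain JSON, so its interesting parts can be pulled out mechanically. For example, the hard eviction thresholds (values copied verbatim from the log entry above, trimmed to the relevant keys):

    import json

    # HardEvictionThresholds as they appear in the nodeConfig JSON in the log.
    thresholds = json.loads("""[
      {"Signal": "nodefs.inodesFree", "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.05}},
      {"Signal": "imagefs.available", "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.15}},
      {"Signal": "imagefs.inodesFree", "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.05}},
      {"Signal": "memory.available", "Operator": "LessThan", "Value": {"Quantity": "100Mi", "Percentage": 0}},
      {"Signal": "nodefs.available", "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.1}}
    ]""")
    for t in thresholds:
        value = t["Value"]
        limit = value["Quantity"] or f'{value["Percentage"]:.0%}'
        print(t["Signal"], t["Operator"], limit)
    # memory.available LessThan 100Mi, nodefs.available LessThan 10%, etc.

These match the kubelet's stock eviction defaults.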
Mar 17 17:46:18.116306 kubelet[2266]: W0317 17:46:18.116236 2266 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Mar 17 17:46:18.116398 kubelet[2266]: E0317 17:46:18.116348 2266 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:46:18.116398 kubelet[2266]: W0317 17:46:18.116237 2266 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Mar 17 17:46:18.116398 kubelet[2266]: E0317 17:46:18.116385 2266 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:46:18.120114 kubelet[2266]: I0317 17:46:18.120027 2266 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 17 17:46:18.120114 kubelet[2266]: I0317 17:46:18.120128 2266 server.go:1287] "Started kubelet" Mar 17 17:46:18.122749 kubelet[2266]: I0317 17:46:18.120502 2266 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:46:18.122749 kubelet[2266]: I0317 17:46:18.121051 2266 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:46:18.122749 kubelet[2266]: I0317 17:46:18.121116 2266 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:46:18.122749 kubelet[2266]: I0317 17:46:18.122023 2266 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:46:18.122749 kubelet[2266]: I0317 17:46:18.122140 2266 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 17 17:46:18.122749 kubelet[2266]: I0317 17:46:18.122251 2266 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:46:18.122749 kubelet[2266]: I0317 17:46:18.122525 2266 server.go:490] "Adding debug handlers to kubelet server" Mar 17 17:46:18.124890 kubelet[2266]: W0317 17:46:18.124100 2266 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Mar 17 17:46:18.124890 kubelet[2266]: E0317 17:46:18.124150 2266 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:46:18.124890 kubelet[2266]: E0317 17:46:18.124505 2266 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" 
not found" Mar 17 17:46:18.124890 kubelet[2266]: E0317 17:46:18.124574 2266 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="200ms" Mar 17 17:46:18.124890 kubelet[2266]: I0317 17:46:18.124663 2266 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:46:18.125145 kubelet[2266]: I0317 17:46:18.124687 2266 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:46:18.125190 kubelet[2266]: E0317 17:46:18.125170 2266 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:46:18.125551 kubelet[2266]: I0317 17:46:18.125309 2266 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:46:18.125551 kubelet[2266]: I0317 17:46:18.125394 2266 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:46:18.126815 kubelet[2266]: I0317 17:46:18.126768 2266 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:46:18.134717 kubelet[2266]: E0317 17:46:18.132241 2266 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182da835639aec96 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 17:46:18.120080534 +0000 UTC m=+0.733068064,LastTimestamp:2025-03-17 17:46:18.120080534 +0000 UTC m=+0.733068064,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 17 17:46:18.147144 kubelet[2266]: I0317 17:46:18.147060 2266 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:46:18.150339 kubelet[2266]: I0317 17:46:18.150290 2266 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:46:18.150408 kubelet[2266]: I0317 17:46:18.150352 2266 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 17 17:46:18.150445 kubelet[2266]: I0317 17:46:18.150403 2266 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 17 17:46:18.150445 kubelet[2266]: I0317 17:46:18.150418 2266 kubelet.go:2388] "Starting kubelet main sync loop" Mar 17 17:46:18.150557 kubelet[2266]: E0317 17:46:18.150509 2266 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:46:18.153341 kubelet[2266]: W0317 17:46:18.153202 2266 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Mar 17 17:46:18.153341 kubelet[2266]: E0317 17:46:18.153298 2266 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:46:18.158625 kubelet[2266]: I0317 17:46:18.158595 2266 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 17 17:46:18.158625 kubelet[2266]: I0317 17:46:18.158617 2266 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 17 17:46:18.158797 kubelet[2266]: I0317 17:46:18.158642 2266 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:46:18.225503 kubelet[2266]: E0317 17:46:18.225443 2266 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:46:18.251019 kubelet[2266]: E0317 17:46:18.250824 2266 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 17:46:18.325748 kubelet[2266]: E0317 17:46:18.325674 2266 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:46:18.325925 kubelet[2266]: E0317 17:46:18.325780 2266 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="400ms" Mar 17 17:46:18.426485 kubelet[2266]: E0317 17:46:18.426418 2266 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:46:18.451782 kubelet[2266]: E0317 17:46:18.451714 2266 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 17:46:18.527485 kubelet[2266]: E0317 17:46:18.527274 2266 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:46:18.628296 kubelet[2266]: E0317 17:46:18.628189 2266 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:46:18.727412 kubelet[2266]: E0317 17:46:18.727344 2266 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="800ms" Mar 17 17:46:18.728548 kubelet[2266]: E0317 17:46:18.728499 2266 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:46:18.798190 kubelet[2266]: I0317 17:46:18.798008 2266 policy_none.go:49] "None policy: Start" Mar 17 
17:46:18.798190 kubelet[2266]: I0317 17:46:18.798060 2266 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 17 17:46:18.798190 kubelet[2266]: I0317 17:46:18.798077 2266 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:46:18.827950 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 17 17:46:18.842543 kubelet[2266]: E0317 17:46:18.828925 2266 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:46:18.848120 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 17 17:46:18.852132 kubelet[2266]: E0317 17:46:18.852084 2266 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 17:46:18.852538 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 17 17:46:18.865361 kubelet[2266]: I0317 17:46:18.865307 2266 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:46:18.865668 kubelet[2266]: I0317 17:46:18.865630 2266 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:46:18.865748 kubelet[2266]: I0317 17:46:18.865661 2266 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:46:18.865978 kubelet[2266]: I0317 17:46:18.865945 2266 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:46:18.866954 kubelet[2266]: E0317 17:46:18.866925 2266 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 17 17:46:18.867060 kubelet[2266]: E0317 17:46:18.866985 2266 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 17 17:46:18.967918 kubelet[2266]: I0317 17:46:18.967866 2266 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 17:46:18.968313 kubelet[2266]: E0317 17:46:18.968284 2266 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Mar 17 17:46:18.988176 kubelet[2266]: W0317 17:46:18.988070 2266 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Mar 17 17:46:18.988176 kubelet[2266]: E0317 17:46:18.988171 2266 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:46:19.170234 kubelet[2266]: I0317 17:46:19.170176 2266 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 17:46:19.170679 kubelet[2266]: E0317 17:46:19.170628 2266 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Mar 17 17:46:19.422727 kubelet[2266]: W0317 17:46:19.422486 2266 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Mar 17 17:46:19.422727 kubelet[2266]: E0317 17:46:19.422575 2266 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:46:19.486819 kubelet[2266]: W0317 17:46:19.486753 2266 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Mar 17 17:46:19.486913 kubelet[2266]: E0317 17:46:19.486816 2266 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:46:19.527946 kubelet[2266]: E0317 17:46:19.527899 2266 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="1.6s" Mar 17 17:46:19.572673 kubelet[2266]: I0317 17:46:19.572616 2266 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 17:46:19.573065 kubelet[2266]: E0317 17:46:19.572997 2266 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Mar 17 17:46:19.658201 kubelet[2266]: W0317 17:46:19.658171 2266 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Mar 17 17:46:19.658329 kubelet[2266]: E0317 17:46:19.658206 2266 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:46:19.661764 systemd[1]: Created slice kubepods-burstable-pod88878094edf6e0e5e19e2253a422994b.slice - libcontainer container kubepods-burstable-pod88878094edf6e0e5e19e2253a422994b.slice. Mar 17 17:46:19.680380 kubelet[2266]: E0317 17:46:19.680304 2266 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 17:46:19.684246 systemd[1]: Created slice kubepods-burstable-podcbbb394ff48414687df77e1bc213eeb5.slice - libcontainer container kubepods-burstable-podcbbb394ff48414687df77e1bc213eeb5.slice. 
Mar 17 17:46:19.694985 kubelet[2266]: E0317 17:46:19.694950 2266 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 17:46:19.696969 systemd[1]: Created slice kubepods-burstable-pod3700e556aa2777679a324159272023f1.slice - libcontainer container kubepods-burstable-pod3700e556aa2777679a324159272023f1.slice. Mar 17 17:46:19.698918 kubelet[2266]: E0317 17:46:19.698885 2266 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 17:46:19.734362 kubelet[2266]: I0317 17:46:19.734320 2266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/88878094edf6e0e5e19e2253a422994b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"88878094edf6e0e5e19e2253a422994b\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:46:19.734362 kubelet[2266]: I0317 17:46:19.734358 2266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/88878094edf6e0e5e19e2253a422994b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"88878094edf6e0e5e19e2253a422994b\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:46:19.734449 kubelet[2266]: I0317 17:46:19.734382 2266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:46:19.734449 kubelet[2266]: I0317 17:46:19.734396 2266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3700e556aa2777679a324159272023f1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3700e556aa2777679a324159272023f1\") " pod="kube-system/kube-scheduler-localhost" Mar 17 17:46:19.734449 kubelet[2266]: I0317 17:46:19.734412 2266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/88878094edf6e0e5e19e2253a422994b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"88878094edf6e0e5e19e2253a422994b\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:46:19.734449 kubelet[2266]: I0317 17:46:19.734424 2266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:46:19.734449 kubelet[2266]: I0317 17:46:19.734439 2266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:46:19.734588 kubelet[2266]: I0317 17:46:19.734453 2266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:46:19.734588 kubelet[2266]: I0317 17:46:19.734469 2266 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:46:19.981293 kubelet[2266]: E0317 17:46:19.981150 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:19.982037 containerd[1511]: time="2025-03-17T17:46:19.981985914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:88878094edf6e0e5e19e2253a422994b,Namespace:kube-system,Attempt:0,}" Mar 17 17:46:19.996286 kubelet[2266]: E0317 17:46:19.996254 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:19.996652 containerd[1511]: time="2025-03-17T17:46:19.996626229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:cbbb394ff48414687df77e1bc213eeb5,Namespace:kube-system,Attempt:0,}" Mar 17 17:46:19.999936 kubelet[2266]: E0317 17:46:19.999916 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:20.000227 containerd[1511]: time="2025-03-17T17:46:20.000188358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3700e556aa2777679a324159272023f1,Namespace:kube-system,Attempt:0,}" Mar 17 17:46:20.257709 kubelet[2266]: E0317 17:46:20.255966 2266 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:46:20.375298 kubelet[2266]: I0317 17:46:20.375264 2266 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 17:46:20.376361 kubelet[2266]: E0317 17:46:20.375615 2266 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Mar 17 17:46:20.843013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4234488804.mount: Deactivated successfully. 
Mar 17 17:46:20.943339 containerd[1511]: time="2025-03-17T17:46:20.943262172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:46:20.947587 containerd[1511]: time="2025-03-17T17:46:20.947532157Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 17 17:46:20.955161 containerd[1511]: time="2025-03-17T17:46:20.955112169Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:46:20.957747 containerd[1511]: time="2025-03-17T17:46:20.957663413Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:46:20.961347 containerd[1511]: time="2025-03-17T17:46:20.961292917Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:46:20.962739 containerd[1511]: time="2025-03-17T17:46:20.962686830Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:46:20.963955 containerd[1511]: time="2025-03-17T17:46:20.963882842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:46:20.964732 containerd[1511]: time="2025-03-17T17:46:20.964674415Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 982.593036ms" Mar 17 17:46:20.966142 containerd[1511]: time="2025-03-17T17:46:20.966068699Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:46:20.967863 containerd[1511]: time="2025-03-17T17:46:20.967806292Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 971.127941ms" Mar 17 17:46:20.970305 containerd[1511]: time="2025-03-17T17:46:20.970270048Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 970.023066ms" Mar 17 17:46:21.172074 kubelet[2266]: E0317 17:46:21.129009 2266 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="3.2s" Mar 17 
17:46:21.188808 containerd[1511]: time="2025-03-17T17:46:21.186922302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:46:21.188808 containerd[1511]: time="2025-03-17T17:46:21.187021502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:46:21.188808 containerd[1511]: time="2025-03-17T17:46:21.187039691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:46:21.188808 containerd[1511]: time="2025-03-17T17:46:21.187158953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:46:21.189318 containerd[1511]: time="2025-03-17T17:46:21.188282938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:46:21.189318 containerd[1511]: time="2025-03-17T17:46:21.188344357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:46:21.189318 containerd[1511]: time="2025-03-17T17:46:21.188357595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:46:21.189318 containerd[1511]: time="2025-03-17T17:46:21.188447075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:46:21.189896 containerd[1511]: time="2025-03-17T17:46:21.189810846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:46:21.190179 containerd[1511]: time="2025-03-17T17:46:21.189995136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:46:21.190179 containerd[1511]: time="2025-03-17T17:46:21.190026031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:46:21.190179 containerd[1511]: time="2025-03-17T17:46:21.190120441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:46:21.235851 kubelet[2266]: W0317 17:46:21.235761 2266 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Mar 17 17:46:21.235851 kubelet[2266]: E0317 17:46:21.235827 2266 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:46:21.264042 systemd[1]: Started cri-containerd-2b97c5e2ea60cb355cbe569f89fa71151618affe47d3413203e2ff5dd0d4ced8.scope - libcontainer container 2b97c5e2ea60cb355cbe569f89fa71151618affe47d3413203e2ff5dd0d4ced8. 
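
The containerd and systemd records around this point trace the standard CRI launch flow for each static pod: RunPodSandbox returns a sandbox id (backed by a cri-containerd-*.scope unit), CreateContainer places a container inside that sandbox, and StartContainer runs it. A schematic, runnable sketch of that call order — the FakeCRI class is a hypothetical stand-in; the real kubelet speaks CRI over gRPC to containerd:

    import uuid

    class FakeCRI:
        """Stand-in runtime that mimics the CRI call sequence in the log."""
        def run_pod_sandbox(self, pod_name):
            print(f'RunPodSandbox for "{pod_name}" returns sandbox id')
            return uuid.uuid4().hex
        def create_container(self, sandbox_id, name):
            print(f'CreateContainer within sandbox "{sandbox_id[:12]}..." for "{name}"')
            return uuid.uuid4().hex
        def start_container(self, container_id):
            print(f'StartContainer for "{container_id[:12]}..." returns successfully')

    cri = FakeCRI()
    sandbox = cri.run_pod_sandbox("kube-scheduler-localhost")
    ctr = cri.create_container(sandbox, "kube-scheduler")
    cri.start_container(ctr)
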
Mar 17 17:46:21.269074 systemd[1]: Started cri-containerd-8e4fae22d2bb22613bf84307f209ceeb4dad4606fecaf3d5f36aea961d7aad4e.scope - libcontainer container 8e4fae22d2bb22613bf84307f209ceeb4dad4606fecaf3d5f36aea961d7aad4e. Mar 17 17:46:21.286937 systemd[1]: Started cri-containerd-306d1853db511bba4ae246b531068b1d2a760385e3980eb0d655375038da839e.scope - libcontainer container 306d1853db511bba4ae246b531068b1d2a760385e3980eb0d655375038da839e. Mar 17 17:46:21.392296 containerd[1511]: time="2025-03-17T17:46:21.392239192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3700e556aa2777679a324159272023f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e4fae22d2bb22613bf84307f209ceeb4dad4606fecaf3d5f36aea961d7aad4e\"" Mar 17 17:46:21.394393 kubelet[2266]: E0317 17:46:21.394361 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:21.398016 containerd[1511]: time="2025-03-17T17:46:21.397979060Z" level=info msg="CreateContainer within sandbox \"8e4fae22d2bb22613bf84307f209ceeb4dad4606fecaf3d5f36aea961d7aad4e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 17:46:21.405080 containerd[1511]: time="2025-03-17T17:46:21.405022484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:cbbb394ff48414687df77e1bc213eeb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"306d1853db511bba4ae246b531068b1d2a760385e3980eb0d655375038da839e\"" Mar 17 17:46:21.406590 kubelet[2266]: E0317 17:46:21.406561 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:21.407811 containerd[1511]: time="2025-03-17T17:46:21.407708870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:88878094edf6e0e5e19e2253a422994b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b97c5e2ea60cb355cbe569f89fa71151618affe47d3413203e2ff5dd0d4ced8\"" Mar 17 17:46:21.408551 kubelet[2266]: E0317 17:46:21.408514 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:21.409205 containerd[1511]: time="2025-03-17T17:46:21.409169597Z" level=info msg="CreateContainer within sandbox \"306d1853db511bba4ae246b531068b1d2a760385e3980eb0d655375038da839e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 17:46:21.412310 containerd[1511]: time="2025-03-17T17:46:21.411812953Z" level=info msg="CreateContainer within sandbox \"2b97c5e2ea60cb355cbe569f89fa71151618affe47d3413203e2ff5dd0d4ced8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 17:46:21.423235 containerd[1511]: time="2025-03-17T17:46:21.423065270Z" level=info msg="CreateContainer within sandbox \"8e4fae22d2bb22613bf84307f209ceeb4dad4606fecaf3d5f36aea961d7aad4e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6f7a77104ee31aaa62ee7fe236b551acff55d3d822516db6ac557dcb7aee8cb8\"" Mar 17 17:46:21.424156 containerd[1511]: time="2025-03-17T17:46:21.424075485Z" level=info msg="StartContainer for \"6f7a77104ee31aaa62ee7fe236b551acff55d3d822516db6ac557dcb7aee8cb8\"" Mar 17 17:46:21.436583 containerd[1511]: time="2025-03-17T17:46:21.436517164Z" level=info 
msg="CreateContainer within sandbox \"306d1853db511bba4ae246b531068b1d2a760385e3980eb0d655375038da839e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"29d1b10342aa35f114f4ce406a8a415cfcc2a075971c234885e3cc2c504120fe\"" Mar 17 17:46:21.438041 containerd[1511]: time="2025-03-17T17:46:21.437972660Z" level=info msg="StartContainer for \"29d1b10342aa35f114f4ce406a8a415cfcc2a075971c234885e3cc2c504120fe\"" Mar 17 17:46:21.451664 containerd[1511]: time="2025-03-17T17:46:21.451510286Z" level=info msg="CreateContainer within sandbox \"2b97c5e2ea60cb355cbe569f89fa71151618affe47d3413203e2ff5dd0d4ced8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"47acd5e04ff22c34b33bf3746b53ef63cec70d5a407c3d5522b4298a122d3b32\"" Mar 17 17:46:21.452182 containerd[1511]: time="2025-03-17T17:46:21.452147141Z" level=info msg="StartContainer for \"47acd5e04ff22c34b33bf3746b53ef63cec70d5a407c3d5522b4298a122d3b32\"" Mar 17 17:46:21.457924 systemd[1]: Started cri-containerd-6f7a77104ee31aaa62ee7fe236b551acff55d3d822516db6ac557dcb7aee8cb8.scope - libcontainer container 6f7a77104ee31aaa62ee7fe236b551acff55d3d822516db6ac557dcb7aee8cb8. Mar 17 17:46:21.501048 systemd[1]: Started cri-containerd-29d1b10342aa35f114f4ce406a8a415cfcc2a075971c234885e3cc2c504120fe.scope - libcontainer container 29d1b10342aa35f114f4ce406a8a415cfcc2a075971c234885e3cc2c504120fe. Mar 17 17:46:21.516486 systemd[1]: Started cri-containerd-47acd5e04ff22c34b33bf3746b53ef63cec70d5a407c3d5522b4298a122d3b32.scope - libcontainer container 47acd5e04ff22c34b33bf3746b53ef63cec70d5a407c3d5522b4298a122d3b32. Mar 17 17:46:21.530455 containerd[1511]: time="2025-03-17T17:46:21.530391487Z" level=info msg="StartContainer for \"6f7a77104ee31aaa62ee7fe236b551acff55d3d822516db6ac557dcb7aee8cb8\" returns successfully" Mar 17 17:46:21.586864 containerd[1511]: time="2025-03-17T17:46:21.586792206Z" level=info msg="StartContainer for \"29d1b10342aa35f114f4ce406a8a415cfcc2a075971c234885e3cc2c504120fe\" returns successfully" Mar 17 17:46:21.603792 containerd[1511]: time="2025-03-17T17:46:21.602620713Z" level=info msg="StartContainer for \"47acd5e04ff22c34b33bf3746b53ef63cec70d5a407c3d5522b4298a122d3b32\" returns successfully" Mar 17 17:46:21.977820 kubelet[2266]: I0317 17:46:21.977766 2266 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 17:46:22.184753 kubelet[2266]: E0317 17:46:22.184721 2266 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 17:46:22.188433 kubelet[2266]: E0317 17:46:22.188336 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:22.191819 kubelet[2266]: E0317 17:46:22.191556 2266 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 17:46:22.191819 kubelet[2266]: E0317 17:46:22.191759 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:22.194856 kubelet[2266]: E0317 17:46:22.194815 2266 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 17:46:22.195035 kubelet[2266]: E0317 
17:46:22.195008 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:22.657855 update_engine[1497]: I20250317 17:46:22.657756 1497 update_attempter.cc:509] Updating boot flags... Mar 17 17:46:22.758738 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2559) Mar 17 17:46:22.843736 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2557) Mar 17 17:46:23.198217 kubelet[2266]: E0317 17:46:23.198180 2266 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 17:46:23.198867 kubelet[2266]: E0317 17:46:23.198306 2266 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 17:46:23.198867 kubelet[2266]: E0317 17:46:23.198374 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:23.198867 kubelet[2266]: E0317 17:46:23.198502 2266 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:23.569267 kubelet[2266]: I0317 17:46:23.569087 2266 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Mar 17 17:46:23.569267 kubelet[2266]: E0317 17:46:23.569133 2266 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 17 17:46:23.626535 kubelet[2266]: I0317 17:46:23.626477 2266 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 17 17:46:23.686223 kubelet[2266]: E0317 17:46:23.686151 2266 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 17 17:46:23.686223 kubelet[2266]: I0317 17:46:23.686203 2266 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 17 17:46:23.688075 kubelet[2266]: E0317 17:46:23.688045 2266 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 17 17:46:23.688075 kubelet[2266]: I0317 17:46:23.688069 2266 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 17 17:46:23.689839 kubelet[2266]: E0317 17:46:23.689784 2266 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 17 17:46:24.115870 kubelet[2266]: I0317 17:46:24.115747 2266 apiserver.go:52] "Watching apiserver" Mar 17 17:46:24.125262 kubelet[2266]: I0317 17:46:24.125197 2266 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:46:26.452356 systemd[1]: Reload requested from client PID 2567 ('systemctl') (unit session-7.scope)... 
Mar 17 17:46:26.452372 systemd[1]: Reloading... Mar 17 17:46:26.600829 zram_generator::config[2611]: No configuration found. Mar 17 17:46:26.768439 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:46:26.913736 systemd[1]: Reloading finished in 460 ms. Mar 17 17:46:26.936519 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:46:26.957004 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:46:26.957494 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:46:26.957583 systemd[1]: kubelet.service: Consumed 1.367s CPU time, 128.1M memory peak. Mar 17 17:46:26.966337 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:46:27.175789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:46:27.187264 (kubelet)[2656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:46:27.232326 kubelet[2656]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:46:27.232326 kubelet[2656]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 17 17:46:27.232326 kubelet[2656]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:46:27.232922 kubelet[2656]: I0317 17:46:27.232391 2656 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:46:27.240644 kubelet[2656]: I0317 17:46:27.240590 2656 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 17 17:46:27.240644 kubelet[2656]: I0317 17:46:27.240627 2656 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:46:27.240957 kubelet[2656]: I0317 17:46:27.240932 2656 server.go:954] "Client rotation is on, will bootstrap in background" Mar 17 17:46:27.242220 kubelet[2656]: I0317 17:46:27.242190 2656 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 17:46:27.244188 kubelet[2656]: I0317 17:46:27.244146 2656 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:46:27.247099 kubelet[2656]: E0317 17:46:27.247027 2656 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:46:27.247099 kubelet[2656]: I0317 17:46:27.247094 2656 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 17:46:27.253083 kubelet[2656]: I0317 17:46:27.253033 2656 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:46:27.253895 kubelet[2656]: I0317 17:46:27.253836 2656 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:46:27.254117 kubelet[2656]: I0317 17:46:27.253889 2656 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 17:46:27.254203 kubelet[2656]: I0317 17:46:27.254126 2656 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:46:27.254203 kubelet[2656]: I0317 17:46:27.254138 2656 container_manager_linux.go:304] "Creating device plugin manager" Mar 17 17:46:27.254203 kubelet[2656]: I0317 17:46:27.254198 2656 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:46:27.254426 kubelet[2656]: I0317 17:46:27.254412 2656 kubelet.go:446] "Attempting to sync node with API server" Mar 17 17:46:27.254471 kubelet[2656]: I0317 17:46:27.254432 2656 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:46:27.254471 kubelet[2656]: I0317 17:46:27.254455 2656 kubelet.go:352] "Adding apiserver pod source" Mar 17 17:46:27.254471 kubelet[2656]: I0317 17:46:27.254468 2656 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:46:27.257731 kubelet[2656]: I0317 17:46:27.255154 2656 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:46:27.257731 kubelet[2656]: I0317 17:46:27.255559 2656 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:46:27.257731 kubelet[2656]: I0317 17:46:27.256002 2656 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 17 17:46:27.257731 kubelet[2656]: I0317 17:46:27.256023 2656 server.go:1287] "Started kubelet" Mar 17 17:46:27.257731 kubelet[2656]: I0317 17:46:27.257400 2656 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:46:27.258143 kubelet[2656]: I0317 
17:46:27.258103 2656 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:46:27.258239 kubelet[2656]: I0317 17:46:27.258210 2656 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:46:27.258363 kubelet[2656]: I0317 17:46:27.258340 2656 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:46:27.261710 kubelet[2656]: I0317 17:46:27.258940 2656 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:46:27.261889 kubelet[2656]: E0317 17:46:27.260598 2656 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:46:27.265719 kubelet[2656]: E0317 17:46:27.260659 2656 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:46:27.265719 kubelet[2656]: I0317 17:46:27.260686 2656 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 17 17:46:27.265719 kubelet[2656]: I0317 17:46:27.260707 2656 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:46:27.265719 kubelet[2656]: I0317 17:46:27.261566 2656 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:46:27.265719 kubelet[2656]: I0317 17:46:27.262383 2656 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:46:27.265719 kubelet[2656]: I0317 17:46:27.262404 2656 server.go:490] "Adding debug handlers to kubelet server" Mar 17 17:46:27.265998 kubelet[2656]: I0317 17:46:27.265980 2656 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:46:27.266484 kubelet[2656]: I0317 17:46:27.266452 2656 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:46:27.273009 kubelet[2656]: I0317 17:46:27.272358 2656 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:46:27.276654 kubelet[2656]: I0317 17:46:27.276603 2656 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:46:27.276654 kubelet[2656]: I0317 17:46:27.276653 2656 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 17 17:46:27.276812 kubelet[2656]: I0317 17:46:27.276681 2656 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 17 17:46:27.276812 kubelet[2656]: I0317 17:46:27.276691 2656 kubelet.go:2388] "Starting kubelet main sync loop" Mar 17 17:46:27.276812 kubelet[2656]: E0317 17:46:27.276801 2656 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:46:27.308053 kubelet[2656]: I0317 17:46:27.308014 2656 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 17 17:46:27.308053 kubelet[2656]: I0317 17:46:27.308033 2656 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 17 17:46:27.308053 kubelet[2656]: I0317 17:46:27.308054 2656 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:46:27.308261 kubelet[2656]: I0317 17:46:27.308242 2656 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 17:46:27.308288 kubelet[2656]: I0317 17:46:27.308258 2656 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 17:46:27.308288 kubelet[2656]: I0317 17:46:27.308278 2656 policy_none.go:49] "None policy: Start" Mar 17 17:46:27.308331 kubelet[2656]: I0317 17:46:27.308289 2656 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 17 17:46:27.308331 kubelet[2656]: I0317 17:46:27.308301 2656 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:46:27.308420 kubelet[2656]: I0317 17:46:27.308399 2656 state_mem.go:75] "Updated machine memory state" Mar 17 17:46:27.312719 kubelet[2656]: I0317 17:46:27.312563 2656 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:46:27.312841 kubelet[2656]: I0317 17:46:27.312828 2656 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:46:27.312926 kubelet[2656]: I0317 17:46:27.312891 2656 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:46:27.314259 kubelet[2656]: E0317 17:46:27.313751 2656 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 17 17:46:27.314645 kubelet[2656]: I0317 17:46:27.314597 2656 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:46:27.378259 kubelet[2656]: I0317 17:46:27.378213 2656 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 17 17:46:27.378441 kubelet[2656]: I0317 17:46:27.378390 2656 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 17 17:46:27.381195 kubelet[2656]: I0317 17:46:27.378570 2656 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 17 17:46:27.400398 sudo[2688]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 17:46:27.400791 sudo[2688]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 17 17:46:27.421739 kubelet[2656]: I0317 17:46:27.420023 2656 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 17:46:27.429321 kubelet[2656]: I0317 17:46:27.428387 2656 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Mar 17 17:46:27.429321 kubelet[2656]: I0317 17:46:27.428473 2656 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Mar 17 17:46:27.466985 kubelet[2656]: I0317 17:46:27.466918 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/88878094edf6e0e5e19e2253a422994b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"88878094edf6e0e5e19e2253a422994b\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:46:27.466985 kubelet[2656]: I0317 17:46:27.466973 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:46:27.467185 kubelet[2656]: I0317 17:46:27.467006 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:46:27.467185 kubelet[2656]: I0317 17:46:27.467030 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3700e556aa2777679a324159272023f1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3700e556aa2777679a324159272023f1\") " pod="kube-system/kube-scheduler-localhost" Mar 17 17:46:27.467185 kubelet[2656]: I0317 17:46:27.467052 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/88878094edf6e0e5e19e2253a422994b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"88878094edf6e0e5e19e2253a422994b\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:46:27.467185 kubelet[2656]: I0317 17:46:27.467071 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/88878094edf6e0e5e19e2253a422994b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"88878094edf6e0e5e19e2253a422994b\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:46:27.467185 kubelet[2656]: I0317 17:46:27.467103 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:46:27.467325 kubelet[2656]: I0317 17:46:27.467123 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:46:27.467325 kubelet[2656]: I0317 17:46:27.467141 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:46:27.684188 kubelet[2656]: E0317 17:46:27.684039 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:27.686152 kubelet[2656]: E0317 17:46:27.686075 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:27.686297 kubelet[2656]: E0317 17:46:27.686181 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:27.929232 sudo[2688]: pam_unix(sudo:session): session closed for user root Mar 17 17:46:28.255623 kubelet[2656]: I0317 17:46:28.255546 2656 apiserver.go:52] "Watching apiserver" Mar 17 17:46:28.263270 kubelet[2656]: I0317 17:46:28.263211 2656 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:46:28.288966 kubelet[2656]: I0317 17:46:28.288923 2656 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 17 17:46:28.289184 kubelet[2656]: E0317 17:46:28.289067 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:28.289532 kubelet[2656]: I0317 17:46:28.289512 2656 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 17 17:46:28.304213 kubelet[2656]: E0317 17:46:28.304133 2656 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 17 17:46:28.304397 kubelet[2656]: E0317 17:46:28.304153 2656 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 17 17:46:28.304565 kubelet[2656]: E0317 17:46:28.304540 2656 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:28.304741 kubelet[2656]: E0317 17:46:28.304722 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:28.331037 kubelet[2656]: I0317 17:46:28.330944 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.33091183 podStartE2EDuration="1.33091183s" podCreationTimestamp="2025-03-17 17:46:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:46:28.319735956 +0000 UTC m=+1.124401969" watchObservedRunningTime="2025-03-17 17:46:28.33091183 +0000 UTC m=+1.135577844" Mar 17 17:46:28.331230 kubelet[2656]: I0317 17:46:28.331118 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.3311111279999999 podStartE2EDuration="1.331111128s" podCreationTimestamp="2025-03-17 17:46:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:46:28.330907512 +0000 UTC m=+1.135573525" watchObservedRunningTime="2025-03-17 17:46:28.331111128 +0000 UTC m=+1.135777141" Mar 17 17:46:28.352940 kubelet[2656]: I0317 17:46:28.352795 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.352768899 podStartE2EDuration="1.352768899s" podCreationTimestamp="2025-03-17 17:46:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:46:28.342298938 +0000 UTC m=+1.146964951" watchObservedRunningTime="2025-03-17 17:46:28.352768899 +0000 UTC m=+1.157434912" Mar 17 17:46:29.290136 kubelet[2656]: E0317 17:46:29.290099 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:29.290775 kubelet[2656]: E0317 17:46:29.290437 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:29.561007 sudo[1694]: pam_unix(sudo:session): session closed for user root Mar 17 17:46:29.562926 sshd[1693]: Connection closed by 10.0.0.1 port 59076 Mar 17 17:46:29.564416 sshd-session[1690]: pam_unix(sshd:session): session closed for user core Mar 17 17:46:29.570076 systemd[1]: sshd@6-10.0.0.15:22-10.0.0.1:59076.service: Deactivated successfully. Mar 17 17:46:29.572659 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:46:29.572918 systemd[1]: session-7.scope: Consumed 5.911s CPU time, 259.2M memory peak. Mar 17 17:46:29.574199 systemd-logind[1491]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:46:29.575258 systemd-logind[1491]: Removed session 7. 
Mar 17 17:46:29.772322 kubelet[2656]: E0317 17:46:29.772285 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:30.291711 kubelet[2656]: E0317 17:46:30.291671 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:31.548216 kubelet[2656]: I0317 17:46:31.548175 2656 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 17:46:31.548742 containerd[1511]: time="2025-03-17T17:46:31.548526369Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 17:46:31.549117 kubelet[2656]: I0317 17:46:31.548800 2656 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 17:46:32.519843 systemd[1]: Created slice kubepods-besteffort-podc7fe0143_f236_48b0_a7c5_b2d3ee0f5fbe.slice - libcontainer container kubepods-besteffort-podc7fe0143_f236_48b0_a7c5_b2d3ee0f5fbe.slice. Mar 17 17:46:32.529028 kubelet[2656]: W0317 17:46:32.528952 2656 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Mar 17 17:46:32.529028 kubelet[2656]: E0317 17:46:32.529032 2656 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Mar 17 17:46:32.529975 kubelet[2656]: W0317 17:46:32.529929 2656 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Mar 17 17:46:32.530198 kubelet[2656]: E0317 17:46:32.530154 2656 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Mar 17 17:46:32.540049 systemd[1]: Created slice kubepods-burstable-pod5da62c3a_9819_43c6_8ea8_afe9d4a3bca9.slice - libcontainer container kubepods-burstable-pod5da62c3a_9819_43c6_8ea8_afe9d4a3bca9.slice. Mar 17 17:46:32.591600 systemd[1]: Created slice kubepods-besteffort-pod40898f7e_7a49_4b36_acd7_e0548b3d94d2.slice - libcontainer container kubepods-besteffort-pod40898f7e_7a49_4b36_acd7_e0548b3d94d2.slice. 
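The kubepods-*-pod<uid>.slice names above are built mechanically from the pod's QoS class and UID. In systemd slice units a dash implies hierarchy (kubepods.slice -> kubepods-besteffort.slice), so the dashes inside the UID itself are escaped to underscores. A sketch of the mapping, reproducing the kube-proxy pod's slice name from this log:

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName builds the systemd slice unit name for a pod. Dashes in
    // the UID must become underscores so they are not read as slice
    // hierarchy separators.
    func sliceName(qosClass, podUID string) string {
        escaped := strings.ReplaceAll(podUID, "-", "_")
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
    }

    func main() {
        fmt.Println(sliceName("besteffort", "c7fe0143-f236-48b0-a7c5-b2d3ee0f5fbe"))
        // kubepods-besteffort-podc7fe0143_f236_48b0_a7c5_b2d3ee0f5fbe.slice
    }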
Mar 17 17:46:32.601916 kubelet[2656]: I0317 17:46:32.601866 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-cni-path\") pod \"cilium-d7bwh\" (UID: \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\") " pod="kube-system/cilium-d7bwh" Mar 17 17:46:32.601916 kubelet[2656]: I0317 17:46:32.601914 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-xtables-lock\") pod \"cilium-d7bwh\" (UID: \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\") " pod="kube-system/cilium-d7bwh" Mar 17 17:46:32.602479 kubelet[2656]: I0317 17:46:32.601936 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-cilium-cgroup\") pod \"cilium-d7bwh\" (UID: \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\") " pod="kube-system/cilium-d7bwh" Mar 17 17:46:32.602479 kubelet[2656]: I0317 17:46:32.601952 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-cilium-config-path\") pod \"cilium-d7bwh\" (UID: \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\") " pod="kube-system/cilium-d7bwh" Mar 17 17:46:32.602479 kubelet[2656]: I0317 17:46:32.601968 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwcld\" (UniqueName: \"kubernetes.io/projected/40898f7e-7a49-4b36-acd7-e0548b3d94d2-kube-api-access-gwcld\") pod \"cilium-operator-6c4d7847fc-xhkss\" (UID: \"40898f7e-7a49-4b36-acd7-e0548b3d94d2\") " pod="kube-system/cilium-operator-6c4d7847fc-xhkss" Mar 17 17:46:32.602479 kubelet[2656]: I0317 17:46:32.601986 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7fe0143-f236-48b0-a7c5-b2d3ee0f5fbe-xtables-lock\") pod \"kube-proxy-tq75v\" (UID: \"c7fe0143-f236-48b0-a7c5-b2d3ee0f5fbe\") " pod="kube-system/kube-proxy-tq75v" Mar 17 17:46:32.602479 kubelet[2656]: I0317 17:46:32.602011 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7fe0143-f236-48b0-a7c5-b2d3ee0f5fbe-lib-modules\") pod \"kube-proxy-tq75v\" (UID: \"c7fe0143-f236-48b0-a7c5-b2d3ee0f5fbe\") " pod="kube-system/kube-proxy-tq75v" Mar 17 17:46:32.602639 kubelet[2656]: I0317 17:46:32.602039 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-bpf-maps\") pod \"cilium-d7bwh\" (UID: \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\") " pod="kube-system/cilium-d7bwh" Mar 17 17:46:32.602639 kubelet[2656]: I0317 17:46:32.602065 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/40898f7e-7a49-4b36-acd7-e0548b3d94d2-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-xhkss\" (UID: \"40898f7e-7a49-4b36-acd7-e0548b3d94d2\") " pod="kube-system/cilium-operator-6c4d7847fc-xhkss" Mar 17 17:46:32.602639 kubelet[2656]: I0317 17:46:32.602084 2656 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-cilium-run\") pod \"cilium-d7bwh\" (UID: \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\") " pod="kube-system/cilium-d7bwh" Mar 17 17:46:32.602639 kubelet[2656]: I0317 17:46:32.602104 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-hostproc\") pod \"cilium-d7bwh\" (UID: \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\") " pod="kube-system/cilium-d7bwh" Mar 17 17:46:32.602639 kubelet[2656]: I0317 17:46:32.602123 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-lib-modules\") pod \"cilium-d7bwh\" (UID: \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\") " pod="kube-system/cilium-d7bwh" Mar 17 17:46:32.602639 kubelet[2656]: I0317 17:46:32.602144 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-hubble-tls\") pod \"cilium-d7bwh\" (UID: \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\") " pod="kube-system/cilium-d7bwh" Mar 17 17:46:32.602859 kubelet[2656]: I0317 17:46:32.602179 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c7fe0143-f236-48b0-a7c5-b2d3ee0f5fbe-kube-proxy\") pod \"kube-proxy-tq75v\" (UID: \"c7fe0143-f236-48b0-a7c5-b2d3ee0f5fbe\") " pod="kube-system/kube-proxy-tq75v" Mar 17 17:46:32.602859 kubelet[2656]: I0317 17:46:32.602220 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5wbk\" (UniqueName: \"kubernetes.io/projected/c7fe0143-f236-48b0-a7c5-b2d3ee0f5fbe-kube-api-access-h5wbk\") pod \"kube-proxy-tq75v\" (UID: \"c7fe0143-f236-48b0-a7c5-b2d3ee0f5fbe\") " pod="kube-system/kube-proxy-tq75v" Mar 17 17:46:32.602859 kubelet[2656]: I0317 17:46:32.602242 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-host-proc-sys-net\") pod \"cilium-d7bwh\" (UID: \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\") " pod="kube-system/cilium-d7bwh" Mar 17 17:46:32.602859 kubelet[2656]: I0317 17:46:32.602262 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-host-proc-sys-kernel\") pod \"cilium-d7bwh\" (UID: \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\") " pod="kube-system/cilium-d7bwh" Mar 17 17:46:32.602859 kubelet[2656]: I0317 17:46:32.602287 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq5fx\" (UniqueName: \"kubernetes.io/projected/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-kube-api-access-nq5fx\") pod \"cilium-d7bwh\" (UID: \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\") " pod="kube-system/cilium-d7bwh" Mar 17 17:46:32.603008 kubelet[2656]: I0317 17:46:32.602309 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-etc-cni-netd\") pod \"cilium-d7bwh\" (UID: \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\") " pod="kube-system/cilium-d7bwh" Mar 17 17:46:32.603008 kubelet[2656]: I0317 17:46:32.602329 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-clustermesh-secrets\") pod \"cilium-d7bwh\" (UID: \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\") " pod="kube-system/cilium-d7bwh" Mar 17 17:46:32.832329 kubelet[2656]: E0317 17:46:32.832279 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:32.833112 containerd[1511]: time="2025-03-17T17:46:32.833064052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tq75v,Uid:c7fe0143-f236-48b0-a7c5-b2d3ee0f5fbe,Namespace:kube-system,Attempt:0,}" Mar 17 17:46:32.862563 containerd[1511]: time="2025-03-17T17:46:32.862442614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:46:32.862563 containerd[1511]: time="2025-03-17T17:46:32.862512154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:46:32.862563 containerd[1511]: time="2025-03-17T17:46:32.862527725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:46:32.863049 containerd[1511]: time="2025-03-17T17:46:32.862943775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:46:32.887982 systemd[1]: Started cri-containerd-96720b2e6f031b310d2ad8e1c9466c008d9930fb47c4b1fa4c53435748c5be54.scope - libcontainer container 96720b2e6f031b310d2ad8e1c9466c008d9930fb47c4b1fa4c53435748c5be54. Mar 17 17:46:32.897224 kubelet[2656]: E0317 17:46:32.896801 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:32.897553 containerd[1511]: time="2025-03-17T17:46:32.897504264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xhkss,Uid:40898f7e-7a49-4b36-acd7-e0548b3d94d2,Namespace:kube-system,Attempt:0,}" Mar 17 17:46:32.922784 containerd[1511]: time="2025-03-17T17:46:32.922689365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tq75v,Uid:c7fe0143-f236-48b0-a7c5-b2d3ee0f5fbe,Namespace:kube-system,Attempt:0,} returns sandbox id \"96720b2e6f031b310d2ad8e1c9466c008d9930fb47c4b1fa4c53435748c5be54\"" Mar 17 17:46:32.923902 kubelet[2656]: E0317 17:46:32.923836 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:32.927538 containerd[1511]: time="2025-03-17T17:46:32.927256231Z" level=info msg="CreateContainer within sandbox \"96720b2e6f031b310d2ad8e1c9466c008d9930fb47c4b1fa4c53435748c5be54\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:46:32.938629 containerd[1511]: time="2025-03-17T17:46:32.938483596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:46:32.938629 containerd[1511]: time="2025-03-17T17:46:32.938569689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:46:32.938629 containerd[1511]: time="2025-03-17T17:46:32.938600481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:46:32.939816 containerd[1511]: time="2025-03-17T17:46:32.939680842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:46:32.960749 containerd[1511]: time="2025-03-17T17:46:32.959192554Z" level=info msg="CreateContainer within sandbox \"96720b2e6f031b310d2ad8e1c9466c008d9930fb47c4b1fa4c53435748c5be54\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3b1efc9b14634d30aae5e2be00c36c4591befb466ce927cf8440bdae08139d85\"" Mar 17 17:46:32.961845 containerd[1511]: time="2025-03-17T17:46:32.961817070Z" level=info msg="StartContainer for \"3b1efc9b14634d30aae5e2be00c36c4591befb466ce927cf8440bdae08139d85\"" Mar 17 17:46:32.976787 systemd[1]: Started cri-containerd-e1fc2ca70bccf3d17ae68cb44db5eb5c81d306e47823a579dac6f1c96fc5f731.scope - libcontainer container e1fc2ca70bccf3d17ae68cb44db5eb5c81d306e47823a579dac6f1c96fc5f731. Mar 17 17:46:33.012976 systemd[1]: Started cri-containerd-3b1efc9b14634d30aae5e2be00c36c4591befb466ce927cf8440bdae08139d85.scope - libcontainer container 3b1efc9b14634d30aae5e2be00c36c4591befb466ce927cf8440bdae08139d85. Mar 17 17:46:33.040132 containerd[1511]: time="2025-03-17T17:46:33.040058987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xhkss,Uid:40898f7e-7a49-4b36-acd7-e0548b3d94d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1fc2ca70bccf3d17ae68cb44db5eb5c81d306e47823a579dac6f1c96fc5f731\"" Mar 17 17:46:33.041118 kubelet[2656]: E0317 17:46:33.041090 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:33.044263 containerd[1511]: time="2025-03-17T17:46:33.044113926Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 17:46:33.058671 containerd[1511]: time="2025-03-17T17:46:33.058606898Z" level=info msg="StartContainer for \"3b1efc9b14634d30aae5e2be00c36c4591befb466ce927cf8440bdae08139d85\" returns successfully" Mar 17 17:46:33.299941 kubelet[2656]: E0317 17:46:33.299779 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:33.371817 kubelet[2656]: I0317 17:46:33.371743 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tq75v" podStartSLOduration=1.371716726 podStartE2EDuration="1.371716726s" podCreationTimestamp="2025-03-17 17:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:46:33.371130556 +0000 UTC m=+6.175796589" watchObservedRunningTime="2025-03-17 17:46:33.371716726 +0000 UTC m=+6.176382739" Mar 17 17:46:33.746208 kubelet[2656]: E0317 17:46:33.746133 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:33.746895 containerd[1511]: time="2025-03-17T17:46:33.746831024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d7bwh,Uid:5da62c3a-9819-43c6-8ea8-afe9d4a3bca9,Namespace:kube-system,Attempt:0,}" Mar 17 17:46:34.166622 containerd[1511]: time="2025-03-17T17:46:34.165514409Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:46:34.166622 containerd[1511]: time="2025-03-17T17:46:34.166349315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:46:34.166622 containerd[1511]: time="2025-03-17T17:46:34.166368333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:46:34.166622 containerd[1511]: time="2025-03-17T17:46:34.166487902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:46:34.197116 systemd[1]: Started cri-containerd-a1fbed3f59b9f2b1b799fbc5ed7bcc4b0058b9a1a9a894a72c8041570d7493ad.scope - libcontainer container a1fbed3f59b9f2b1b799fbc5ed7bcc4b0058b9a1a9a894a72c8041570d7493ad. Mar 17 17:46:34.228896 containerd[1511]: time="2025-03-17T17:46:34.228837341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d7bwh,Uid:5da62c3a-9819-43c6-8ea8-afe9d4a3bca9,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1fbed3f59b9f2b1b799fbc5ed7bcc4b0058b9a1a9a894a72c8041570d7493ad\"" Mar 17 17:46:34.229808 kubelet[2656]: E0317 17:46:34.229780 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:34.553716 kubelet[2656]: E0317 17:46:34.553524 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:34.950048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3244856866.mount: Deactivated successfully. 
Mar 17 17:46:35.305001 kubelet[2656]: E0317 17:46:35.304802 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:36.307409 kubelet[2656]: E0317 17:46:36.307335 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:36.641652 kubelet[2656]: E0317 17:46:36.641612 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:37.308941 kubelet[2656]: E0317 17:46:37.308892 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:37.994548 containerd[1511]: time="2025-03-17T17:46:37.994392730Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:37.995676 containerd[1511]: time="2025-03-17T17:46:37.995601168Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 17 17:46:37.998973 containerd[1511]: time="2025-03-17T17:46:37.998651596Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:38.000390 containerd[1511]: time="2025-03-17T17:46:38.000283417Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.956098319s" Mar 17 17:46:38.000390 containerd[1511]: time="2025-03-17T17:46:38.000359288Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 17 17:46:38.001815 containerd[1511]: time="2025-03-17T17:46:38.001747995Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 17:46:38.003398 containerd[1511]: time="2025-03-17T17:46:38.003362955Z" level=info msg="CreateContainer within sandbox \"e1fc2ca70bccf3d17ae68cb44db5eb5c81d306e47823a579dac6f1c96fc5f731\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 17:46:38.035197 containerd[1511]: time="2025-03-17T17:46:38.035077414Z" level=info msg="CreateContainer within sandbox \"e1fc2ca70bccf3d17ae68cb44db5eb5c81d306e47823a579dac6f1c96fc5f731\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3cb0f0bbef151bc2302c0098aa2c3317ebcb876e1533353dfd54e24bdbfda1cd\"" Mar 17 17:46:38.037175 containerd[1511]: time="2025-03-17T17:46:38.036743657Z" level=info msg="StartContainer for \"3cb0f0bbef151bc2302c0098aa2c3317ebcb876e1533353dfd54e24bdbfda1cd\"" 
Mar 17 17:46:38.079883 systemd[1]: Started cri-containerd-3cb0f0bbef151bc2302c0098aa2c3317ebcb876e1533353dfd54e24bdbfda1cd.scope - libcontainer container 3cb0f0bbef151bc2302c0098aa2c3317ebcb876e1533353dfd54e24bdbfda1cd. Mar 17 17:46:39.056968 containerd[1511]: time="2025-03-17T17:46:39.056807542Z" level=info msg="StartContainer for \"3cb0f0bbef151bc2302c0098aa2c3317ebcb876e1533353dfd54e24bdbfda1cd\" returns successfully" Mar 17 17:46:39.060733 kubelet[2656]: E0317 17:46:39.060683 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:39.779985 kubelet[2656]: E0317 17:46:39.779942 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:40.064313 kubelet[2656]: E0317 17:46:40.064092 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:40.319162 kubelet[2656]: I0317 17:46:40.318449 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-xhkss" podStartSLOduration=3.359273189 podStartE2EDuration="8.318416281s" podCreationTimestamp="2025-03-17 17:46:32 +0000 UTC" firstStartedPulling="2025-03-17 17:46:33.042257851 +0000 UTC m=+5.846923864" lastFinishedPulling="2025-03-17 17:46:38.001400923 +0000 UTC m=+10.806066956" observedRunningTime="2025-03-17 17:46:40.318157789 +0000 UTC m=+13.122823822" watchObservedRunningTime="2025-03-17 17:46:40.318416281 +0000 UTC m=+13.123082294" Mar 17 17:46:41.065772 kubelet[2656]: E0317 17:46:41.065736 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:48.795716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3974265489.mount: Deactivated successfully. 
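The cilium-operator startup numbers above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration matches that E2E figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling) to within rounding. A sketch re-deriving them from the logged timestamps:

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-03-17 17:46:32 +0000 UTC")
        running := mustParse("2025-03-17 17:46:40.318416281 +0000 UTC")
        pullStart := mustParse("2025-03-17 17:46:33.042257851 +0000 UTC")
        pullEnd := mustParse("2025-03-17 17:46:38.001400923 +0000 UTC")

        e2e := running.Sub(created)    // 8.318416281s, the logged podStartE2EDuration
        pull := pullEnd.Sub(pullStart) // 4.959143072s spent pulling the operator image
        fmt.Println("E2E:", e2e, " E2E minus pull:", e2e-pull)
        // E2E minus pull ~= 3.359273209s, vs logged SLO duration 3.359273189s
    }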
Mar 17 17:46:53.379625 containerd[1511]: time="2025-03-17T17:46:53.379529524Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:53.383024 containerd[1511]: time="2025-03-17T17:46:53.382888042Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 17 17:46:53.385629 containerd[1511]: time="2025-03-17T17:46:53.385526105Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:46:53.387192 containerd[1511]: time="2025-03-17T17:46:53.387112538Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 15.385308222s" Mar 17 17:46:53.387192 containerd[1511]: time="2025-03-17T17:46:53.387177565Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 17 17:46:53.389866 containerd[1511]: time="2025-03-17T17:46:53.389805538Z" level=info msg="CreateContainer within sandbox \"a1fbed3f59b9f2b1b799fbc5ed7bcc4b0058b9a1a9a894a72c8041570d7493ad\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 17:46:54.610930 containerd[1511]: time="2025-03-17T17:46:54.610843791Z" level=info msg="CreateContainer within sandbox \"a1fbed3f59b9f2b1b799fbc5ed7bcc4b0058b9a1a9a894a72c8041570d7493ad\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"863c9f0618b14d8a5c02d42a51ec80d3cbd2bd9d1505b1e4f8af7ffa73945caf\"" Mar 17 17:46:54.611638 containerd[1511]: time="2025-03-17T17:46:54.611527343Z" level=info msg="StartContainer for \"863c9f0618b14d8a5c02d42a51ec80d3cbd2bd9d1505b1e4f8af7ffa73945caf\"" Mar 17 17:46:54.649047 systemd[1]: Started cri-containerd-863c9f0618b14d8a5c02d42a51ec80d3cbd2bd9d1505b1e4f8af7ffa73945caf.scope - libcontainer container 863c9f0618b14d8a5c02d42a51ec80d3cbd2bd9d1505b1e4f8af7ffa73945caf. Mar 17 17:46:54.764418 systemd[1]: cri-containerd-863c9f0618b14d8a5c02d42a51ec80d3cbd2bd9d1505b1e4f8af7ffa73945caf.scope: Deactivated successfully. Mar 17 17:46:54.932813 containerd[1511]: time="2025-03-17T17:46:54.932461833Z" level=info msg="StartContainer for \"863c9f0618b14d8a5c02d42a51ec80d3cbd2bd9d1505b1e4f8af7ffa73945caf\" returns successfully" Mar 17 17:46:54.956775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-863c9f0618b14d8a5c02d42a51ec80d3cbd2bd9d1505b1e4f8af7ffa73945caf-rootfs.mount: Deactivated successfully. 
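For scale, the two image pulls recorded above work out to quite different throughputs: the operator image (18904197 bytes in 4.956098319s) to roughly 3.6 MiB/s, and the main cilium image (166730503 bytes in 15.385308222s) to roughly 10.3 MiB/s. A back-of-the-envelope check using the logged figures:

    package main

    import "fmt"

    // mibPerSec converts a byte count and elapsed seconds into MiB/s.
    func mibPerSec(bytes, seconds float64) float64 {
        return bytes / seconds / (1 << 20)
    }

    func main() {
        fmt.Printf("operator-generic: %.1f MiB/s\n", mibPerSec(18904197, 4.956098319))   // ~3.6
        fmt.Printf("cilium:           %.1f MiB/s\n", mibPerSec(166730503, 15.385308222)) // ~10.3
    }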
Mar 17 17:46:55.126786 kubelet[2656]: E0317 17:46:55.126744 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:55.180112 containerd[1511]: time="2025-03-17T17:46:55.180012998Z" level=info msg="shim disconnected" id=863c9f0618b14d8a5c02d42a51ec80d3cbd2bd9d1505b1e4f8af7ffa73945caf namespace=k8s.io Mar 17 17:46:55.180112 containerd[1511]: time="2025-03-17T17:46:55.180091812Z" level=warning msg="cleaning up after shim disconnected" id=863c9f0618b14d8a5c02d42a51ec80d3cbd2bd9d1505b1e4f8af7ffa73945caf namespace=k8s.io Mar 17 17:46:55.180112 containerd[1511]: time="2025-03-17T17:46:55.180103054Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:46:55.398004 systemd[1]: Started sshd@7-10.0.0.15:22-10.0.0.1:45114.service - OpenSSH per-connection server daemon (10.0.0.1:45114). Mar 17 17:46:55.484172 sshd[3177]: Accepted publickey for core from 10.0.0.1 port 45114 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM Mar 17 17:46:55.486376 sshd-session[3177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:46:55.492094 systemd-logind[1491]: New session 8 of user core. Mar 17 17:46:55.498892 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 17 17:46:55.684989 sshd[3179]: Connection closed by 10.0.0.1 port 45114 Mar 17 17:46:55.685278 sshd-session[3177]: pam_unix(sshd:session): session closed for user core Mar 17 17:46:55.688850 systemd[1]: sshd@7-10.0.0.15:22-10.0.0.1:45114.service: Deactivated successfully. Mar 17 17:46:55.691475 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 17:46:55.693859 systemd-logind[1491]: Session 8 logged out. Waiting for processes to exit. Mar 17 17:46:55.695261 systemd-logind[1491]: Removed session 8. Mar 17 17:46:56.129441 kubelet[2656]: E0317 17:46:56.129396 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:56.131938 containerd[1511]: time="2025-03-17T17:46:56.131889116Z" level=info msg="CreateContainer within sandbox \"a1fbed3f59b9f2b1b799fbc5ed7bcc4b0058b9a1a9a894a72c8041570d7493ad\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 17:46:56.168050 containerd[1511]: time="2025-03-17T17:46:56.167987290Z" level=info msg="CreateContainer within sandbox \"a1fbed3f59b9f2b1b799fbc5ed7bcc4b0058b9a1a9a894a72c8041570d7493ad\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"70189a6b36726a6d67ecba0d61cee43819372c5868ec9c1df364f4a269aad0d9\"" Mar 17 17:46:56.169939 containerd[1511]: time="2025-03-17T17:46:56.169901935Z" level=info msg="StartContainer for \"70189a6b36726a6d67ecba0d61cee43819372c5868ec9c1df364f4a269aad0d9\"" Mar 17 17:46:56.230894 systemd[1]: Started cri-containerd-70189a6b36726a6d67ecba0d61cee43819372c5868ec9c1df364f4a269aad0d9.scope - libcontainer container 70189a6b36726a6d67ecba0d61cee43819372c5868ec9c1df364f4a269aad0d9. Mar 17 17:46:56.260259 containerd[1511]: time="2025-03-17T17:46:56.260209385Z" level=info msg="StartContainer for \"70189a6b36726a6d67ecba0d61cee43819372c5868ec9c1df364f4a269aad0d9\" returns successfully" Mar 17 17:46:56.275237 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:46:56.275565 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
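The "SHA256:fvq/EnOz..." string in the sshd accept lines above is OpenSSH's modern key fingerprint: the unpadded base64 of the SHA-256 digest of the public key's wire-format blob. A sketch of that encoding (the key bytes below are a placeholder, not the key from this log):

    package main

    import (
        "crypto/sha256"
        "encoding/base64"
        "fmt"
    )

    // fingerprint renders an OpenSSH-style SHA256 fingerprint for a
    // wire-format public key blob (the decoded base64 payload of an
    // authorized_keys entry).
    func fingerprint(pubKeyBlob []byte) string {
        sum := sha256.Sum256(pubKeyBlob)
        return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:])
    }

    func main() {
        // Placeholder blob: length-prefixed "ssh-rsa" header only.
        blob := []byte{0x00, 0x00, 0x00, 0x07, 's', 's', 'h', '-', 'r', 's', 'a'}
        fmt.Println(fingerprint(blob))
    }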
Mar 17 17:46:56.275809 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:46:56.285196 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:46:56.287885 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 17:46:56.288548 systemd[1]: cri-containerd-70189a6b36726a6d67ecba0d61cee43819372c5868ec9c1df364f4a269aad0d9.scope: Deactivated successfully. Mar 17 17:46:56.305475 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:46:56.319280 containerd[1511]: time="2025-03-17T17:46:56.319180245Z" level=info msg="shim disconnected" id=70189a6b36726a6d67ecba0d61cee43819372c5868ec9c1df364f4a269aad0d9 namespace=k8s.io Mar 17 17:46:56.319280 containerd[1511]: time="2025-03-17T17:46:56.319250922Z" level=warning msg="cleaning up after shim disconnected" id=70189a6b36726a6d67ecba0d61cee43819372c5868ec9c1df364f4a269aad0d9 namespace=k8s.io Mar 17 17:46:56.319280 containerd[1511]: time="2025-03-17T17:46:56.319264869Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:46:57.135372 kubelet[2656]: E0317 17:46:57.135249 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:57.142232 containerd[1511]: time="2025-03-17T17:46:57.141953089Z" level=info msg="CreateContainer within sandbox \"a1fbed3f59b9f2b1b799fbc5ed7bcc4b0058b9a1a9a894a72c8041570d7493ad\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 17:46:57.159791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70189a6b36726a6d67ecba0d61cee43819372c5868ec9c1df364f4a269aad0d9-rootfs.mount: Deactivated successfully. Mar 17 17:46:57.162718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3617250978.mount: Deactivated successfully. Mar 17 17:46:57.165858 containerd[1511]: time="2025-03-17T17:46:57.165789280Z" level=info msg="CreateContainer within sandbox \"a1fbed3f59b9f2b1b799fbc5ed7bcc4b0058b9a1a9a894a72c8041570d7493ad\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"675e874c34f509a72e3bc05a5cd8ce034e3f2c6d115e81b5936044b3e6db3f37\"" Mar 17 17:46:57.166531 containerd[1511]: time="2025-03-17T17:46:57.166461636Z" level=info msg="StartContainer for \"675e874c34f509a72e3bc05a5cd8ce034e3f2c6d115e81b5936044b3e6db3f37\"" Mar 17 17:46:57.204908 systemd[1]: Started cri-containerd-675e874c34f509a72e3bc05a5cd8ce034e3f2c6d115e81b5936044b3e6db3f37.scope - libcontainer container 675e874c34f509a72e3bc05a5cd8ce034e3f2c6d115e81b5936044b3e6db3f37. Mar 17 17:46:57.241290 systemd[1]: cri-containerd-675e874c34f509a72e3bc05a5cd8ce034e3f2c6d115e81b5936044b3e6db3f37.scope: Deactivated successfully. Mar 17 17:46:57.270722 containerd[1511]: time="2025-03-17T17:46:57.270664471Z" level=info msg="StartContainer for \"675e874c34f509a72e3bc05a5cd8ce034e3f2c6d115e81b5936044b3e6db3f37\" returns successfully" Mar 17 17:46:57.293229 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-675e874c34f509a72e3bc05a5cd8ce034e3f2c6d115e81b5936044b3e6db3f37-rootfs.mount: Deactivated successfully. 
Mar 17 17:46:57.298399 containerd[1511]: time="2025-03-17T17:46:57.298323574Z" level=info msg="shim disconnected" id=675e874c34f509a72e3bc05a5cd8ce034e3f2c6d115e81b5936044b3e6db3f37 namespace=k8s.io Mar 17 17:46:57.298399 containerd[1511]: time="2025-03-17T17:46:57.298399693Z" level=warning msg="cleaning up after shim disconnected" id=675e874c34f509a72e3bc05a5cd8ce034e3f2c6d115e81b5936044b3e6db3f37 namespace=k8s.io Mar 17 17:46:57.298631 containerd[1511]: time="2025-03-17T17:46:57.298411446Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:46:58.139462 kubelet[2656]: E0317 17:46:58.139416 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:58.141969 containerd[1511]: time="2025-03-17T17:46:58.141923022Z" level=info msg="CreateContainer within sandbox \"a1fbed3f59b9f2b1b799fbc5ed7bcc4b0058b9a1a9a894a72c8041570d7493ad\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 17:46:58.599116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2060969700.mount: Deactivated successfully. Mar 17 17:46:58.837332 containerd[1511]: time="2025-03-17T17:46:58.837275387Z" level=info msg="CreateContainer within sandbox \"a1fbed3f59b9f2b1b799fbc5ed7bcc4b0058b9a1a9a894a72c8041570d7493ad\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c7a47ed572b56a56b2befcc2fc14f4be2f1cfd55329a182964aa09cd50560c8a\"" Mar 17 17:46:58.837921 containerd[1511]: time="2025-03-17T17:46:58.837880173Z" level=info msg="StartContainer for \"c7a47ed572b56a56b2befcc2fc14f4be2f1cfd55329a182964aa09cd50560c8a\"" Mar 17 17:46:58.865850 systemd[1]: Started cri-containerd-c7a47ed572b56a56b2befcc2fc14f4be2f1cfd55329a182964aa09cd50560c8a.scope - libcontainer container c7a47ed572b56a56b2befcc2fc14f4be2f1cfd55329a182964aa09cd50560c8a. Mar 17 17:46:58.890263 systemd[1]: cri-containerd-c7a47ed572b56a56b2befcc2fc14f4be2f1cfd55329a182964aa09cd50560c8a.scope: Deactivated successfully. Mar 17 17:46:59.006971 containerd[1511]: time="2025-03-17T17:46:59.006900296Z" level=info msg="StartContainer for \"c7a47ed572b56a56b2befcc2fc14f4be2f1cfd55329a182964aa09cd50560c8a\" returns successfully" Mar 17 17:46:59.026347 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7a47ed572b56a56b2befcc2fc14f4be2f1cfd55329a182964aa09cd50560c8a-rootfs.mount: Deactivated successfully. 
Mar 17 17:46:59.142926 kubelet[2656]: E0317 17:46:59.142799 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:46:59.195614 containerd[1511]: time="2025-03-17T17:46:59.195540211Z" level=info msg="shim disconnected" id=c7a47ed572b56a56b2befcc2fc14f4be2f1cfd55329a182964aa09cd50560c8a namespace=k8s.io Mar 17 17:46:59.195614 containerd[1511]: time="2025-03-17T17:46:59.195608883Z" level=warning msg="cleaning up after shim disconnected" id=c7a47ed572b56a56b2befcc2fc14f4be2f1cfd55329a182964aa09cd50560c8a namespace=k8s.io Mar 17 17:46:59.195614 containerd[1511]: time="2025-03-17T17:46:59.195617931Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:47:00.148196 kubelet[2656]: E0317 17:47:00.148143 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:47:00.151519 containerd[1511]: time="2025-03-17T17:47:00.150289247Z" level=info msg="CreateContainer within sandbox \"a1fbed3f59b9f2b1b799fbc5ed7bcc4b0058b9a1a9a894a72c8041570d7493ad\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 17:47:00.180963 containerd[1511]: time="2025-03-17T17:47:00.180897005Z" level=info msg="CreateContainer within sandbox \"a1fbed3f59b9f2b1b799fbc5ed7bcc4b0058b9a1a9a894a72c8041570d7493ad\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5a4f577cccf55f9b72d33176353ac0f2c2eb75dd54278d6cd8e407a99108270a\"" Mar 17 17:47:00.181568 containerd[1511]: time="2025-03-17T17:47:00.181481759Z" level=info msg="StartContainer for \"5a4f577cccf55f9b72d33176353ac0f2c2eb75dd54278d6cd8e407a99108270a\"" Mar 17 17:47:00.211959 systemd[1]: Started cri-containerd-5a4f577cccf55f9b72d33176353ac0f2c2eb75dd54278d6cd8e407a99108270a.scope - libcontainer container 5a4f577cccf55f9b72d33176353ac0f2c2eb75dd54278d6cd8e407a99108270a. Mar 17 17:47:00.247160 containerd[1511]: time="2025-03-17T17:47:00.247089859Z" level=info msg="StartContainer for \"5a4f577cccf55f9b72d33176353ac0f2c2eb75dd54278d6cd8e407a99108270a\" returns successfully" Mar 17 17:47:00.369961 kubelet[2656]: I0317 17:47:00.369873 2656 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Mar 17 17:47:00.428842 systemd[1]: Created slice kubepods-burstable-pod7b83d474_b867_4cbc_b698_ae806a70b1ae.slice - libcontainer container kubepods-burstable-pod7b83d474_b867_4cbc_b698_ae806a70b1ae.slice. Mar 17 17:47:00.437956 systemd[1]: Created slice kubepods-burstable-pod108c4b8e_bad4_474e_89a6_5c427847f297.slice - libcontainer container kubepods-burstable-pod108c4b8e_bad4_474e_89a6_5c427847f297.slice. 
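The five-character tails on the kube-api-access-* volume names in this log (gwcld, h5wbk, nq5fx earlier, and 6w6sl, hcw4l just below) all draw from Kubernetes' generated-name alphabet, which omits vowels and look-alike digits so suffixes never spell words or get misread. A sketch of such a generator; the alphabet is consistent with every suffix observed here, though treating it as the exact upstream constant is an assumption:

    package main

    import (
        "fmt"
        "math/rand"
    )

    // Alphabet consistent with every generated suffix in this log:
    // no vowels, no 0/1/3 look-alikes.
    const alphanums = "bcdfghjklmnpqrstvwxz2456789"

    // randSuffix returns an n-character suffix drawn from the alphabet.
    func randSuffix(n int) string {
        b := make([]byte, n)
        for i := range b {
            b[i] = alphanums[rand.Intn(len(alphanums))]
        }
        return string(b)
    }

    func main() {
        fmt.Println("kube-api-access-" + randSuffix(5))
    }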
Mar 17 17:47:00.545937 kubelet[2656]: I0317 17:47:00.545876 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w6sl\" (UniqueName: \"kubernetes.io/projected/108c4b8e-bad4-474e-89a6-5c427847f297-kube-api-access-6w6sl\") pod \"coredns-668d6bf9bc-hc4gd\" (UID: \"108c4b8e-bad4-474e-89a6-5c427847f297\") " pod="kube-system/coredns-668d6bf9bc-hc4gd" Mar 17 17:47:00.545937 kubelet[2656]: I0317 17:47:00.545947 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcw4l\" (UniqueName: \"kubernetes.io/projected/7b83d474-b867-4cbc-b698-ae806a70b1ae-kube-api-access-hcw4l\") pod \"coredns-668d6bf9bc-kfz4g\" (UID: \"7b83d474-b867-4cbc-b698-ae806a70b1ae\") " pod="kube-system/coredns-668d6bf9bc-kfz4g" Mar 17 17:47:00.546162 kubelet[2656]: I0317 17:47:00.545975 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b83d474-b867-4cbc-b698-ae806a70b1ae-config-volume\") pod \"coredns-668d6bf9bc-kfz4g\" (UID: \"7b83d474-b867-4cbc-b698-ae806a70b1ae\") " pod="kube-system/coredns-668d6bf9bc-kfz4g" Mar 17 17:47:00.546162 kubelet[2656]: I0317 17:47:00.546009 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/108c4b8e-bad4-474e-89a6-5c427847f297-config-volume\") pod \"coredns-668d6bf9bc-hc4gd\" (UID: \"108c4b8e-bad4-474e-89a6-5c427847f297\") " pod="kube-system/coredns-668d6bf9bc-hc4gd" Mar 17 17:47:00.698839 systemd[1]: Started sshd@8-10.0.0.15:22-10.0.0.1:56432.service - OpenSSH per-connection server daemon (10.0.0.1:56432). Mar 17 17:47:00.734053 kubelet[2656]: E0317 17:47:00.734020 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:47:00.735141 containerd[1511]: time="2025-03-17T17:47:00.735082720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kfz4g,Uid:7b83d474-b867-4cbc-b698-ae806a70b1ae,Namespace:kube-system,Attempt:0,}" Mar 17 17:47:00.737980 sshd[3468]: Accepted publickey for core from 10.0.0.1 port 56432 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM Mar 17 17:47:00.739978 sshd-session[3468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:47:00.742716 kubelet[2656]: E0317 17:47:00.742352 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:47:00.743893 containerd[1511]: time="2025-03-17T17:47:00.743861449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hc4gd,Uid:108c4b8e-bad4-474e-89a6-5c427847f297,Namespace:kube-system,Attempt:0,}" Mar 17 17:47:00.746811 systemd-logind[1491]: New session 9 of user core. Mar 17 17:47:00.754859 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 17 17:47:00.992319 sshd[3471]: Connection closed by 10.0.0.1 port 56432 Mar 17 17:47:00.993128 sshd-session[3468]: pam_unix(sshd:session): session closed for user core Mar 17 17:47:00.998927 systemd[1]: sshd@8-10.0.0.15:22-10.0.0.1:56432.service: Deactivated successfully. Mar 17 17:47:01.001999 systemd[1]: session-9.scope: Deactivated successfully. 
Mar 17 17:47:01.003057 systemd-logind[1491]: Session 9 logged out. Waiting for processes to exit. Mar 17 17:47:01.004646 systemd-logind[1491]: Removed session 9. Mar 17 17:47:01.153123 kubelet[2656]: E0317 17:47:01.153083 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:47:02.294510 kubelet[2656]: E0317 17:47:02.294461 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:47:02.836890 systemd-networkd[1422]: cilium_host: Link UP Mar 17 17:47:02.837077 systemd-networkd[1422]: cilium_net: Link UP Mar 17 17:47:02.837082 systemd-networkd[1422]: cilium_net: Gained carrier Mar 17 17:47:02.837322 systemd-networkd[1422]: cilium_host: Gained carrier Mar 17 17:47:02.958952 systemd-networkd[1422]: cilium_vxlan: Link UP Mar 17 17:47:02.958966 systemd-networkd[1422]: cilium_vxlan: Gained carrier Mar 17 17:47:03.223748 kernel: NET: Registered PF_ALG protocol family Mar 17 17:47:03.295748 kubelet[2656]: E0317 17:47:03.295682 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:47:03.378890 systemd-networkd[1422]: cilium_net: Gained IPv6LL Mar 17 17:47:03.634875 systemd-networkd[1422]: cilium_host: Gained IPv6LL Mar 17 17:47:04.062122 systemd-networkd[1422]: lxc_health: Link UP Mar 17 17:47:04.063533 systemd-networkd[1422]: lxc_health: Gained carrier Mar 17 17:47:04.600746 kernel: eth0: renamed from tmp1791f Mar 17 17:47:04.608933 systemd-networkd[1422]: lxcb6a96c79176a: Link UP Mar 17 17:47:04.609243 systemd-networkd[1422]: lxcb6a96c79176a: Gained carrier Mar 17 17:47:04.626037 kernel: eth0: renamed from tmp3948d Mar 17 17:47:04.632656 systemd-networkd[1422]: lxc55844c545692: Link UP Mar 17 17:47:04.633542 systemd-networkd[1422]: lxc55844c545692: Gained carrier Mar 17 17:47:04.915557 systemd-networkd[1422]: cilium_vxlan: Gained IPv6LL Mar 17 17:47:05.362936 systemd-networkd[1422]: lxc_health: Gained IPv6LL Mar 17 17:47:05.748306 kubelet[2656]: E0317 17:47:05.747901 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:47:05.922743 kubelet[2656]: I0317 17:47:05.921800 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-d7bwh" podStartSLOduration=14.764006313 podStartE2EDuration="33.921769488s" podCreationTimestamp="2025-03-17 17:46:32 +0000 UTC" firstStartedPulling="2025-03-17 17:46:34.230452613 +0000 UTC m=+7.035118637" lastFinishedPulling="2025-03-17 17:46:53.388215799 +0000 UTC m=+26.192881812" observedRunningTime="2025-03-17 17:47:01.176246756 +0000 UTC m=+33.980912789" watchObservedRunningTime="2025-03-17 17:47:05.921769488 +0000 UTC m=+38.726435501" Mar 17 17:47:06.009428 systemd[1]: Started sshd@9-10.0.0.15:22-10.0.0.1:58316.service - OpenSSH per-connection server daemon (10.0.0.1:58316). 
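The kernel rename messages above line up with the pod sandbox IDs created further down: "tmp1791f" and "tmp3948d" are "tmp" plus the first five hex characters of sandbox IDs 1791ffd3... and 3948dcb2... (the coredns sandboxes below), i.e. the temporary container-side veth name before it is renamed to eth0. A sketch of that apparent convention, inferred from this log rather than from CNI source:

    package main

    import "fmt"

    // tmpIfName reproduces the temporary interface names seen in the
    // kernel rename messages: "tmp" + first 5 hex chars of the sandbox ID.
    func tmpIfName(sandboxID string) string {
        return "tmp" + sandboxID[:5]
    }

    func main() {
        fmt.Println(tmpIfName("1791ffd3df957d0dd61c69e2b5719b2cef9e7cd79ea6dab9eb761b54092f05c4")) // tmp1791f
        fmt.Println(tmpIfName("3948dcb26b081784379cac68107970f3831fa6ce2c584050ef0aa37e6276a24e")) // tmp3948d
    }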
Mar 17 17:47:06.056737 sshd[3923]: Accepted publickey for core from 10.0.0.1 port 58316 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM Mar 17 17:47:06.057741 sshd-session[3923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:47:06.067821 systemd-networkd[1422]: lxcb6a96c79176a: Gained IPv6LL Mar 17 17:47:06.088475 systemd-logind[1491]: New session 10 of user core. Mar 17 17:47:06.103012 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 17 17:47:06.301905 kubelet[2656]: E0317 17:47:06.301770 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:47:06.578836 systemd-networkd[1422]: lxc55844c545692: Gained IPv6LL Mar 17 17:47:06.855152 sshd[3925]: Connection closed by 10.0.0.1 port 58316 Mar 17 17:47:06.856985 sshd-session[3923]: pam_unix(sshd:session): session closed for user core Mar 17 17:47:06.860985 systemd[1]: sshd@9-10.0.0.15:22-10.0.0.1:58316.service: Deactivated successfully. Mar 17 17:47:06.863451 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 17:47:06.864202 systemd-logind[1491]: Session 10 logged out. Waiting for processes to exit. Mar 17 17:47:06.865305 systemd-logind[1491]: Removed session 10. Mar 17 17:47:07.320308 kubelet[2656]: E0317 17:47:07.320038 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:47:08.499497 containerd[1511]: time="2025-03-17T17:47:08.499353586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:47:08.499497 containerd[1511]: time="2025-03-17T17:47:08.499438230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:47:08.499497 containerd[1511]: time="2025-03-17T17:47:08.499455353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:47:08.500048 containerd[1511]: time="2025-03-17T17:47:08.499570556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:47:08.528866 systemd[1]: Started cri-containerd-3948dcb26b081784379cac68107970f3831fa6ce2c584050ef0aa37e6276a24e.scope - libcontainer container 3948dcb26b081784379cac68107970f3831fa6ce2c584050ef0aa37e6276a24e. Mar 17 17:47:08.543161 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:47:08.562994 containerd[1511]: time="2025-03-17T17:47:08.562584958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:47:08.563580 containerd[1511]: time="2025-03-17T17:47:08.563513774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:47:08.563580 containerd[1511]: time="2025-03-17T17:47:08.563543342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:47:08.564083 containerd[1511]: time="2025-03-17T17:47:08.563622696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:47:08.572753 containerd[1511]: time="2025-03-17T17:47:08.572620154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kfz4g,Uid:7b83d474-b867-4cbc-b698-ae806a70b1ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"3948dcb26b081784379cac68107970f3831fa6ce2c584050ef0aa37e6276a24e\"" Mar 17 17:47:08.573533 kubelet[2656]: E0317 17:47:08.573512 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:47:08.577667 containerd[1511]: time="2025-03-17T17:47:08.577567326Z" level=info msg="CreateContainer within sandbox \"3948dcb26b081784379cac68107970f3831fa6ce2c584050ef0aa37e6276a24e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:47:08.592877 systemd[1]: Started cri-containerd-1791ffd3df957d0dd61c69e2b5719b2cef9e7cd79ea6dab9eb761b54092f05c4.scope - libcontainer container 1791ffd3df957d0dd61c69e2b5719b2cef9e7cd79ea6dab9eb761b54092f05c4. Mar 17 17:47:08.608114 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:47:08.633606 containerd[1511]: time="2025-03-17T17:47:08.633564501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hc4gd,Uid:108c4b8e-bad4-474e-89a6-5c427847f297,Namespace:kube-system,Attempt:0,} returns sandbox id \"1791ffd3df957d0dd61c69e2b5719b2cef9e7cd79ea6dab9eb761b54092f05c4\"" Mar 17 17:47:08.634417 kubelet[2656]: E0317 17:47:08.634383 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:47:08.636284 containerd[1511]: time="2025-03-17T17:47:08.636242442Z" level=info msg="CreateContainer within sandbox \"1791ffd3df957d0dd61c69e2b5719b2cef9e7cd79ea6dab9eb761b54092f05c4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:47:09.261676 containerd[1511]: time="2025-03-17T17:47:09.261596280Z" level=info msg="CreateContainer within sandbox \"3948dcb26b081784379cac68107970f3831fa6ce2c584050ef0aa37e6276a24e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f287c5a39314de3f66d05f7a5454cd43f1eae5da5b411d64ca11f4ea14c94af7\"" Mar 17 17:47:09.262286 containerd[1511]: time="2025-03-17T17:47:09.262237750Z" level=info msg="StartContainer for \"f287c5a39314de3f66d05f7a5454cd43f1eae5da5b411d64ca11f4ea14c94af7\"" Mar 17 17:47:09.291970 systemd[1]: Started cri-containerd-f287c5a39314de3f66d05f7a5454cd43f1eae5da5b411d64ca11f4ea14c94af7.scope - libcontainer container f287c5a39314de3f66d05f7a5454cd43f1eae5da5b411d64ca11f4ea14c94af7. Mar 17 17:47:09.332902 containerd[1511]: time="2025-03-17T17:47:09.332840764Z" level=info msg="CreateContainer within sandbox \"1791ffd3df957d0dd61c69e2b5719b2cef9e7cd79ea6dab9eb761b54092f05c4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"889743bafc59a17ceeb55a90bcb8c437dc8378e1db43a1b7c2390931b9d4487c\"" Mar 17 17:47:09.333251 containerd[1511]: time="2025-03-17T17:47:09.333218145Z" level=info msg="StartContainer for \"889743bafc59a17ceeb55a90bcb8c437dc8378e1db43a1b7c2390931b9d4487c\"" Mar 17 17:47:09.364937 systemd[1]: Started cri-containerd-889743bafc59a17ceeb55a90bcb8c437dc8378e1db43a1b7c2390931b9d4487c.scope - libcontainer container 889743bafc59a17ceeb55a90bcb8c437dc8378e1db43a1b7c2390931b9d4487c. 
Mar 17 17:47:09.529917 containerd[1511]: time="2025-03-17T17:47:09.529674320Z" level=info msg="StartContainer for \"889743bafc59a17ceeb55a90bcb8c437dc8378e1db43a1b7c2390931b9d4487c\" returns successfully"
Mar 17 17:47:09.529917 containerd[1511]: time="2025-03-17T17:47:09.529674260Z" level=info msg="StartContainer for \"f287c5a39314de3f66d05f7a5454cd43f1eae5da5b411d64ca11f4ea14c94af7\" returns successfully"
Mar 17 17:47:10.330080 kubelet[2656]: E0317 17:47:10.329975 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:47:10.333171 kubelet[2656]: E0317 17:47:10.333120 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:47:11.000485 kubelet[2656]: I0317 17:47:11.000400 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hc4gd" podStartSLOduration=39.00037986 podStartE2EDuration="39.00037986s" podCreationTimestamp="2025-03-17 17:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:47:10.999635551 +0000 UTC m=+43.804301565" watchObservedRunningTime="2025-03-17 17:47:11.00037986 +0000 UTC m=+43.805045873"
Mar 17 17:47:11.334635 kubelet[2656]: E0317 17:47:11.334542 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:47:11.334635 kubelet[2656]: E0317 17:47:11.334616 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:47:11.869605 systemd[1]: Started sshd@10-10.0.0.15:22-10.0.0.1:58328.service - OpenSSH per-connection server daemon (10.0.0.1:58328).
Mar 17 17:47:11.917128 sshd[4110]: Accepted publickey for core from 10.0.0.1 port 58328 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM
Mar 17 17:47:11.919217 sshd-session[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:47:11.924225 systemd-logind[1491]: New session 11 of user core.
Mar 17 17:47:11.934878 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 17 17:47:11.956345 kubelet[2656]: I0317 17:47:11.956266 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kfz4g" podStartSLOduration=39.956242931 podStartE2EDuration="39.956242931s" podCreationTimestamp="2025-03-17 17:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:47:11.668658495 +0000 UTC m=+44.473324508" watchObservedRunningTime="2025-03-17 17:47:11.956242931 +0000 UTC m=+44.760908944"
Mar 17 17:47:12.245148 sshd[4114]: Connection closed by 10.0.0.1 port 58328
Mar 17 17:47:12.245435 sshd-session[4110]: pam_unix(sshd:session): session closed for user core
Mar 17 17:47:12.250296 systemd[1]: sshd@10-10.0.0.15:22-10.0.0.1:58328.service: Deactivated successfully.
Mar 17 17:47:12.252539 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 17:47:12.253477 systemd-logind[1491]: Session 11 logged out. Waiting for processes to exit.
Mar 17 17:47:12.254619 systemd-logind[1491]: Removed session 11.
Mar 17 17:47:12.336410 kubelet[2656]: E0317 17:47:12.336361 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:47:12.337005 kubelet[2656]: E0317 17:47:12.336361 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:47:17.259824 systemd[1]: Started sshd@11-10.0.0.15:22-10.0.0.1:51442.service - OpenSSH per-connection server daemon (10.0.0.1:51442).
Mar 17 17:47:17.304570 sshd[4132]: Accepted publickey for core from 10.0.0.1 port 51442 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM
Mar 17 17:47:17.306468 sshd-session[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:47:17.311528 systemd-logind[1491]: New session 12 of user core.
Mar 17 17:47:17.320943 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 17 17:47:17.431415 sshd[4134]: Connection closed by 10.0.0.1 port 51442
Mar 17 17:47:17.431883 sshd-session[4132]: pam_unix(sshd:session): session closed for user core
Mar 17 17:47:17.435265 systemd[1]: sshd@11-10.0.0.15:22-10.0.0.1:51442.service: Deactivated successfully.
Mar 17 17:47:17.437576 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 17:47:17.439375 systemd-logind[1491]: Session 12 logged out. Waiting for processes to exit.
Mar 17 17:47:17.440802 systemd-logind[1491]: Removed session 12.
Mar 17 17:47:22.444590 systemd[1]: Started sshd@12-10.0.0.15:22-10.0.0.1:51446.service - OpenSSH per-connection server daemon (10.0.0.1:51446).
Mar 17 17:47:22.486793 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 51446 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM
Mar 17 17:47:22.489076 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:47:22.493993 systemd-logind[1491]: New session 13 of user core.
Mar 17 17:47:22.504669 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 17 17:47:22.835769 sshd[4153]: Connection closed by 10.0.0.1 port 51446
Mar 17 17:47:22.836296 sshd-session[4151]: pam_unix(sshd:session): session closed for user core
Mar 17 17:47:22.857389 systemd[1]: sshd@12-10.0.0.15:22-10.0.0.1:51446.service: Deactivated successfully.
Mar 17 17:47:22.860159 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 17:47:22.862403 systemd-logind[1491]: Session 13 logged out. Waiting for processes to exit.
Mar 17 17:47:22.877387 systemd[1]: Started sshd@13-10.0.0.15:22-10.0.0.1:51450.service - OpenSSH per-connection server daemon (10.0.0.1:51450).
Mar 17 17:47:22.879014 systemd-logind[1491]: Removed session 13.
Mar 17 17:47:22.925435 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 51450 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM
Mar 17 17:47:22.927584 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:47:22.933580 systemd-logind[1491]: New session 14 of user core.
Mar 17 17:47:22.945075 systemd[1]: Started session-14.scope - Session 14 of User core.
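
Each SSH login above follows the same lifecycle: sshd accepts the key, pam_unix opens the session, systemd-logind registers it ("New session N"), systemd runs it as a session-N.scope unit, and teardown reverses the sequence (scope deactivated, "logged out", "Removed session N"). A small sketch that pairs the open and close records to measure session lifetimes; the regex assumes the journal prefix format used throughout this capture:

    import re
    from datetime import datetime

    # Sketch: pair "New session N" / "Removed session N" journal records and
    # report how long each SSH session lasted. Assumes the "Mar 17 HH:MM:SS.ffffff"
    # prefix seen in this log; adjust the year as needed.
    STAMP = re.compile(r"^(\w{3} \d{1,2} [\d:.]+) systemd-logind\[\d+\]: (New|Removed) session (\d+)")

    def session_durations(lines, year=2025):
        opened, durations = {}, {}
        for line in lines:
            m = STAMP.match(line)
            if not m:
                continue
            ts = datetime.strptime(f"{year} {m.group(1)}", "%Y %b %d %H:%M:%S.%f")
            sid = m.group(3)
            if m.group(2) == "New":
                opened[sid] = ts
            elif sid in opened:
                durations[sid] = ts - opened.pop(sid)
        return durations

    log = [
        "Mar 17 17:47:11.924225 systemd-logind[1491]: New session 11 of user core.",
        "Mar 17 17:47:12.254619 systemd-logind[1491]: Removed session 11.",
    ]
    print(session_durations(log))  # {'11': datetime.timedelta(microseconds=330394)}
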
Mar 17 17:47:23.388042 sshd[4169]: Connection closed by 10.0.0.1 port 51450
Mar 17 17:47:23.389335 sshd-session[4166]: pam_unix(sshd:session): session closed for user core
Mar 17 17:47:23.415994 systemd[1]: sshd@13-10.0.0.15:22-10.0.0.1:51450.service: Deactivated successfully.
Mar 17 17:47:23.429148 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 17:47:23.437536 systemd-logind[1491]: Session 14 logged out. Waiting for processes to exit.
Mar 17 17:47:23.458449 systemd[1]: Started sshd@14-10.0.0.15:22-10.0.0.1:51454.service - OpenSSH per-connection server daemon (10.0.0.1:51454).
Mar 17 17:47:23.460535 systemd-logind[1491]: Removed session 14.
Mar 17 17:47:23.544490 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 51454 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM
Mar 17 17:47:23.547660 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:47:23.562899 systemd-logind[1491]: New session 15 of user core.
Mar 17 17:47:23.569055 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 17 17:47:23.842645 sshd[4183]: Connection closed by 10.0.0.1 port 51454
Mar 17 17:47:23.844914 sshd-session[4180]: pam_unix(sshd:session): session closed for user core
Mar 17 17:47:23.852970 systemd[1]: sshd@14-10.0.0.15:22-10.0.0.1:51454.service: Deactivated successfully.
Mar 17 17:47:23.864474 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 17:47:23.872110 systemd-logind[1491]: Session 15 logged out. Waiting for processes to exit.
Mar 17 17:47:23.880836 systemd-logind[1491]: Removed session 15.
Mar 17 17:47:28.856501 systemd[1]: Started sshd@15-10.0.0.15:22-10.0.0.1:40658.service - OpenSSH per-connection server daemon (10.0.0.1:40658).
Mar 17 17:47:28.900193 sshd[4198]: Accepted publickey for core from 10.0.0.1 port 40658 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM
Mar 17 17:47:28.902163 sshd-session[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:47:28.907339 systemd-logind[1491]: New session 16 of user core.
Mar 17 17:47:28.916930 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 17 17:47:29.142132 sshd[4200]: Connection closed by 10.0.0.1 port 40658
Mar 17 17:47:29.142441 sshd-session[4198]: pam_unix(sshd:session): session closed for user core
Mar 17 17:47:29.147906 systemd[1]: sshd@15-10.0.0.15:22-10.0.0.1:40658.service: Deactivated successfully.
Mar 17 17:47:29.150844 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 17:47:29.151715 systemd-logind[1491]: Session 16 logged out. Waiting for processes to exit.
Mar 17 17:47:29.152871 systemd-logind[1491]: Removed session 16.
Mar 17 17:47:34.158973 systemd[1]: Started sshd@16-10.0.0.15:22-10.0.0.1:40668.service - OpenSSH per-connection server daemon (10.0.0.1:40668).
Mar 17 17:47:34.199373 sshd[4215]: Accepted publickey for core from 10.0.0.1 port 40668 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM
Mar 17 17:47:34.201208 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:47:34.206601 systemd-logind[1491]: New session 17 of user core.
Mar 17 17:47:34.216959 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 17 17:47:34.351434 sshd[4217]: Connection closed by 10.0.0.1 port 40668
Mar 17 17:47:34.351835 sshd-session[4215]: pam_unix(sshd:session): session closed for user core
Mar 17 17:47:34.356136 systemd[1]: sshd@16-10.0.0.15:22-10.0.0.1:40668.service: Deactivated successfully.
Mar 17 17:47:34.358162 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 17:47:34.359030 systemd-logind[1491]: Session 17 logged out. Waiting for processes to exit.
Mar 17 17:47:34.359966 systemd-logind[1491]: Removed session 17.
Mar 17 17:47:39.369585 systemd[1]: Started sshd@17-10.0.0.15:22-10.0.0.1:37320.service - OpenSSH per-connection server daemon (10.0.0.1:37320).
Mar 17 17:47:39.408886 sshd[4230]: Accepted publickey for core from 10.0.0.1 port 37320 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM
Mar 17 17:47:39.410584 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:47:39.415780 systemd-logind[1491]: New session 18 of user core.
Mar 17 17:47:39.422853 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 17 17:47:39.538918 sshd[4232]: Connection closed by 10.0.0.1 port 37320
Mar 17 17:47:39.539284 sshd-session[4230]: pam_unix(sshd:session): session closed for user core
Mar 17 17:47:39.543609 systemd[1]: sshd@17-10.0.0.15:22-10.0.0.1:37320.service: Deactivated successfully.
Mar 17 17:47:39.545937 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 17:47:39.547041 systemd-logind[1491]: Session 18 logged out. Waiting for processes to exit.
Mar 17 17:47:39.548168 systemd-logind[1491]: Removed session 18.
Mar 17 17:47:44.555182 systemd[1]: Started sshd@18-10.0.0.15:22-10.0.0.1:37322.service - OpenSSH per-connection server daemon (10.0.0.1:37322).
Mar 17 17:47:44.598123 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 37322 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM
Mar 17 17:47:44.600018 sshd-session[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:47:44.605082 systemd-logind[1491]: New session 19 of user core.
Mar 17 17:47:44.618843 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 17 17:47:44.769135 sshd[4248]: Connection closed by 10.0.0.1 port 37322
Mar 17 17:47:44.769512 sshd-session[4246]: pam_unix(sshd:session): session closed for user core
Mar 17 17:47:44.774192 systemd[1]: sshd@18-10.0.0.15:22-10.0.0.1:37322.service: Deactivated successfully.
Mar 17 17:47:44.776475 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 17:47:44.777306 systemd-logind[1491]: Session 19 logged out. Waiting for processes to exit.
Mar 17 17:47:44.778208 systemd-logind[1491]: Removed session 19.
Mar 17 17:47:45.278469 kubelet[2656]: E0317 17:47:45.278397 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:47:46.277583 kubelet[2656]: E0317 17:47:46.277515 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:47:46.277583 kubelet[2656]: E0317 17:47:46.277593 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:47:49.277678 kubelet[2656]: E0317 17:47:49.277548 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:47:49.783552 systemd[1]: Started sshd@19-10.0.0.15:22-10.0.0.1:55946.service - OpenSSH per-connection server daemon (10.0.0.1:55946).
Mar 17 17:47:49.827684 sshd[4261]: Accepted publickey for core from 10.0.0.1 port 55946 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM
Mar 17 17:47:49.829503 sshd-session[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:47:49.835725 systemd-logind[1491]: New session 20 of user core.
Mar 17 17:47:49.845036 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 17 17:47:49.974082 sshd[4263]: Connection closed by 10.0.0.1 port 55946
Mar 17 17:47:49.974551 sshd-session[4261]: pam_unix(sshd:session): session closed for user core
Mar 17 17:47:49.989107 systemd[1]: sshd@19-10.0.0.15:22-10.0.0.1:55946.service: Deactivated successfully.
Mar 17 17:47:49.991480 systemd[1]: session-20.scope: Deactivated successfully.
Mar 17 17:47:49.993236 systemd-logind[1491]: Session 20 logged out. Waiting for processes to exit.
Mar 17 17:47:49.999180 systemd[1]: Started sshd@20-10.0.0.15:22-10.0.0.1:55958.service - OpenSSH per-connection server daemon (10.0.0.1:55958).
Mar 17 17:47:50.000955 systemd-logind[1491]: Removed session 20.
Mar 17 17:47:50.034844 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 55958 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM
Mar 17 17:47:50.036594 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:47:50.041161 systemd-logind[1491]: New session 21 of user core.
Mar 17 17:47:50.052937 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 17 17:47:50.278167 kubelet[2656]: E0317 17:47:50.278123 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:47:51.046389 sshd[4278]: Connection closed by 10.0.0.1 port 55958
Mar 17 17:47:51.047073 sshd-session[4275]: pam_unix(sshd:session): session closed for user core
Mar 17 17:47:51.062684 systemd[1]: sshd@20-10.0.0.15:22-10.0.0.1:55958.service: Deactivated successfully.
Mar 17 17:47:51.064570 systemd[1]: session-21.scope: Deactivated successfully.
Mar 17 17:47:51.066060 systemd-logind[1491]: Session 21 logged out. Waiting for processes to exit.
Mar 17 17:47:51.067507 systemd[1]: Started sshd@21-10.0.0.15:22-10.0.0.1:55964.service - OpenSSH per-connection server daemon (10.0.0.1:55964).
Mar 17 17:47:51.068432 systemd-logind[1491]: Removed session 21.
Mar 17 17:47:51.132262 sshd[4289]: Accepted publickey for core from 10.0.0.1 port 55964 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM
Mar 17 17:47:51.134137 sshd-session[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:47:51.139180 systemd-logind[1491]: New session 22 of user core.
Mar 17 17:47:51.149877 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 17 17:47:53.001199 sshd[4292]: Connection closed by 10.0.0.1 port 55964
Mar 17 17:47:53.001750 sshd-session[4289]: pam_unix(sshd:session): session closed for user core
Mar 17 17:47:53.012935 systemd[1]: sshd@21-10.0.0.15:22-10.0.0.1:55964.service: Deactivated successfully.
Mar 17 17:47:53.015091 systemd[1]: session-22.scope: Deactivated successfully.
Mar 17 17:47:53.016892 systemd-logind[1491]: Session 22 logged out. Waiting for processes to exit.
Mar 17 17:47:53.022986 systemd[1]: Started sshd@22-10.0.0.15:22-10.0.0.1:55980.service - OpenSSH per-connection server daemon (10.0.0.1:55980).
Mar 17 17:47:53.024478 systemd-logind[1491]: Removed session 22.
Mar 17 17:47:53.059622 sshd[4331]: Accepted publickey for core from 10.0.0.1 port 55980 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM
Mar 17 17:47:53.061355 sshd-session[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:47:53.066061 systemd-logind[1491]: New session 23 of user core.
Mar 17 17:47:53.073873 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 17 17:47:54.026664 sshd[4334]: Connection closed by 10.0.0.1 port 55980
Mar 17 17:47:54.027163 sshd-session[4331]: pam_unix(sshd:session): session closed for user core
Mar 17 17:47:54.037666 systemd[1]: sshd@22-10.0.0.15:22-10.0.0.1:55980.service: Deactivated successfully.
Mar 17 17:47:54.039802 systemd[1]: session-23.scope: Deactivated successfully.
Mar 17 17:47:54.041519 systemd-logind[1491]: Session 23 logged out. Waiting for processes to exit.
Mar 17 17:47:54.050052 systemd[1]: Started sshd@23-10.0.0.15:22-10.0.0.1:55992.service - OpenSSH per-connection server daemon (10.0.0.1:55992).
Mar 17 17:47:54.051274 systemd-logind[1491]: Removed session 23.
Mar 17 17:47:54.085969 sshd[4345]: Accepted publickey for core from 10.0.0.1 port 55992 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM
Mar 17 17:47:54.087598 sshd-session[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:47:54.092785 systemd-logind[1491]: New session 24 of user core.
Mar 17 17:47:54.099842 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 17 17:47:54.221599 sshd[4348]: Connection closed by 10.0.0.1 port 55992
Mar 17 17:47:54.222092 sshd-session[4345]: pam_unix(sshd:session): session closed for user core
Mar 17 17:47:54.226960 systemd[1]: sshd@23-10.0.0.15:22-10.0.0.1:55992.service: Deactivated successfully.
Mar 17 17:47:54.229131 systemd[1]: session-24.scope: Deactivated successfully.
Mar 17 17:47:54.229980 systemd-logind[1491]: Session 24 logged out. Waiting for processes to exit.
Mar 17 17:47:54.231212 systemd-logind[1491]: Removed session 24.
Mar 17 17:47:59.239364 systemd[1]: Started sshd@24-10.0.0.15:22-10.0.0.1:60554.service - OpenSSH per-connection server daemon (10.0.0.1:60554).
Mar 17 17:47:59.285909 sshd[4361]: Accepted publickey for core from 10.0.0.1 port 60554 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM
Mar 17 17:47:59.286638 sshd-session[4361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:47:59.291964 systemd-logind[1491]: New session 25 of user core.
Mar 17 17:47:59.298884 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 17 17:47:59.423073 sshd[4363]: Connection closed by 10.0.0.1 port 60554
Mar 17 17:47:59.423529 sshd-session[4361]: pam_unix(sshd:session): session closed for user core
Mar 17 17:47:59.428665 systemd[1]: sshd@24-10.0.0.15:22-10.0.0.1:60554.service: Deactivated successfully.
Mar 17 17:47:59.431777 systemd[1]: session-25.scope: Deactivated successfully.
Mar 17 17:47:59.432838 systemd-logind[1491]: Session 25 logged out. Waiting for processes to exit.
Mar 17 17:47:59.434039 systemd-logind[1491]: Removed session 25.
Mar 17 17:48:04.437624 systemd[1]: Started sshd@25-10.0.0.15:22-10.0.0.1:60566.service - OpenSSH per-connection server daemon (10.0.0.1:60566).
Mar 17 17:48:04.481555 sshd[4378]: Accepted publickey for core from 10.0.0.1 port 60566 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM
Mar 17 17:48:04.483286 sshd-session[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:48:04.489384 systemd-logind[1491]: New session 26 of user core.
Mar 17 17:48:04.501065 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 17 17:48:04.619932 sshd[4380]: Connection closed by 10.0.0.1 port 60566
Mar 17 17:48:04.620328 sshd-session[4378]: pam_unix(sshd:session): session closed for user core
Mar 17 17:48:04.624135 systemd[1]: sshd@25-10.0.0.15:22-10.0.0.1:60566.service: Deactivated successfully.
Mar 17 17:48:04.626297 systemd[1]: session-26.scope: Deactivated successfully.
Mar 17 17:48:04.627094 systemd-logind[1491]: Session 26 logged out. Waiting for processes to exit.
Mar 17 17:48:04.628011 systemd-logind[1491]: Removed session 26.
Mar 17 17:48:09.637103 systemd[1]: Started sshd@26-10.0.0.15:22-10.0.0.1:51036.service - OpenSSH per-connection server daemon (10.0.0.1:51036).
Mar 17 17:48:09.679442 sshd[4393]: Accepted publickey for core from 10.0.0.1 port 51036 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM
Mar 17 17:48:09.681112 sshd-session[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:48:09.685835 systemd-logind[1491]: New session 27 of user core.
Mar 17 17:48:09.702860 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 17 17:48:09.845568 sshd[4395]: Connection closed by 10.0.0.1 port 51036
Mar 17 17:48:09.846110 sshd-session[4393]: pam_unix(sshd:session): session closed for user core
Mar 17 17:48:09.851081 systemd[1]: sshd@26-10.0.0.15:22-10.0.0.1:51036.service: Deactivated successfully.
Mar 17 17:48:09.853354 systemd[1]: session-27.scope: Deactivated successfully.
Mar 17 17:48:09.854454 systemd-logind[1491]: Session 27 logged out. Waiting for processes to exit.
Mar 17 17:48:09.855755 systemd-logind[1491]: Removed session 27.
Mar 17 17:48:14.859437 systemd[1]: Started sshd@27-10.0.0.15:22-10.0.0.1:51044.service - OpenSSH per-connection server daemon (10.0.0.1:51044).
Mar 17 17:48:14.902435 sshd[4411]: Accepted publickey for core from 10.0.0.1 port 51044 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM
Mar 17 17:48:14.904494 sshd-session[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:48:14.910154 systemd-logind[1491]: New session 28 of user core.
Mar 17 17:48:14.922015 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 17 17:48:15.064488 sshd[4414]: Connection closed by 10.0.0.1 port 51044
Mar 17 17:48:15.065183 sshd-session[4411]: pam_unix(sshd:session): session closed for user core
Mar 17 17:48:15.070673 systemd[1]: sshd@27-10.0.0.15:22-10.0.0.1:51044.service: Deactivated successfully.
Mar 17 17:48:15.073718 systemd[1]: session-28.scope: Deactivated successfully.
Mar 17 17:48:15.075143 systemd-logind[1491]: Session 28 logged out. Waiting for processes to exit.
Mar 17 17:48:15.076621 systemd-logind[1491]: Removed session 28.
Mar 17 17:48:20.097042 systemd[1]: Started sshd@28-10.0.0.15:22-10.0.0.1:54758.service - OpenSSH per-connection server daemon (10.0.0.1:54758).
Mar 17 17:48:20.135077 sshd[4428]: Accepted publickey for core from 10.0.0.1 port 54758 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM
Mar 17 17:48:20.136977 sshd-session[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:48:20.142414 systemd-logind[1491]: New session 29 of user core.
Mar 17 17:48:20.152865 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 17 17:48:20.278543 kubelet[2656]: E0317 17:48:20.278487 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:48:20.352791 sshd[4430]: Connection closed by 10.0.0.1 port 54758
Mar 17 17:48:20.353138 sshd-session[4428]: pam_unix(sshd:session): session closed for user core
Mar 17 17:48:20.357549 systemd[1]: sshd@28-10.0.0.15:22-10.0.0.1:54758.service: Deactivated successfully.
Mar 17 17:48:20.359841 systemd[1]: session-29.scope: Deactivated successfully.
Mar 17 17:48:20.360511 systemd-logind[1491]: Session 29 logged out. Waiting for processes to exit.
Mar 17 17:48:20.361593 systemd-logind[1491]: Removed session 29.
Mar 17 17:48:25.370029 systemd[1]: Started sshd@29-10.0.0.15:22-10.0.0.1:54764.service - OpenSSH per-connection server daemon (10.0.0.1:54764).
Mar 17 17:48:25.410758 sshd[4444]: Accepted publickey for core from 10.0.0.1 port 54764 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM
Mar 17 17:48:25.412568 sshd-session[4444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:48:25.417480 systemd-logind[1491]: New session 30 of user core.
Mar 17 17:48:25.428824 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 17 17:48:25.558261 sshd[4446]: Connection closed by 10.0.0.1 port 54764
Mar 17 17:48:25.558651 sshd-session[4444]: pam_unix(sshd:session): session closed for user core
Mar 17 17:48:25.563484 systemd[1]: sshd@29-10.0.0.15:22-10.0.0.1:54764.service: Deactivated successfully.
Mar 17 17:48:25.566082 systemd[1]: session-30.scope: Deactivated successfully.
Mar 17 17:48:25.567073 systemd-logind[1491]: Session 30 logged out. Waiting for processes to exit.
Mar 17 17:48:25.568326 systemd-logind[1491]: Removed session 30.
Mar 17 17:48:30.573928 systemd[1]: Started sshd@30-10.0.0.15:22-10.0.0.1:47494.service - OpenSSH per-connection server daemon (10.0.0.1:47494).
Mar 17 17:48:30.623064 sshd[4462]: Accepted publickey for core from 10.0.0.1 port 47494 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM
Mar 17 17:48:30.624751 sshd-session[4462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:48:30.629182 systemd-logind[1491]: New session 31 of user core.
Mar 17 17:48:30.647829 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 17 17:48:30.951279 sshd[4464]: Connection closed by 10.0.0.1 port 47494
Mar 17 17:48:30.951689 sshd-session[4462]: pam_unix(sshd:session): session closed for user core
Mar 17 17:48:30.964346 systemd[1]: sshd@30-10.0.0.15:22-10.0.0.1:47494.service: Deactivated successfully.
Mar 17 17:48:30.966682 systemd[1]: session-31.scope: Deactivated successfully.
Mar 17 17:48:30.968261 systemd-logind[1491]: Session 31 logged out. Waiting for processes to exit.
Mar 17 17:48:30.970008 systemd[1]: Started sshd@31-10.0.0.15:22-10.0.0.1:47504.service - OpenSSH per-connection server daemon (10.0.0.1:47504).
Mar 17 17:48:30.970941 systemd-logind[1491]: Removed session 31.
Mar 17 17:48:31.020467 sshd[4476]: Accepted publickey for core from 10.0.0.1 port 47504 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM
Mar 17 17:48:31.021956 sshd-session[4476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:48:31.026361 systemd-logind[1491]: New session 32 of user core.
Mar 17 17:48:31.035823 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 17 17:48:31.280763 kubelet[2656]: E0317 17:48:31.277878 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:48:32.993129 containerd[1511]: time="2025-03-17T17:48:32.993031327Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 17:48:33.022585 containerd[1511]: time="2025-03-17T17:48:33.022510423Z" level=info msg="StopContainer for \"5a4f577cccf55f9b72d33176353ac0f2c2eb75dd54278d6cd8e407a99108270a\" with timeout 2 (s)"
Mar 17 17:48:33.032406 containerd[1511]: time="2025-03-17T17:48:33.032345976Z" level=info msg="Stop container \"5a4f577cccf55f9b72d33176353ac0f2c2eb75dd54278d6cd8e407a99108270a\" with signal terminated"
Mar 17 17:48:33.039984 systemd-networkd[1422]: lxc_health: Link DOWN
Mar 17 17:48:33.039997 systemd-networkd[1422]: lxc_health: Lost carrier
Mar 17 17:48:33.066741 systemd[1]: cri-containerd-5a4f577cccf55f9b72d33176353ac0f2c2eb75dd54278d6cd8e407a99108270a.scope: Deactivated successfully.
Mar 17 17:48:33.067236 systemd[1]: cri-containerd-5a4f577cccf55f9b72d33176353ac0f2c2eb75dd54278d6cd8e407a99108270a.scope: Consumed 7.703s CPU time, 120.7M memory peak, 148K read from disk, 13.3M written to disk.
Mar 17 17:48:33.088061 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a4f577cccf55f9b72d33176353ac0f2c2eb75dd54278d6cd8e407a99108270a-rootfs.mount: Deactivated successfully.
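
"StopContainer ... with timeout 2 (s)" followed by "Stop container ... with signal terminated" is the usual graceful-stop contract: the runtime sends SIGTERM, waits out the grace period, and only escalates to SIGKILL if the process is still running. The same pattern in plain Python, as an illustration of the semantics rather than containerd's implementation:

    import subprocess

    # Sketch: graceful stop with escalation, mirroring the SIGTERM-then-SIGKILL
    # contract behind "StopContainer ... with timeout 2 (s)".
    def stop_with_timeout(proc: subprocess.Popen, timeout: float = 2.0) -> int:
        proc.terminate()                       # SIGTERM: ask the process to exit
        try:
            return proc.wait(timeout=timeout)  # give it the grace period
        except subprocess.TimeoutExpired:
            proc.kill()                        # SIGKILL: stop waiting
            return proc.wait()

    p = subprocess.Popen(["sleep", "60"])
    print(stop_with_timeout(p))  # -15 if it honored SIGTERM, -9 if it was killed
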
Mar 17 17:48:33.165089 containerd[1511]: time="2025-03-17T17:48:33.164988923Z" level=info msg="StopContainer for \"3cb0f0bbef151bc2302c0098aa2c3317ebcb876e1533353dfd54e24bdbfda1cd\" with timeout 30 (s)"
Mar 17 17:48:33.165916 containerd[1511]: time="2025-03-17T17:48:33.165771723Z" level=info msg="Stop container \"3cb0f0bbef151bc2302c0098aa2c3317ebcb876e1533353dfd54e24bdbfda1cd\" with signal terminated"
Mar 17 17:48:33.178233 systemd[1]: cri-containerd-3cb0f0bbef151bc2302c0098aa2c3317ebcb876e1533353dfd54e24bdbfda1cd.scope: Deactivated successfully.
Mar 17 17:48:33.310463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cb0f0bbef151bc2302c0098aa2c3317ebcb876e1533353dfd54e24bdbfda1cd-rootfs.mount: Deactivated successfully.
Mar 17 17:48:33.563906 containerd[1511]: time="2025-03-17T17:48:33.563672209Z" level=info msg="shim disconnected" id=5a4f577cccf55f9b72d33176353ac0f2c2eb75dd54278d6cd8e407a99108270a namespace=k8s.io
Mar 17 17:48:33.563906 containerd[1511]: time="2025-03-17T17:48:33.563790406Z" level=warning msg="cleaning up after shim disconnected" id=5a4f577cccf55f9b72d33176353ac0f2c2eb75dd54278d6cd8e407a99108270a namespace=k8s.io
Mar 17 17:48:33.563906 containerd[1511]: time="2025-03-17T17:48:33.563805626Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:48:33.564173 containerd[1511]: time="2025-03-17T17:48:33.563954761Z" level=info msg="shim disconnected" id=3cb0f0bbef151bc2302c0098aa2c3317ebcb876e1533353dfd54e24bdbfda1cd namespace=k8s.io
Mar 17 17:48:33.564173 containerd[1511]: time="2025-03-17T17:48:33.564016750Z" level=warning msg="cleaning up after shim disconnected" id=3cb0f0bbef151bc2302c0098aa2c3317ebcb876e1533353dfd54e24bdbfda1cd namespace=k8s.io
Mar 17 17:48:33.564173 containerd[1511]: time="2025-03-17T17:48:33.564028142Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:48:33.671843 containerd[1511]: time="2025-03-17T17:48:33.671766991Z" level=info msg="StopContainer for \"5a4f577cccf55f9b72d33176353ac0f2c2eb75dd54278d6cd8e407a99108270a\" returns successfully"
Mar 17 17:48:33.675541 containerd[1511]: time="2025-03-17T17:48:33.675489136Z" level=info msg="StopPodSandbox for \"a1fbed3f59b9f2b1b799fbc5ed7bcc4b0058b9a1a9a894a72c8041570d7493ad\""
Mar 17 17:48:33.678778 containerd[1511]: time="2025-03-17T17:48:33.675538811Z" level=info msg="Container to stop \"863c9f0618b14d8a5c02d42a51ec80d3cbd2bd9d1505b1e4f8af7ffa73945caf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:48:33.678778 containerd[1511]: time="2025-03-17T17:48:33.678758453Z" level=info msg="Container to stop \"c7a47ed572b56a56b2befcc2fc14f4be2f1cfd55329a182964aa09cd50560c8a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:48:33.678778 containerd[1511]: time="2025-03-17T17:48:33.678769303Z" level=info msg="Container to stop \"5a4f577cccf55f9b72d33176353ac0f2c2eb75dd54278d6cd8e407a99108270a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:48:33.678778 containerd[1511]: time="2025-03-17T17:48:33.678778161Z" level=info msg="Container to stop \"70189a6b36726a6d67ecba0d61cee43819372c5868ec9c1df364f4a269aad0d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:48:33.678938 containerd[1511]: time="2025-03-17T17:48:33.678791115Z" level=info msg="Container to stop \"675e874c34f509a72e3bc05a5cd8ce034e3f2c6d115e81b5936044b3e6db3f37\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:48:33.681801 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a1fbed3f59b9f2b1b799fbc5ed7bcc4b0058b9a1a9a894a72c8041570d7493ad-shm.mount: Deactivated successfully.
Mar 17 17:48:33.685472 systemd[1]: cri-containerd-a1fbed3f59b9f2b1b799fbc5ed7bcc4b0058b9a1a9a894a72c8041570d7493ad.scope: Deactivated successfully.
Mar 17 17:48:33.690529 containerd[1511]: time="2025-03-17T17:48:33.690474120Z" level=info msg="StopContainer for \"3cb0f0bbef151bc2302c0098aa2c3317ebcb876e1533353dfd54e24bdbfda1cd\" returns successfully"
Mar 17 17:48:33.691232 containerd[1511]: time="2025-03-17T17:48:33.691027911Z" level=info msg="StopPodSandbox for \"e1fc2ca70bccf3d17ae68cb44db5eb5c81d306e47823a579dac6f1c96fc5f731\""
Mar 17 17:48:33.691232 containerd[1511]: time="2025-03-17T17:48:33.691073900Z" level=info msg="Container to stop \"3cb0f0bbef151bc2302c0098aa2c3317ebcb876e1533353dfd54e24bdbfda1cd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:48:33.695170 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e1fc2ca70bccf3d17ae68cb44db5eb5c81d306e47823a579dac6f1c96fc5f731-shm.mount: Deactivated successfully.
Mar 17 17:48:33.699147 systemd[1]: cri-containerd-e1fc2ca70bccf3d17ae68cb44db5eb5c81d306e47823a579dac6f1c96fc5f731.scope: Deactivated successfully.
Mar 17 17:48:33.907679 containerd[1511]: time="2025-03-17T17:48:33.907566354Z" level=info msg="shim disconnected" id=a1fbed3f59b9f2b1b799fbc5ed7bcc4b0058b9a1a9a894a72c8041570d7493ad namespace=k8s.io
Mar 17 17:48:33.907679 containerd[1511]: time="2025-03-17T17:48:33.907634455Z" level=warning msg="cleaning up after shim disconnected" id=a1fbed3f59b9f2b1b799fbc5ed7bcc4b0058b9a1a9a894a72c8041570d7493ad namespace=k8s.io
Mar 17 17:48:33.908159 containerd[1511]: time="2025-03-17T17:48:33.907646358Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:48:33.908781 containerd[1511]: time="2025-03-17T17:48:33.907675173Z" level=info msg="shim disconnected" id=e1fc2ca70bccf3d17ae68cb44db5eb5c81d306e47823a579dac6f1c96fc5f731 namespace=k8s.io
Mar 17 17:48:33.908781 containerd[1511]: time="2025-03-17T17:48:33.908495645Z" level=warning msg="cleaning up after shim disconnected" id=e1fc2ca70bccf3d17ae68cb44db5eb5c81d306e47823a579dac6f1c96fc5f731 namespace=k8s.io
Mar 17 17:48:33.908781 containerd[1511]: time="2025-03-17T17:48:33.908508681Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:48:33.925812 containerd[1511]: time="2025-03-17T17:48:33.925756302Z" level=info msg="TearDown network for sandbox \"a1fbed3f59b9f2b1b799fbc5ed7bcc4b0058b9a1a9a894a72c8041570d7493ad\" successfully"
Mar 17 17:48:33.925812 containerd[1511]: time="2025-03-17T17:48:33.925807459Z" level=info msg="StopPodSandbox for \"a1fbed3f59b9f2b1b799fbc5ed7bcc4b0058b9a1a9a894a72c8041570d7493ad\" returns successfully"
Mar 17 17:48:33.925986 containerd[1511]: time="2025-03-17T17:48:33.925933942Z" level=info msg="TearDown network for sandbox \"e1fc2ca70bccf3d17ae68cb44db5eb5c81d306e47823a579dac6f1c96fc5f731\" successfully"
Mar 17 17:48:33.925986 containerd[1511]: time="2025-03-17T17:48:33.925968028Z" level=info msg="StopPodSandbox for \"e1fc2ca70bccf3d17ae68cb44db5eb5c81d306e47823a579dac6f1c96fc5f731\" returns successfully"
Mar 17 17:48:33.968472 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1fbed3f59b9f2b1b799fbc5ed7bcc4b0058b9a1a9a894a72c8041570d7493ad-rootfs.mount: Deactivated successfully.
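
The containerd entries are logfmt-style: space-separated key="value" pairs (time, level, msg, plus extras such as id and namespace on the shim records above). A rough parser for pulling fields out of such lines; good enough for this capture, not a complete logfmt implementation:

    import re

    # Sketch: extract key="quoted value" and key=bare pairs from containerd's
    # logfmt-style output. Handles the escaped \" inside msg values seen above.
    PAIR = re.compile(r'(\w+)=(?:"((?:[^"\\]|\\.)*)"|(\S+))')

    def parse_containerd(line: str) -> dict:
        out = {}
        for m in PAIR.finditer(line):
            out[m.group(1)] = m.group(2) if m.group(2) is not None else m.group(3)
        return out

    rec = parse_containerd(
        'time="2025-03-17T17:48:33.907566354Z" level=info '
        'msg="shim disconnected" id=a1fbed3f59b9 namespace=k8s.io'
    )
    print(rec["level"], rec["msg"], rec["id"])  # info shim disconnected a1fbed3f59b9
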
Mar 17 17:48:33.968649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1fc2ca70bccf3d17ae68cb44db5eb5c81d306e47823a579dac6f1c96fc5f731-rootfs.mount: Deactivated successfully.
Mar 17 17:48:34.037532 kubelet[2656]: I0317 17:48:34.037469 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-host-proc-sys-net\") pod \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\" (UID: \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\") "
Mar 17 17:48:34.037532 kubelet[2656]: I0317 17:48:34.037523 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-clustermesh-secrets\") pod \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\" (UID: \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\") "
Mar 17 17:48:34.037532 kubelet[2656]: I0317 17:48:34.037542 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-cilium-config-path\") pod \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\" (UID: \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\") "
Mar 17 17:48:34.038222 kubelet[2656]: I0317 17:48:34.037560 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwcld\" (UniqueName: \"kubernetes.io/projected/40898f7e-7a49-4b36-acd7-e0548b3d94d2-kube-api-access-gwcld\") pod \"40898f7e-7a49-4b36-acd7-e0548b3d94d2\" (UID: \"40898f7e-7a49-4b36-acd7-e0548b3d94d2\") "
Mar 17 17:48:34.038222 kubelet[2656]: I0317 17:48:34.037576 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-cni-path\") pod \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\" (UID: \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\") "
Mar 17 17:48:34.038539 kubelet[2656]: I0317 17:48:34.038417 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-cilium-cgroup\") pod \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\" (UID: \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\") "
Mar 17 17:48:34.038539 kubelet[2656]: I0317 17:48:34.038468 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-hostproc\") pod \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\" (UID: \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\") "
Mar 17 17:48:34.038539 kubelet[2656]: I0317 17:48:34.038498 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-hubble-tls\") pod \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\" (UID: \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\") "
Mar 17 17:48:34.038539 kubelet[2656]: I0317 17:48:34.038520 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-host-proc-sys-kernel\") pod \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\" (UID: \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\") "
Mar 17 17:48:34.038539 kubelet[2656]: I0317 17:48:34.038547 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-xtables-lock\") pod \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\" (UID: \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\") "
Mar 17 17:48:34.038880 kubelet[2656]: I0317 17:48:34.038567 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-bpf-maps\") pod \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\" (UID: \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\") "
Mar 17 17:48:34.038880 kubelet[2656]: I0317 17:48:34.038593 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq5fx\" (UniqueName: \"kubernetes.io/projected/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-kube-api-access-nq5fx\") pod \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\" (UID: \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\") "
Mar 17 17:48:34.038880 kubelet[2656]: I0317 17:48:34.038617 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-lib-modules\") pod \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\" (UID: \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\") "
Mar 17 17:48:34.038880 kubelet[2656]: I0317 17:48:34.038642 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/40898f7e-7a49-4b36-acd7-e0548b3d94d2-cilium-config-path\") pod \"40898f7e-7a49-4b36-acd7-e0548b3d94d2\" (UID: \"40898f7e-7a49-4b36-acd7-e0548b3d94d2\") "
Mar 17 17:48:34.038880 kubelet[2656]: I0317 17:48:34.038666 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-cilium-run\") pod \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\" (UID: \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\") "
Mar 17 17:48:34.038880 kubelet[2656]: I0317 17:48:34.038691 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-etc-cni-netd\") pod \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\" (UID: \"5da62c3a-9819-43c6-8ea8-afe9d4a3bca9\") "
Mar 17 17:48:34.039100 kubelet[2656]: I0317 17:48:34.037678 2656 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5da62c3a-9819-43c6-8ea8-afe9d4a3bca9" (UID: "5da62c3a-9819-43c6-8ea8-afe9d4a3bca9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:48:34.039100 kubelet[2656]: I0317 17:48:34.038267 2656 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-cni-path" (OuterVolumeSpecName: "cni-path") pod "5da62c3a-9819-43c6-8ea8-afe9d4a3bca9" (UID: "5da62c3a-9819-43c6-8ea8-afe9d4a3bca9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:48:34.039100 kubelet[2656]: I0317 17:48:34.038845 2656 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5da62c3a-9819-43c6-8ea8-afe9d4a3bca9" (UID: "5da62c3a-9819-43c6-8ea8-afe9d4a3bca9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:48:34.039100 kubelet[2656]: I0317 17:48:34.038920 2656 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5da62c3a-9819-43c6-8ea8-afe9d4a3bca9" (UID: "5da62c3a-9819-43c6-8ea8-afe9d4a3bca9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:48:34.039100 kubelet[2656]: I0317 17:48:34.038943 2656 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-hostproc" (OuterVolumeSpecName: "hostproc") pod "5da62c3a-9819-43c6-8ea8-afe9d4a3bca9" (UID: "5da62c3a-9819-43c6-8ea8-afe9d4a3bca9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:48:34.041256 kubelet[2656]: I0317 17:48:34.041211 2656 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5da62c3a-9819-43c6-8ea8-afe9d4a3bca9" (UID: "5da62c3a-9819-43c6-8ea8-afe9d4a3bca9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:48:34.041661 kubelet[2656]: I0317 17:48:34.041356 2656 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5da62c3a-9819-43c6-8ea8-afe9d4a3bca9" (UID: "5da62c3a-9819-43c6-8ea8-afe9d4a3bca9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:48:34.041661 kubelet[2656]: I0317 17:48:34.041380 2656 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5da62c3a-9819-43c6-8ea8-afe9d4a3bca9" (UID: "5da62c3a-9819-43c6-8ea8-afe9d4a3bca9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:48:34.042146 kubelet[2656]: I0317 17:48:34.042115 2656 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5da62c3a-9819-43c6-8ea8-afe9d4a3bca9" (UID: "5da62c3a-9819-43c6-8ea8-afe9d4a3bca9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 17 17:48:34.042334 kubelet[2656]: I0317 17:48:34.042308 2656 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5da62c3a-9819-43c6-8ea8-afe9d4a3bca9" (UID: "5da62c3a-9819-43c6-8ea8-afe9d4a3bca9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 17 17:48:34.042379 kubelet[2656]: I0317 17:48:34.042358 2656 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5da62c3a-9819-43c6-8ea8-afe9d4a3bca9" (UID: "5da62c3a-9819-43c6-8ea8-afe9d4a3bca9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:48:34.042413 kubelet[2656]: I0317 17:48:34.042388 2656 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5da62c3a-9819-43c6-8ea8-afe9d4a3bca9" (UID: "5da62c3a-9819-43c6-8ea8-afe9d4a3bca9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:48:34.043564 systemd[1]: var-lib-kubelet-pods-5da62c3a\x2d9819\x2d43c6\x2d8ea8\x2dafe9d4a3bca9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 17 17:48:34.044767 kubelet[2656]: I0317 17:48:34.044632 2656 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40898f7e-7a49-4b36-acd7-e0548b3d94d2-kube-api-access-gwcld" (OuterVolumeSpecName: "kube-api-access-gwcld") pod "40898f7e-7a49-4b36-acd7-e0548b3d94d2" (UID: "40898f7e-7a49-4b36-acd7-e0548b3d94d2"). InnerVolumeSpecName "kube-api-access-gwcld". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 17 17:48:34.051197 systemd[1]: var-lib-kubelet-pods-5da62c3a\x2d9819\x2d43c6\x2d8ea8\x2dafe9d4a3bca9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 17 17:48:34.051332 systemd[1]: var-lib-kubelet-pods-5da62c3a\x2d9819\x2d43c6\x2d8ea8\x2dafe9d4a3bca9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnq5fx.mount: Deactivated successfully.
Mar 17 17:48:34.051427 systemd[1]: var-lib-kubelet-pods-40898f7e\x2d7a49\x2d4b36\x2dacd7\x2de0548b3d94d2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgwcld.mount: Deactivated successfully.
Mar 17 17:48:34.051581 kubelet[2656]: I0317 17:48:34.051545 2656 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5da62c3a-9819-43c6-8ea8-afe9d4a3bca9" (UID: "5da62c3a-9819-43c6-8ea8-afe9d4a3bca9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 17 17:48:34.051828 kubelet[2656]: I0317 17:48:34.051758 2656 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-kube-api-access-nq5fx" (OuterVolumeSpecName: "kube-api-access-nq5fx") pod "5da62c3a-9819-43c6-8ea8-afe9d4a3bca9" (UID: "5da62c3a-9819-43c6-8ea8-afe9d4a3bca9"). InnerVolumeSpecName "kube-api-access-nq5fx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 17 17:48:34.053545 kubelet[2656]: I0317 17:48:34.053501 2656 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40898f7e-7a49-4b36-acd7-e0548b3d94d2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "40898f7e-7a49-4b36-acd7-e0548b3d94d2" (UID: "40898f7e-7a49-4b36-acd7-e0548b3d94d2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 17 17:48:34.139011 kubelet[2656]: I0317 17:48:34.138954 2656 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-lib-modules\") on node \"localhost\" DevicePath \"\""
Mar 17 17:48:34.139011 kubelet[2656]: I0317 17:48:34.138991 2656 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/40898f7e-7a49-4b36-acd7-e0548b3d94d2-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 17 17:48:34.139011 kubelet[2656]: I0317 17:48:34.139003 2656 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-cilium-run\") on node \"localhost\" DevicePath \"\""
Mar 17 17:48:34.139011 kubelet[2656]: I0317 17:48:34.139011 2656 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Mar 17 17:48:34.139011 kubelet[2656]: I0317 17:48:34.139019 2656 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Mar 17 17:48:34.139011 kubelet[2656]: I0317 17:48:34.139029 2656 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 17 17:48:34.139331 kubelet[2656]: I0317 17:48:34.139037 2656 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gwcld\" (UniqueName: \"kubernetes.io/projected/40898f7e-7a49-4b36-acd7-e0548b3d94d2-kube-api-access-gwcld\") on node \"localhost\" DevicePath \"\""
Mar 17 17:48:34.139331 kubelet[2656]: I0317 17:48:34.139056 2656 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Mar 17 17:48:34.139331 kubelet[2656]: I0317 17:48:34.139064 2656 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-hostproc\") on node \"localhost\" DevicePath \"\""
Mar 17 17:48:34.139331 kubelet[2656]: I0317 17:48:34.139071 2656 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-hubble-tls\") on node \"localhost\" DevicePath \"\""
Mar 17 17:48:34.139331 kubelet[2656]: I0317 17:48:34.139079 2656 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Mar 17 17:48:34.139331 kubelet[2656]: I0317 17:48:34.139088 2656 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-cni-path\") on node \"localhost\" DevicePath \"\""
Mar 17 17:48:34.139331 kubelet[2656]: I0317 17:48:34.139095 2656 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Mar 17 17:48:34.139331 kubelet[2656]: I0317 17:48:34.139103 2656 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-bpf-maps\") on node \"localhost\" DevicePath \"\""
Mar 17 17:48:34.139557 kubelet[2656]: I0317 17:48:34.139111 2656 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nq5fx\" (UniqueName: \"kubernetes.io/projected/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-kube-api-access-nq5fx\") on node \"localhost\" DevicePath \"\""
Mar 17 17:48:34.139557 kubelet[2656]: I0317 17:48:34.139118 2656 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9-xtables-lock\") on node \"localhost\" DevicePath \"\""
Mar 17 17:48:34.440283 sshd[4479]: Connection closed by 10.0.0.1 port 47504
Mar 17 17:48:34.453565 systemd[1]: Started sshd@32-10.0.0.15:22-10.0.0.1:47518.service - OpenSSH per-connection server daemon (10.0.0.1:47518).
Mar 17 17:48:34.484599 sshd-session[4476]: pam_unix(sshd:session): session closed for user core
Mar 17 17:48:34.489309 systemd[1]: sshd@31-10.0.0.15:22-10.0.0.1:47504.service: Deactivated successfully.
Mar 17 17:48:34.492087 systemd[1]: session-32.scope: Deactivated successfully.
Mar 17 17:48:34.493670 systemd-logind[1491]: Session 32 logged out. Waiting for processes to exit.
Mar 17 17:48:34.495256 systemd-logind[1491]: Removed session 32.
Mar 17 17:48:34.522393 sshd[4641]: Accepted publickey for core from 10.0.0.1 port 47518 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM
Mar 17 17:48:34.524324 sshd-session[4641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:48:34.530064 systemd-logind[1491]: New session 33 of user core.
Mar 17 17:48:34.535089 kubelet[2656]: I0317 17:48:34.535054 2656 scope.go:117] "RemoveContainer" containerID="3cb0f0bbef151bc2302c0098aa2c3317ebcb876e1533353dfd54e24bdbfda1cd"
Mar 17 17:48:34.540434 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 17 17:48:34.543218 systemd[1]: Removed slice kubepods-besteffort-pod40898f7e_7a49_4b36_acd7_e0548b3d94d2.slice - libcontainer container kubepods-besteffort-pod40898f7e_7a49_4b36_acd7_e0548b3d94d2.slice.
Mar 17 17:48:34.545392 containerd[1511]: time="2025-03-17T17:48:34.544643187Z" level=info msg="RemoveContainer for \"3cb0f0bbef151bc2302c0098aa2c3317ebcb876e1533353dfd54e24bdbfda1cd\""
Mar 17 17:48:34.549198 systemd[1]: Removed slice kubepods-burstable-pod5da62c3a_9819_43c6_8ea8_afe9d4a3bca9.slice - libcontainer container kubepods-burstable-pod5da62c3a_9819_43c6_8ea8_afe9d4a3bca9.slice.
Mar 17 17:48:34.549607 systemd[1]: kubepods-burstable-pod5da62c3a_9819_43c6_8ea8_afe9d4a3bca9.slice: Consumed 7.821s CPU time, 121M memory peak, 160K read from disk, 13.3M written to disk.
Mar 17 17:48:34.669619 containerd[1511]: time="2025-03-17T17:48:34.669553324Z" level=info msg="RemoveContainer for \"3cb0f0bbef151bc2302c0098aa2c3317ebcb876e1533353dfd54e24bdbfda1cd\" returns successfully"
Mar 17 17:48:34.670003 kubelet[2656]: I0317 17:48:34.669966 2656 scope.go:117] "RemoveContainer" containerID="3cb0f0bbef151bc2302c0098aa2c3317ebcb876e1533353dfd54e24bdbfda1cd"
Mar 17 17:48:34.670387 containerd[1511]: time="2025-03-17T17:48:34.670320826Z" level=error msg="ContainerStatus for \"3cb0f0bbef151bc2302c0098aa2c3317ebcb876e1533353dfd54e24bdbfda1cd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3cb0f0bbef151bc2302c0098aa2c3317ebcb876e1533353dfd54e24bdbfda1cd\": not found"
Mar 17 17:48:34.677259 kubelet[2656]: E0317 17:48:34.677212 2656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3cb0f0bbef151bc2302c0098aa2c3317ebcb876e1533353dfd54e24bdbfda1cd\": not found" containerID="3cb0f0bbef151bc2302c0098aa2c3317ebcb876e1533353dfd54e24bdbfda1cd"
Mar 17 17:48:34.677429 kubelet[2656]: I0317 17:48:34.677272 2656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3cb0f0bbef151bc2302c0098aa2c3317ebcb876e1533353dfd54e24bdbfda1cd"} err="failed to get container status \"3cb0f0bbef151bc2302c0098aa2c3317ebcb876e1533353dfd54e24bdbfda1cd\": rpc error: code = NotFound desc = an error occurred when try to find container \"3cb0f0bbef151bc2302c0098aa2c3317ebcb876e1533353dfd54e24bdbfda1cd\": not found"
Mar 17 17:48:34.677429 kubelet[2656]: I0317 17:48:34.677368 2656 scope.go:117] "RemoveContainer" containerID="5a4f577cccf55f9b72d33176353ac0f2c2eb75dd54278d6cd8e407a99108270a"
Mar 17 17:48:34.679395 containerd[1511]: time="2025-03-17T17:48:34.679347910Z" level=info msg="RemoveContainer for \"5a4f577cccf55f9b72d33176353ac0f2c2eb75dd54278d6cd8e407a99108270a\""
Mar 17 17:48:34.745902 containerd[1511]: time="2025-03-17T17:48:34.745616855Z" level=info msg="RemoveContainer for \"5a4f577cccf55f9b72d33176353ac0f2c2eb75dd54278d6cd8e407a99108270a\" returns successfully"
Mar 17 17:48:34.746100 kubelet[2656]: I0317 17:48:34.745939 2656 scope.go:117] "RemoveContainer" containerID="c7a47ed572b56a56b2befcc2fc14f4be2f1cfd55329a182964aa09cd50560c8a"
Mar 17 17:48:34.749300 containerd[1511]: time="2025-03-17T17:48:34.749258185Z" level=info msg="RemoveContainer for \"c7a47ed572b56a56b2befcc2fc14f4be2f1cfd55329a182964aa09cd50560c8a\""
Mar 17 17:48:34.839273 containerd[1511]: time="2025-03-17T17:48:34.839201804Z" level=info msg="RemoveContainer for \"c7a47ed572b56a56b2befcc2fc14f4be2f1cfd55329a182964aa09cd50560c8a\" returns successfully"
Mar 17 17:48:34.839504 kubelet[2656]: I0317 17:48:34.839485 2656 scope.go:117] "RemoveContainer" containerID="675e874c34f509a72e3bc05a5cd8ce034e3f2c6d115e81b5936044b3e6db3f37"
Mar 17 17:48:34.842013 containerd[1511]: time="2025-03-17T17:48:34.841537963Z" level=info msg="RemoveContainer for \"675e874c34f509a72e3bc05a5cd8ce034e3f2c6d115e81b5936044b3e6db3f37\""
Mar 17 17:48:34.961960 containerd[1511]: time="2025-03-17T17:48:34.961893390Z" level=info msg="RemoveContainer for \"675e874c34f509a72e3bc05a5cd8ce034e3f2c6d115e81b5936044b3e6db3f37\" returns successfully"
Mar 17 17:48:34.962293 kubelet[2656]: I0317 17:48:34.962255 2656 scope.go:117] "RemoveContainer" containerID="70189a6b36726a6d67ecba0d61cee43819372c5868ec9c1df364f4a269aad0d9"
Mar 17 17:48:34.963585 containerd[1511]: time="2025-03-17T17:48:34.963557159Z" level=info msg="RemoveContainer for \"70189a6b36726a6d67ecba0d61cee43819372c5868ec9c1df364f4a269aad0d9\""
Mar 17 17:48:35.019771 containerd[1511]: time="2025-03-17T17:48:35.019587609Z" level=info msg="RemoveContainer for \"70189a6b36726a6d67ecba0d61cee43819372c5868ec9c1df364f4a269aad0d9\" returns successfully"
Mar 17 17:48:35.019924 kubelet[2656]: I0317 17:48:35.019881 2656 scope.go:117] "RemoveContainer" containerID="863c9f0618b14d8a5c02d42a51ec80d3cbd2bd9d1505b1e4f8af7ffa73945caf"
Mar 17 17:48:35.021104 containerd[1511]: time="2025-03-17T17:48:35.021067907Z" level=info msg="RemoveContainer for \"863c9f0618b14d8a5c02d42a51ec80d3cbd2bd9d1505b1e4f8af7ffa73945caf\""
Mar 17 17:48:35.090722 containerd[1511]: time="2025-03-17T17:48:35.090639110Z" level=info msg="RemoveContainer for \"863c9f0618b14d8a5c02d42a51ec80d3cbd2bd9d1505b1e4f8af7ffa73945caf\" returns successfully"
Mar 17 17:48:35.091116 kubelet[2656]: I0317 17:48:35.091068 2656 scope.go:117] "RemoveContainer" containerID="5a4f577cccf55f9b72d33176353ac0f2c2eb75dd54278d6cd8e407a99108270a"
Mar 17 17:48:35.091596 kubelet[2656]: E0317 17:48:35.091510 2656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5a4f577cccf55f9b72d33176353ac0f2c2eb75dd54278d6cd8e407a99108270a\": not found" containerID="5a4f577cccf55f9b72d33176353ac0f2c2eb75dd54278d6cd8e407a99108270a"
Mar 17 17:48:35.091596 kubelet[2656]: I0317 17:48:35.091553 2656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5a4f577cccf55f9b72d33176353ac0f2c2eb75dd54278d6cd8e407a99108270a"} err="failed to get container status \"5a4f577cccf55f9b72d33176353ac0f2c2eb75dd54278d6cd8e407a99108270a\": rpc error: code = NotFound desc = an error occurred when try to find container \"5a4f577cccf55f9b72d33176353ac0f2c2eb75dd54278d6cd8e407a99108270a\": not found"
Mar 17 17:48:35.091596 kubelet[2656]: I0317 17:48:35.091582 2656 scope.go:117] "RemoveContainer" containerID="c7a47ed572b56a56b2befcc2fc14f4be2f1cfd55329a182964aa09cd50560c8a"
Mar 17 17:48:35.091681 containerd[1511]: time="2025-03-17T17:48:35.091382014Z" level=error msg="ContainerStatus for \"5a4f577cccf55f9b72d33176353ac0f2c2eb75dd54278d6cd8e407a99108270a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5a4f577cccf55f9b72d33176353ac0f2c2eb75dd54278d6cd8e407a99108270a\": not found"
Mar 17 17:48:35.091922 containerd[1511]: time="2025-03-17T17:48:35.091836836Z" level=error msg="ContainerStatus for \"c7a47ed572b56a56b2befcc2fc14f4be2f1cfd55329a182964aa09cd50560c8a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c7a47ed572b56a56b2befcc2fc14f4be2f1cfd55329a182964aa09cd50560c8a\": not found"
Mar 17 17:48:35.091983 kubelet[2656]: E0317 17:48:35.091959 2656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c7a47ed572b56a56b2befcc2fc14f4be2f1cfd55329a182964aa09cd50560c8a\": not found" containerID="c7a47ed572b56a56b2befcc2fc14f4be2f1cfd55329a182964aa09cd50560c8a"
Mar 17 17:48:35.092012 kubelet[2656]: I0317 17:48:35.091985 2656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c7a47ed572b56a56b2befcc2fc14f4be2f1cfd55329a182964aa09cd50560c8a"} err="failed to get container status \"c7a47ed572b56a56b2befcc2fc14f4be2f1cfd55329a182964aa09cd50560c8a\": rpc error: code = NotFound desc = an error occurred when try to find container \"c7a47ed572b56a56b2befcc2fc14f4be2f1cfd55329a182964aa09cd50560c8a\": not found"
Mar 17 17:48:35.092012 kubelet[2656]: I0317 17:48:35.092006 2656 scope.go:117] "RemoveContainer" containerID="675e874c34f509a72e3bc05a5cd8ce034e3f2c6d115e81b5936044b3e6db3f37"
Mar 17 17:48:35.092395 containerd[1511]: time="2025-03-17T17:48:35.092312057Z" level=error msg="ContainerStatus for \"675e874c34f509a72e3bc05a5cd8ce034e3f2c6d115e81b5936044b3e6db3f37\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"675e874c34f509a72e3bc05a5cd8ce034e3f2c6d115e81b5936044b3e6db3f37\": not found"
Mar 17 17:48:35.092518 kubelet[2656]: E0317 17:48:35.092472 2656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"675e874c34f509a72e3bc05a5cd8ce034e3f2c6d115e81b5936044b3e6db3f37\": not found" containerID="675e874c34f509a72e3bc05a5cd8ce034e3f2c6d115e81b5936044b3e6db3f37"
Mar 17 17:48:35.092518 kubelet[2656]: I0317 17:48:35.092509 2656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"675e874c34f509a72e3bc05a5cd8ce034e3f2c6d115e81b5936044b3e6db3f37"} err="failed to get container status \"675e874c34f509a72e3bc05a5cd8ce034e3f2c6d115e81b5936044b3e6db3f37\": rpc error: code = NotFound desc = an error occurred when try to find container \"675e874c34f509a72e3bc05a5cd8ce034e3f2c6d115e81b5936044b3e6db3f37\": not found"
Mar 17 17:48:35.092624 kubelet[2656]: I0317 17:48:35.092531 2656 scope.go:117] "RemoveContainer" containerID="70189a6b36726a6d67ecba0d61cee43819372c5868ec9c1df364f4a269aad0d9"
Mar 17 17:48:35.092751 containerd[1511]: time="2025-03-17T17:48:35.092688698Z" level=error msg="ContainerStatus for \"70189a6b36726a6d67ecba0d61cee43819372c5868ec9c1df364f4a269aad0d9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"70189a6b36726a6d67ecba0d61cee43819372c5868ec9c1df364f4a269aad0d9\": not found"
Mar 17 17:48:35.093137 kubelet[2656]: E0317 17:48:35.093006 2656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"70189a6b36726a6d67ecba0d61cee43819372c5868ec9c1df364f4a269aad0d9\": not found" containerID="70189a6b36726a6d67ecba0d61cee43819372c5868ec9c1df364f4a269aad0d9"
Mar 17 17:48:35.093137 kubelet[2656]: I0317 17:48:35.093037 2656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"70189a6b36726a6d67ecba0d61cee43819372c5868ec9c1df364f4a269aad0d9"} err="failed to get container status \"70189a6b36726a6d67ecba0d61cee43819372c5868ec9c1df364f4a269aad0d9\": rpc error: code = NotFound desc = an error occurred when try to find container \"70189a6b36726a6d67ecba0d61cee43819372c5868ec9c1df364f4a269aad0d9\": not found"
Mar 17 17:48:35.093137 kubelet[2656]: I0317 17:48:35.093072 2656 scope.go:117] "RemoveContainer" containerID="863c9f0618b14d8a5c02d42a51ec80d3cbd2bd9d1505b1e4f8af7ffa73945caf"
Mar 17 17:48:35.093377 containerd[1511]: time="2025-03-17T17:48:35.093324688Z" level=error msg="ContainerStatus for \"863c9f0618b14d8a5c02d42a51ec80d3cbd2bd9d1505b1e4f8af7ffa73945caf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container
\"863c9f0618b14d8a5c02d42a51ec80d3cbd2bd9d1505b1e4f8af7ffa73945caf\": not found" Mar 17 17:48:35.093499 kubelet[2656]: E0317 17:48:35.093475 2656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"863c9f0618b14d8a5c02d42a51ec80d3cbd2bd9d1505b1e4f8af7ffa73945caf\": not found" containerID="863c9f0618b14d8a5c02d42a51ec80d3cbd2bd9d1505b1e4f8af7ffa73945caf" Mar 17 17:48:35.093534 kubelet[2656]: I0317 17:48:35.093499 2656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"863c9f0618b14d8a5c02d42a51ec80d3cbd2bd9d1505b1e4f8af7ffa73945caf"} err="failed to get container status \"863c9f0618b14d8a5c02d42a51ec80d3cbd2bd9d1505b1e4f8af7ffa73945caf\": rpc error: code = NotFound desc = an error occurred when try to find container \"863c9f0618b14d8a5c02d42a51ec80d3cbd2bd9d1505b1e4f8af7ffa73945caf\": not found" Mar 17 17:48:35.280582 kubelet[2656]: I0317 17:48:35.280420 2656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40898f7e-7a49-4b36-acd7-e0548b3d94d2" path="/var/lib/kubelet/pods/40898f7e-7a49-4b36-acd7-e0548b3d94d2/volumes" Mar 17 17:48:35.281179 kubelet[2656]: I0317 17:48:35.281078 2656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5da62c3a-9819-43c6-8ea8-afe9d4a3bca9" path="/var/lib/kubelet/pods/5da62c3a-9819-43c6-8ea8-afe9d4a3bca9/volumes" Mar 17 17:48:35.319815 sshd[4646]: Connection closed by 10.0.0.1 port 47518 Mar 17 17:48:35.320304 sshd-session[4641]: pam_unix(sshd:session): session closed for user core Mar 17 17:48:35.342662 systemd[1]: sshd@32-10.0.0.15:22-10.0.0.1:47518.service: Deactivated successfully. Mar 17 17:48:35.348263 systemd[1]: session-33.scope: Deactivated successfully. Mar 17 17:48:35.353427 systemd-logind[1491]: Session 33 logged out. Waiting for processes to exit. Mar 17 17:48:35.368357 systemd[1]: Started sshd@33-10.0.0.15:22-10.0.0.1:47522.service - OpenSSH per-connection server daemon (10.0.0.1:47522). Mar 17 17:48:35.369633 systemd-logind[1491]: Removed session 33. Mar 17 17:48:35.407039 sshd[4657]: Accepted publickey for core from 10.0.0.1 port 47522 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM Mar 17 17:48:35.409022 sshd-session[4657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:48:35.414311 systemd-logind[1491]: New session 34 of user core. Mar 17 17:48:35.419890 systemd[1]: Started session-34.scope - Session 34 of User core. Mar 17 17:48:35.447912 kubelet[2656]: I0317 17:48:35.447867 2656 memory_manager.go:355] "RemoveStaleState removing state" podUID="40898f7e-7a49-4b36-acd7-e0548b3d94d2" containerName="cilium-operator" Mar 17 17:48:35.447912 kubelet[2656]: I0317 17:48:35.447898 2656 memory_manager.go:355] "RemoveStaleState removing state" podUID="5da62c3a-9819-43c6-8ea8-afe9d4a3bca9" containerName="cilium-agent" Mar 17 17:48:35.459860 systemd[1]: Created slice kubepods-burstable-pod2131a613_79d5_4ce7_8201_f7c099e4bb1d.slice - libcontainer container kubepods-burstable-pod2131a613_79d5_4ce7_8201_f7c099e4bb1d.slice. Mar 17 17:48:35.474001 sshd[4660]: Connection closed by 10.0.0.1 port 47522 Mar 17 17:48:35.474426 sshd-session[4657]: pam_unix(sshd:session): session closed for user core Mar 17 17:48:35.486894 systemd[1]: sshd@33-10.0.0.15:22-10.0.0.1:47522.service: Deactivated successfully. Mar 17 17:48:35.489207 systemd[1]: session-34.scope: Deactivated successfully. 
Mar 17 17:48:35.492789 systemd-logind[1491]: Session 34 logged out. Waiting for processes to exit. Mar 17 17:48:35.500215 systemd[1]: Started sshd@34-10.0.0.15:22-10.0.0.1:47536.service - OpenSSH per-connection server daemon (10.0.0.1:47536). Mar 17 17:48:35.501800 systemd-logind[1491]: Removed session 34. Mar 17 17:48:35.541334 sshd[4666]: Accepted publickey for core from 10.0.0.1 port 47536 ssh2: RSA SHA256:fvq/EnOzAjyVAI7Ny/Y8iSI7Zce+5eYVas+A6dENwjM Mar 17 17:48:35.543724 sshd-session[4666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:48:35.548663 kubelet[2656]: I0317 17:48:35.548626 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2131a613-79d5-4ce7-8201-f7c099e4bb1d-hubble-tls\") pod \"cilium-2qwwl\" (UID: \"2131a613-79d5-4ce7-8201-f7c099e4bb1d\") " pod="kube-system/cilium-2qwwl" Mar 17 17:48:35.548755 kubelet[2656]: I0317 17:48:35.548670 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2131a613-79d5-4ce7-8201-f7c099e4bb1d-host-proc-sys-kernel\") pod \"cilium-2qwwl\" (UID: \"2131a613-79d5-4ce7-8201-f7c099e4bb1d\") " pod="kube-system/cilium-2qwwl" Mar 17 17:48:35.548755 kubelet[2656]: I0317 17:48:35.548711 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2131a613-79d5-4ce7-8201-f7c099e4bb1d-cilium-run\") pod \"cilium-2qwwl\" (UID: \"2131a613-79d5-4ce7-8201-f7c099e4bb1d\") " pod="kube-system/cilium-2qwwl" Mar 17 17:48:35.548755 kubelet[2656]: I0317 17:48:35.548735 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2131a613-79d5-4ce7-8201-f7c099e4bb1d-cilium-config-path\") pod \"cilium-2qwwl\" (UID: \"2131a613-79d5-4ce7-8201-f7c099e4bb1d\") " pod="kube-system/cilium-2qwwl" Mar 17 17:48:35.548755 kubelet[2656]: I0317 17:48:35.548754 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsl75\" (UniqueName: \"kubernetes.io/projected/2131a613-79d5-4ce7-8201-f7c099e4bb1d-kube-api-access-rsl75\") pod \"cilium-2qwwl\" (UID: \"2131a613-79d5-4ce7-8201-f7c099e4bb1d\") " pod="kube-system/cilium-2qwwl" Mar 17 17:48:35.549126 kubelet[2656]: I0317 17:48:35.548771 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2131a613-79d5-4ce7-8201-f7c099e4bb1d-host-proc-sys-net\") pod \"cilium-2qwwl\" (UID: \"2131a613-79d5-4ce7-8201-f7c099e4bb1d\") " pod="kube-system/cilium-2qwwl" Mar 17 17:48:35.549126 kubelet[2656]: I0317 17:48:35.548790 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2131a613-79d5-4ce7-8201-f7c099e4bb1d-hostproc\") pod \"cilium-2qwwl\" (UID: \"2131a613-79d5-4ce7-8201-f7c099e4bb1d\") " pod="kube-system/cilium-2qwwl" Mar 17 17:48:35.549126 kubelet[2656]: I0317 17:48:35.548805 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2131a613-79d5-4ce7-8201-f7c099e4bb1d-cilium-cgroup\") pod \"cilium-2qwwl\" (UID: \"2131a613-79d5-4ce7-8201-f7c099e4bb1d\") 
" pod="kube-system/cilium-2qwwl" Mar 17 17:48:35.549126 kubelet[2656]: I0317 17:48:35.548820 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2131a613-79d5-4ce7-8201-f7c099e4bb1d-lib-modules\") pod \"cilium-2qwwl\" (UID: \"2131a613-79d5-4ce7-8201-f7c099e4bb1d\") " pod="kube-system/cilium-2qwwl" Mar 17 17:48:35.549126 kubelet[2656]: I0317 17:48:35.548838 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2131a613-79d5-4ce7-8201-f7c099e4bb1d-clustermesh-secrets\") pod \"cilium-2qwwl\" (UID: \"2131a613-79d5-4ce7-8201-f7c099e4bb1d\") " pod="kube-system/cilium-2qwwl" Mar 17 17:48:35.549126 kubelet[2656]: I0317 17:48:35.548869 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2131a613-79d5-4ce7-8201-f7c099e4bb1d-bpf-maps\") pod \"cilium-2qwwl\" (UID: \"2131a613-79d5-4ce7-8201-f7c099e4bb1d\") " pod="kube-system/cilium-2qwwl" Mar 17 17:48:35.549319 kubelet[2656]: I0317 17:48:35.548883 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2131a613-79d5-4ce7-8201-f7c099e4bb1d-xtables-lock\") pod \"cilium-2qwwl\" (UID: \"2131a613-79d5-4ce7-8201-f7c099e4bb1d\") " pod="kube-system/cilium-2qwwl" Mar 17 17:48:35.549319 kubelet[2656]: I0317 17:48:35.548895 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2131a613-79d5-4ce7-8201-f7c099e4bb1d-cilium-ipsec-secrets\") pod \"cilium-2qwwl\" (UID: \"2131a613-79d5-4ce7-8201-f7c099e4bb1d\") " pod="kube-system/cilium-2qwwl" Mar 17 17:48:35.549319 kubelet[2656]: I0317 17:48:35.548909 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2131a613-79d5-4ce7-8201-f7c099e4bb1d-cni-path\") pod \"cilium-2qwwl\" (UID: \"2131a613-79d5-4ce7-8201-f7c099e4bb1d\") " pod="kube-system/cilium-2qwwl" Mar 17 17:48:35.549319 kubelet[2656]: I0317 17:48:35.548923 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2131a613-79d5-4ce7-8201-f7c099e4bb1d-etc-cni-netd\") pod \"cilium-2qwwl\" (UID: \"2131a613-79d5-4ce7-8201-f7c099e4bb1d\") " pod="kube-system/cilium-2qwwl" Mar 17 17:48:35.549644 systemd-logind[1491]: New session 35 of user core. Mar 17 17:48:35.560920 systemd[1]: Started session-35.scope - Session 35 of User core. Mar 17 17:48:36.063820 kubelet[2656]: E0317 17:48:36.063766 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:48:36.064370 containerd[1511]: time="2025-03-17T17:48:36.064329227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2qwwl,Uid:2131a613-79d5-4ce7-8201-f7c099e4bb1d,Namespace:kube-system,Attempt:0,}" Mar 17 17:48:36.213685 containerd[1511]: time="2025-03-17T17:48:36.212868493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:48:36.213685 containerd[1511]: time="2025-03-17T17:48:36.213647316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:48:36.213685 containerd[1511]: time="2025-03-17T17:48:36.213664940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:48:36.213917 containerd[1511]: time="2025-03-17T17:48:36.213775792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:48:36.233951 systemd[1]: Started cri-containerd-ac2a9afa7a10d32cbb7ae345ddf19f58c30cd7b8abcfec2b6bd1bb8785f8edfc.scope - libcontainer container ac2a9afa7a10d32cbb7ae345ddf19f58c30cd7b8abcfec2b6bd1bb8785f8edfc. Mar 17 17:48:36.264208 containerd[1511]: time="2025-03-17T17:48:36.264151591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2qwwl,Uid:2131a613-79d5-4ce7-8201-f7c099e4bb1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac2a9afa7a10d32cbb7ae345ddf19f58c30cd7b8abcfec2b6bd1bb8785f8edfc\"" Mar 17 17:48:36.264913 kubelet[2656]: E0317 17:48:36.264868 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:48:36.267336 containerd[1511]: time="2025-03-17T17:48:36.267306780Z" level=info msg="CreateContainer within sandbox \"ac2a9afa7a10d32cbb7ae345ddf19f58c30cd7b8abcfec2b6bd1bb8785f8edfc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 17:48:36.546872 containerd[1511]: time="2025-03-17T17:48:36.546794907Z" level=info msg="CreateContainer within sandbox \"ac2a9afa7a10d32cbb7ae345ddf19f58c30cd7b8abcfec2b6bd1bb8785f8edfc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f19ab02e23238ac110e620f35e8a9c1769da606682ea2978e7f0dbff656b3c75\"" Mar 17 17:48:36.547962 containerd[1511]: time="2025-03-17T17:48:36.547906238Z" level=info msg="StartContainer for \"f19ab02e23238ac110e620f35e8a9c1769da606682ea2978e7f0dbff656b3c75\"" Mar 17 17:48:36.579935 systemd[1]: Started cri-containerd-f19ab02e23238ac110e620f35e8a9c1769da606682ea2978e7f0dbff656b3c75.scope - libcontainer container f19ab02e23238ac110e620f35e8a9c1769da606682ea2978e7f0dbff656b3c75. Mar 17 17:48:36.651870 systemd[1]: cri-containerd-f19ab02e23238ac110e620f35e8a9c1769da606682ea2978e7f0dbff656b3c75.scope: Deactivated successfully. Mar 17 17:48:36.762003 containerd[1511]: time="2025-03-17T17:48:36.761947212Z" level=info msg="StartContainer for \"f19ab02e23238ac110e620f35e8a9c1769da606682ea2978e7f0dbff656b3c75\" returns successfully" Mar 17 17:48:36.781393 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f19ab02e23238ac110e620f35e8a9c1769da606682ea2978e7f0dbff656b3c75-rootfs.mount: Deactivated successfully. 
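[Editor's note] The VerifyControllerAttachedVolume entries earlier list the volumes of the new cilium-2qwwl pod by name only. As a rough illustration, a few of them would be declared in the pod spec along these lines; the host paths and the secret name are assumptions based on typical Cilium defaults, since the log does not record them.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	dirOrCreate := corev1.HostPathDirectoryOrCreate
	optional := true

	// Volume names match the log; paths and secret name are assumed defaults.
	volumes := []corev1.Volume{
		{Name: "bpf-maps", VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/sys/fs/bpf", Type: &dirOrCreate}}},
		{Name: "lib-modules", VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/lib/modules"}}},
		{Name: "clustermesh-secrets", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "cilium-clustermesh", Optional: &optional}}},
	}
	for _, v := range volumes {
		fmt.Println("declared volume:", v.Name)
	}
}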
Mar 17 17:48:37.158662 containerd[1511]: time="2025-03-17T17:48:37.158585472Z" level=info msg="shim disconnected" id=f19ab02e23238ac110e620f35e8a9c1769da606682ea2978e7f0dbff656b3c75 namespace=k8s.io Mar 17 17:48:37.158662 containerd[1511]: time="2025-03-17T17:48:37.158651759Z" level=warning msg="cleaning up after shim disconnected" id=f19ab02e23238ac110e620f35e8a9c1769da606682ea2978e7f0dbff656b3c75 namespace=k8s.io Mar 17 17:48:37.158662 containerd[1511]: time="2025-03-17T17:48:37.158662881Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:48:37.277634 kubelet[2656]: E0317 17:48:37.277583 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:48:37.381199 kubelet[2656]: E0317 17:48:37.381136 2656 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 17:48:37.551669 kubelet[2656]: E0317 17:48:37.551131 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:48:37.552821 containerd[1511]: time="2025-03-17T17:48:37.552761796Z" level=info msg="CreateContainer within sandbox \"ac2a9afa7a10d32cbb7ae345ddf19f58c30cd7b8abcfec2b6bd1bb8785f8edfc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 17:48:37.985424 containerd[1511]: time="2025-03-17T17:48:37.985350697Z" level=info msg="CreateContainer within sandbox \"ac2a9afa7a10d32cbb7ae345ddf19f58c30cd7b8abcfec2b6bd1bb8785f8edfc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7197b5015699a2e07820859e582fe05e9911f218651c121311092b860f9b5da9\"" Mar 17 17:48:37.986068 containerd[1511]: time="2025-03-17T17:48:37.986006304Z" level=info msg="StartContainer for \"7197b5015699a2e07820859e582fe05e9911f218651c121311092b860f9b5da9\"" Mar 17 17:48:38.024576 systemd[1]: Started cri-containerd-7197b5015699a2e07820859e582fe05e9911f218651c121311092b860f9b5da9.scope - libcontainer container 7197b5015699a2e07820859e582fe05e9911f218651c121311092b860f9b5da9. Mar 17 17:48:38.076142 systemd[1]: cri-containerd-7197b5015699a2e07820859e582fe05e9911f218651c121311092b860f9b5da9.scope: Deactivated successfully. Mar 17 17:48:38.201963 containerd[1511]: time="2025-03-17T17:48:38.201893929Z" level=info msg="StartContainer for \"7197b5015699a2e07820859e582fe05e9911f218651c121311092b860f9b5da9\" returns successfully" Mar 17 17:48:38.222725 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7197b5015699a2e07820859e582fe05e9911f218651c121311092b860f9b5da9-rootfs.mount: Deactivated successfully. 
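[Editor's note] The recurring dns.go "Nameserver limits exceeded" entries mean the node's resolv.conf lists more nameservers than the kubelet will pass through; the "applied nameserver line" shows the three survivors. A small sketch of that trimming logic, with a stand-in resolv.conf; the three-server cap mirrors the applied line in the log.

package main

import (
	"fmt"
	"strings"
)

// At most three nameservers are applied, as the log's message shows.
const maxNameservers = 3

func main() {
	// Stand-in resolv.conf; the fourth server is what triggers the warning.
	resolvConf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"

	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		if f := strings.Fields(line); len(f) >= 2 && f[0] == "nameserver" {
			servers = append(servers, f[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded, applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}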
Mar 17 17:48:38.555163 kubelet[2656]: E0317 17:48:38.555125 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:48:38.626743 containerd[1511]: time="2025-03-17T17:48:38.624360143Z" level=info msg="shim disconnected" id=7197b5015699a2e07820859e582fe05e9911f218651c121311092b860f9b5da9 namespace=k8s.io Mar 17 17:48:38.626743 containerd[1511]: time="2025-03-17T17:48:38.624429456Z" level=warning msg="cleaning up after shim disconnected" id=7197b5015699a2e07820859e582fe05e9911f218651c121311092b860f9b5da9 namespace=k8s.io Mar 17 17:48:38.626743 containerd[1511]: time="2025-03-17T17:48:38.624439996Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:48:39.558109 kubelet[2656]: E0317 17:48:39.558069 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:48:39.559894 containerd[1511]: time="2025-03-17T17:48:39.559835810Z" level=info msg="CreateContainer within sandbox \"ac2a9afa7a10d32cbb7ae345ddf19f58c30cd7b8abcfec2b6bd1bb8785f8edfc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 17:48:40.122008 kubelet[2656]: I0317 17:48:40.121842 2656 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T17:48:40Z","lastTransitionTime":"2025-03-17T17:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 17 17:48:40.144723 containerd[1511]: time="2025-03-17T17:48:40.144639773Z" level=info msg="CreateContainer within sandbox \"ac2a9afa7a10d32cbb7ae345ddf19f58c30cd7b8abcfec2b6bd1bb8785f8edfc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2d6f3ed40e79d2bc84095be29f46abf6d43e18dc53eda81e381ad07b745a2393\"" Mar 17 17:48:40.145347 containerd[1511]: time="2025-03-17T17:48:40.145315068Z" level=info msg="StartContainer for \"2d6f3ed40e79d2bc84095be29f46abf6d43e18dc53eda81e381ad07b745a2393\"" Mar 17 17:48:40.182061 systemd[1]: Started cri-containerd-2d6f3ed40e79d2bc84095be29f46abf6d43e18dc53eda81e381ad07b745a2393.scope - libcontainer container 2d6f3ed40e79d2bc84095be29f46abf6d43e18dc53eda81e381ad07b745a2393. Mar 17 17:48:40.220723 systemd[1]: cri-containerd-2d6f3ed40e79d2bc84095be29f46abf6d43e18dc53eda81e381ad07b745a2393.scope: Deactivated successfully. Mar 17 17:48:40.246539 containerd[1511]: time="2025-03-17T17:48:40.246463776Z" level=info msg="StartContainer for \"2d6f3ed40e79d2bc84095be29f46abf6d43e18dc53eda81e381ad07b745a2393\" returns successfully" Mar 17 17:48:40.273199 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d6f3ed40e79d2bc84095be29f46abf6d43e18dc53eda81e381ad07b745a2393-rootfs.mount: Deactivated successfully. 
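[Editor's note] The mount-bpf-fs init container that just completed exists to mount the BPF filesystem for the agent. A sketch of the equivalent mount(2) call follows; it must run as root, and the /sys/fs/bpf mountpoint is an assumption (the usual default), not something the log records.

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// Assumed mountpoint; the log names the container but not the path.
	target := "/sys/fs/bpf"

	// Skip the mount if bpffs is already in place.
	var fs unix.Statfs_t
	if err := unix.Statfs(target, &fs); err == nil && fs.Type == unix.BPF_FS_MAGIC {
		fmt.Println("bpffs already mounted at", target)
		return
	}
	// Equivalent of: mount -t bpf bpffs /sys/fs/bpf (requires root).
	if err := unix.Mount("bpffs", target, "bpf", 0, ""); err != nil {
		fmt.Println("mount failed:", err)
		return
	}
	fmt.Println("mounted bpffs at", target)
}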
Mar 17 17:48:40.373189 containerd[1511]: time="2025-03-17T17:48:40.372349570Z" level=info msg="shim disconnected" id=2d6f3ed40e79d2bc84095be29f46abf6d43e18dc53eda81e381ad07b745a2393 namespace=k8s.io Mar 17 17:48:40.373189 containerd[1511]: time="2025-03-17T17:48:40.372417590Z" level=warning msg="cleaning up after shim disconnected" id=2d6f3ed40e79d2bc84095be29f46abf6d43e18dc53eda81e381ad07b745a2393 namespace=k8s.io Mar 17 17:48:40.373189 containerd[1511]: time="2025-03-17T17:48:40.372429613Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:48:40.562010 kubelet[2656]: E0317 17:48:40.561963 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:48:40.564106 containerd[1511]: time="2025-03-17T17:48:40.564048413Z" level=info msg="CreateContainer within sandbox \"ac2a9afa7a10d32cbb7ae345ddf19f58c30cd7b8abcfec2b6bd1bb8785f8edfc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 17:48:41.175371 containerd[1511]: time="2025-03-17T17:48:41.175273326Z" level=info msg="CreateContainer within sandbox \"ac2a9afa7a10d32cbb7ae345ddf19f58c30cd7b8abcfec2b6bd1bb8785f8edfc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d7dd86daf155889a19649f7bc4a818ce0b2525cda11ae141291793be295c2e67\"" Mar 17 17:48:41.176009 containerd[1511]: time="2025-03-17T17:48:41.175955474Z" level=info msg="StartContainer for \"d7dd86daf155889a19649f7bc4a818ce0b2525cda11ae141291793be295c2e67\"" Mar 17 17:48:41.216966 systemd[1]: Started cri-containerd-d7dd86daf155889a19649f7bc4a818ce0b2525cda11ae141291793be295c2e67.scope - libcontainer container d7dd86daf155889a19649f7bc4a818ce0b2525cda11ae141291793be295c2e67. Mar 17 17:48:41.246198 systemd[1]: cri-containerd-d7dd86daf155889a19649f7bc4a818ce0b2525cda11ae141291793be295c2e67.scope: Deactivated successfully. Mar 17 17:48:41.373422 containerd[1511]: time="2025-03-17T17:48:41.373317705Z" level=info msg="StartContainer for \"d7dd86daf155889a19649f7bc4a818ce0b2525cda11ae141291793be295c2e67\" returns successfully" Mar 17 17:48:41.393876 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7dd86daf155889a19649f7bc4a818ce0b2525cda11ae141291793be295c2e67-rootfs.mount: Deactivated successfully. 
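[Editor's note] With four init containers now logged (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state), here is a small convenience sketch for following the sequence: it scans journal lines like the ones above and prints each container name with a shortened ID. The regular expression is tuned to these exact CreateContainer result messages and is illustrative only.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches the CreateContainer result messages above, e.g.
	// ...for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2d6f...\"
	re := regexp.MustCompile(`ContainerMetadata\{Name:([\w-]+),Attempt:\d+,\} returns container id \\"([0-9a-f]{64})\\"`)

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%-24s %s\n", m[1], m[2][:12])
		}
	}
}

Fed the journal for this boot (for example via journalctl -o cat), it would print the four init containers above in order, followed by cilium-agent.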
Mar 17 17:48:41.563884 containerd[1511]: time="2025-03-17T17:48:41.563573608Z" level=info msg="shim disconnected" id=d7dd86daf155889a19649f7bc4a818ce0b2525cda11ae141291793be295c2e67 namespace=k8s.io Mar 17 17:48:41.563884 containerd[1511]: time="2025-03-17T17:48:41.563659754Z" level=warning msg="cleaning up after shim disconnected" id=d7dd86daf155889a19649f7bc4a818ce0b2525cda11ae141291793be295c2e67 namespace=k8s.io Mar 17 17:48:41.563884 containerd[1511]: time="2025-03-17T17:48:41.563670394Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:48:41.566715 kubelet[2656]: E0317 17:48:41.566615 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:48:42.382687 kubelet[2656]: E0317 17:48:42.382641 2656 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 17:48:42.573389 kubelet[2656]: E0317 17:48:42.573354 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:48:42.575416 containerd[1511]: time="2025-03-17T17:48:42.575238020Z" level=info msg="CreateContainer within sandbox \"ac2a9afa7a10d32cbb7ae345ddf19f58c30cd7b8abcfec2b6bd1bb8785f8edfc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 17:48:43.204434 containerd[1511]: time="2025-03-17T17:48:43.204346586Z" level=info msg="CreateContainer within sandbox \"ac2a9afa7a10d32cbb7ae345ddf19f58c30cd7b8abcfec2b6bd1bb8785f8edfc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"60426fa642d652dddb8eb8db696d68675f9d7916e812a02b80a528e3570c2d43\"" Mar 17 17:48:43.206564 containerd[1511]: time="2025-03-17T17:48:43.205027612Z" level=info msg="StartContainer for \"60426fa642d652dddb8eb8db696d68675f9d7916e812a02b80a528e3570c2d43\"" Mar 17 17:48:43.256836 systemd[1]: Started cri-containerd-60426fa642d652dddb8eb8db696d68675f9d7916e812a02b80a528e3570c2d43.scope - libcontainer container 60426fa642d652dddb8eb8db696d68675f9d7916e812a02b80a528e3570c2d43. 
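[Editor's note] The "Node became not ready" entry at 17:48:40 above prints a NodeReady condition as JSON. For reference, this sketch rebuilds the same condition with the Kubernetes API types and marshals it back to the logged shape; the timestamp and message are copied from the entry.

package main

import (
	"encoding/json"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Timestamp copied from the log entry (second precision, as logged).
	t := metav1.NewTime(time.Date(2025, time.March, 17, 17, 48, 40, 0, time.UTC))
	cond := corev1.NodeCondition{
		Type:               corev1.NodeReady,
		Status:             corev1.ConditionFalse,
		LastHeartbeatTime:  t,
		LastTransitionTime: t,
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized",
	}
	b, _ := json.Marshal(cond)
	fmt.Println(string(b)) // same shape as the condition JSON in the log
}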
Mar 17 17:48:43.409905 containerd[1511]: time="2025-03-17T17:48:43.409840435Z" level=info msg="StartContainer for \"60426fa642d652dddb8eb8db696d68675f9d7916e812a02b80a528e3570c2d43\" returns successfully" Mar 17 17:48:43.578053 kubelet[2656]: E0317 17:48:43.578025 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:48:43.766730 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 17 17:48:43.853563 kubelet[2656]: I0317 17:48:43.853394 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2qwwl" podStartSLOduration=8.853372907 podStartE2EDuration="8.853372907s" podCreationTimestamp="2025-03-17 17:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:48:43.853093201 +0000 UTC m=+136.657759214" watchObservedRunningTime="2025-03-17 17:48:43.853372907 +0000 UTC m=+136.658038920" Mar 17 17:48:44.580310 kubelet[2656]: E0317 17:48:44.580269 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:48:45.582773 kubelet[2656]: E0317 17:48:45.582720 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:48:47.006651 systemd-networkd[1422]: lxc_health: Link UP Mar 17 17:48:47.019232 systemd-networkd[1422]: lxc_health: Gained carrier Mar 17 17:48:48.066135 kubelet[2656]: E0317 17:48:48.066087 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:48:48.466904 systemd-networkd[1422]: lxc_health: Gained IPv6LL Mar 17 17:48:48.588080 kubelet[2656]: E0317 17:48:48.588024 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:48:49.036636 systemd[1]: run-containerd-runc-k8s.io-60426fa642d652dddb8eb8db696d68675f9d7916e812a02b80a528e3570c2d43-runc.lVCbTM.mount: Deactivated successfully. 
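[Editor's note] The pod_startup_latency_tracker entry above reports podStartSLOduration as the gap between the pod's creation timestamp and the watch-observed running time. This sketch re-derives the 8.853372907s figure from the two timestamps in that entry.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Both timestamps are copied from the tracker entry above.
	const layout = "2006-01-02 15:04:05 -0700 MST" // fractional seconds are parsed implicitly
	created, _ := time.Parse(layout, "2025-03-17 17:48:35 +0000 UTC")
	running, _ := time.Parse(layout, "2025-03-17 17:48:43.853372907 +0000 UTC")
	fmt.Println(running.Sub(created)) // 8.853372907s, the logged podStartSLOduration
}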
Mar 17 17:48:49.590937 kubelet[2656]: E0317 17:48:49.590820 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:48:53.279577 kubelet[2656]: E0317 17:48:53.279534 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:48:53.325670 kubelet[2656]: E0317 17:48:53.325555 2656 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:53966->127.0.0.1:39961: write tcp 127.0.0.1:53966->127.0.0.1:39961: write: broken pipe Mar 17 17:48:55.278915 kubelet[2656]: E0317 17:48:55.278844 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:48:57.528478 sshd[4669]: Connection closed by 10.0.0.1 port 47536 Mar 17 17:48:57.529023 sshd-session[4666]: pam_unix(sshd:session): session closed for user core Mar 17 17:48:57.533279 systemd[1]: sshd@34-10.0.0.15:22-10.0.0.1:47536.service: Deactivated successfully. Mar 17 17:48:57.535625 systemd[1]: session-35.scope: Deactivated successfully. Mar 17 17:48:57.536483 systemd-logind[1491]: Session 35 logged out. Waiting for processes to exit. Mar 17 17:48:57.537434 systemd-logind[1491]: Removed session 35.
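[Editor's note] The teardown above closes the last of the per-connection sshd units, which systemd names after a connection counter plus the listener and peer endpoints. A small sketch that splits such a unit name back into its parts; the name is copied from the final entries, and the pattern assumes IPv4 endpoints as seen throughout this log.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Unit name copied from the final entries above.
	unit := "sshd@34-10.0.0.15:22-10.0.0.1:47536.service"

	re := regexp.MustCompile(`^sshd@(\d+)-([\d.]+:\d+)-([\d.]+:\d+)\.service$`)
	if m := re.FindStringSubmatch(unit); m != nil {
		fmt.Printf("connection #%s: listener %s, peer %s\n", m[1], m[2], m[3])
	}
}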