Feb 13 19:37:01.874942 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:44:05 -00 2025
Feb 13 19:37:01.874963 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 19:37:01.874974 kernel: BIOS-provided physical RAM map:
Feb 13 19:37:01.874980 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 19:37:01.874986 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 19:37:01.874993 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 19:37:01.875000 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Feb 13 19:37:01.875006 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Feb 13 19:37:01.875012 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Feb 13 19:37:01.875020 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Feb 13 19:37:01.875027 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 19:37:01.875033 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 19:37:01.875039 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 19:37:01.875045 kernel: NX (Execute Disable) protection: active
Feb 13 19:37:01.875053 kernel: APIC: Static calls initialized
Feb 13 19:37:01.875063 kernel: SMBIOS 2.8 present.
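A quick way to read the e820 map above is to sum the "usable" ranges; everything else is firmware-reserved. A minimal sketch (the two ranges are transcribed from the log; end addresses are inclusive):

    # Sum the two "usable" BIOS-e820 ranges reported by the firmware.
    usable = [
        (0x0000000000000000, 0x000000000009fbff),  # 639 KiB of low memory
        (0x0000000000100000, 0x000000009cfdbfff),  # main RAM above 1 MiB
    ]
    total = sum(end - start + 1 for start, end in usable)
    print(f"{total} bytes = {total / 2**20:.1f} MiB")
    # -> 2633481216 bytes (~2511.5 MiB), in line with the
    #    "Memory: 2434592K/2571752K available" line later in this log
    #    (the small difference is pages the kernel reserves early).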
Feb 13 19:37:01.875069 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Feb 13 19:37:01.875076 kernel: Hypervisor detected: KVM
Feb 13 19:37:01.875083 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 19:37:01.875089 kernel: kvm-clock: using sched offset of 2282403879 cycles
Feb 13 19:37:01.875096 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 19:37:01.875103 kernel: tsc: Detected 2794.750 MHz processor
Feb 13 19:37:01.875111 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:37:01.875118 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:37:01.875125 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Feb 13 19:37:01.875134 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 19:37:01.875141 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:37:01.875147 kernel: Using GB pages for direct mapping
Feb 13 19:37:01.875154 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:37:01.875161 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Feb 13 19:37:01.875179 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:37:01.875186 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:37:01.875193 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:37:01.875202 kernel: ACPI: FACS 0x000000009CFE0000 000040
Feb 13 19:37:01.875209 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:37:01.875216 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:37:01.875323 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:37:01.875331 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:37:01.875339 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Feb 13 19:37:01.875346 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Feb 13 19:37:01.875356 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Feb 13 19:37:01.875365 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Feb 13 19:37:01.875372 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Feb 13 19:37:01.875380 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Feb 13 19:37:01.875387 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Feb 13 19:37:01.875394 kernel: No NUMA configuration found
Feb 13 19:37:01.875401 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Feb 13 19:37:01.875408 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Feb 13 19:37:01.875418 kernel: Zone ranges:
Feb 13 19:37:01.875426 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:37:01.875434 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Feb 13 19:37:01.875442 kernel: Normal empty
Feb 13 19:37:01.875451 kernel: Movable zone start for each node
Feb 13 19:37:01.875458 kernel: Early memory node ranges
Feb 13 19:37:01.875465 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 19:37:01.875472 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Feb 13 19:37:01.875480 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Feb 13 19:37:01.875489 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:37:01.875496 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 19:37:01.875503 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Feb 13 19:37:01.875511 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 19:37:01.875518 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 19:37:01.875525 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 19:37:01.875532 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 19:37:01.875539 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 19:37:01.875546 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 19:37:01.875555 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 19:37:01.875563 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 19:37:01.875570 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:37:01.875577 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 19:37:01.875584 kernel: TSC deadline timer available
Feb 13 19:37:01.875591 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 13 19:37:01.875598 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 19:37:01.875605 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 13 19:37:01.875612 kernel: kvm-guest: setup PV sched yield
Feb 13 19:37:01.875622 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Feb 13 19:37:01.875629 kernel: Booting paravirtualized kernel on KVM
Feb 13 19:37:01.875636 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:37:01.875643 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 13 19:37:01.875651 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Feb 13 19:37:01.875658 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Feb 13 19:37:01.875665 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 13 19:37:01.875672 kernel: kvm-guest: PV spinlocks enabled
Feb 13 19:37:01.875679 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 19:37:01.875687 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 19:37:01.875697 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:37:01.875704 kernel: random: crng init done
Feb 13 19:37:01.875711 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:37:01.875718 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:37:01.875726 kernel: Fallback order for Node 0: 0
Feb 13 19:37:01.875733 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Feb 13 19:37:01.875740 kernel: Policy zone: DMA32
Feb 13 19:37:01.875747 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:37:01.875757 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42976K init, 2216K bss, 136900K reserved, 0K cma-reserved)
Feb 13 19:37:01.875764 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:37:01.875771 kernel: ftrace: allocating 37923 entries in 149 pages
Feb 13 19:37:01.875778 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:37:01.875785 kernel: Dynamic Preempt: voluntary
Feb 13 19:37:01.875792 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:37:01.875800 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:37:01.875808 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:37:01.875815 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:37:01.875825 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:37:01.875832 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:37:01.875840 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:37:01.875847 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:37:01.875854 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 13 19:37:01.875861 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:37:01.875868 kernel: Console: colour VGA+ 80x25
Feb 13 19:37:01.875875 kernel: printk: console [ttyS0] enabled
Feb 13 19:37:01.875882 kernel: ACPI: Core revision 20230628
Feb 13 19:37:01.875891 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 19:37:01.875899 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:37:01.875906 kernel: x2apic enabled
Feb 13 19:37:01.875913 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 19:37:01.875920 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Feb 13 19:37:01.875927 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Feb 13 19:37:01.875935 kernel: kvm-guest: setup PV IPIs
Feb 13 19:37:01.875951 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 19:37:01.875959 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 13 19:37:01.875966 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Feb 13 19:37:01.875973 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 19:37:01.875981 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 13 19:37:01.875990 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 13 19:37:01.875998 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:37:01.876005 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 19:37:01.876013 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:37:01.876023 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 19:37:01.876030 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 13 19:37:01.876038 kernel: RETBleed: Mitigation: untrained return thunk
Feb 13 19:37:01.876045 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 19:37:01.876053 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 19:37:01.876060 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Feb 13 19:37:01.876068 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Feb 13 19:37:01.876076 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Feb 13 19:37:01.876083 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:37:01.876093 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:37:01.876101 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:37:01.876108 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:37:01.876116 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 13 19:37:01.876123 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:37:01.876130 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:37:01.876138 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:37:01.876145 kernel: landlock: Up and running.
Feb 13 19:37:01.876153 kernel: SELinux: Initializing.
Feb 13 19:37:01.876162 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:37:01.876178 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:37:01.876186 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 13 19:37:01.876193 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:37:01.876201 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:37:01.876209 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:37:01.876217 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 13 19:37:01.876236 kernel: ... version: 0
Feb 13 19:37:01.876247 kernel: ... bit width: 48
Feb 13 19:37:01.876254 kernel: ... generic registers: 6
Feb 13 19:37:01.876262 kernel: ... value mask: 0000ffffffffffff
Feb 13 19:37:01.876269 kernel: ... max period: 00007fffffffffff
Feb 13 19:37:01.876276 kernel: ... fixed-purpose events: 0
Feb 13 19:37:01.876284 kernel: ... event mask: 000000000000003f
Feb 13 19:37:01.876291 kernel: signal: max sigframe size: 1776
Feb 13 19:37:01.876299 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:37:01.876306 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:37:01.876314 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:37:01.876323 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 19:37:01.876331 kernel: .... node #0, CPUs: #1 #2 #3
Feb 13 19:37:01.876338 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:37:01.876346 kernel: smpboot: Max logical packages: 1
Feb 13 19:37:01.876353 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Feb 13 19:37:01.876360 kernel: devtmpfs: initialized
Feb 13 19:37:01.876368 kernel: x86/mm: Memory block size: 128MB
Feb 13 19:37:01.876375 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:37:01.876383 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:37:01.876393 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:37:01.876400 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:37:01.876408 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:37:01.876415 kernel: audit: type=2000 audit(1739475421.410:1): state=initialized audit_enabled=0 res=1
Feb 13 19:37:01.876422 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:37:01.876430 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 19:37:01.876438 kernel: cpuidle: using governor menu
Feb 13 19:37:01.876445 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:37:01.876454 kernel: dca service started, version 1.12.1
Feb 13 19:37:01.876465 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Feb 13 19:37:01.876474 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Feb 13 19:37:01.876482 kernel: PCI: Using configuration type 1 for base access
Feb 13 19:37:01.876490 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
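The "Calibrating delay loop (skipped)" line above is worth decoding: on KVM the delay loop is not timed at all, it is derived from the TSC frequency. A quick check of the arithmetic, assuming HZ=1000 (the timer frequency is an assumption; it is not printed in the log):

    lpj = 2_794_750                    # loops per jiffy, from "lpj=2794750"
    hz = 1000                          # assumed CONFIG_HZ
    bogomips = lpj / (500_000 / hz)    # kernel formula: lpj / (500000 / HZ)
    print(f"{bogomips:.2f}")           # -> 5589.50, matching the log
    print(f"{4 * bogomips:.2f}")       # -> 22358.00, the 4-CPU total that
                                       #    smpboot reports a few lines later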
Feb 13 19:37:01.876497 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:37:01.876505 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:37:01.876512 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:37:01.876520 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:37:01.876527 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:37:01.876537 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:37:01.876545 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:37:01.876555 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:37:01.876564 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:37:01.876573 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 19:37:01.876582 kernel: ACPI: Interpreter enabled
Feb 13 19:37:01.876592 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 19:37:01.876602 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 19:37:01.876612 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 19:37:01.876625 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 19:37:01.876633 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 19:37:01.876640 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:37:01.876824 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:37:01.876954 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Feb 13 19:37:01.877075 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Feb 13 19:37:01.877085 kernel: PCI host bridge to bus 0000:00
Feb 13 19:37:01.877236 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 19:37:01.877355 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 19:37:01.877466 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 19:37:01.877576 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Feb 13 19:37:01.877687 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 13 19:37:01.877797 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Feb 13 19:37:01.877907 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:37:01.878058 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 19:37:01.878201 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Feb 13 19:37:01.878340 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Feb 13 19:37:01.878467 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Feb 13 19:37:01.878587 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Feb 13 19:37:01.878706 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 19:37:01.878872 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:37:01.878997 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Feb 13 19:37:01.879119 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Feb 13 19:37:01.879269 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Feb 13 19:37:01.879399 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Feb 13 19:37:01.879521 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 19:37:01.879642 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Feb 13 19:37:01.879768 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Feb 13 19:37:01.879897 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 19:37:01.880018 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Feb 13 19:37:01.880137 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Feb 13 19:37:01.880307 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Feb 13 19:37:01.880428 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Feb 13 19:37:01.880560 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 19:37:01.880687 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 19:37:01.880824 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 19:37:01.880945 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Feb 13 19:37:01.881152 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Feb 13 19:37:01.881321 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 19:37:01.881444 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Feb 13 19:37:01.881455 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 19:37:01.881468 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 19:37:01.881475 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 19:37:01.881483 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 19:37:01.881490 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 19:37:01.881498 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 19:37:01.881505 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 19:37:01.881513 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 19:37:01.881521 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 19:37:01.881528 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 19:37:01.881538 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 19:37:01.881545 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 19:37:01.881553 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 19:37:01.881560 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 19:37:01.881568 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 19:37:01.881575 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 19:37:01.881583 kernel: iommu: Default domain type: Translated
Feb 13 19:37:01.881590 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 19:37:01.881598 kernel: PCI: Using ACPI for IRQ routing
Feb 13 19:37:01.881607 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 19:37:01.881615 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 19:37:01.881622 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Feb 13 19:37:01.881745 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 19:37:01.881864 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 19:37:01.881995 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 19:37:01.882010 kernel: vgaarb: loaded
Feb 13 19:37:01.882020 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 19:37:01.882031 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 19:37:01.882039 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 19:37:01.882046 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:37:01.882054 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:37:01.882062 kernel: pnp: PnP ACPI init
Feb 13 19:37:01.882205 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Feb 13 19:37:01.882217 kernel: pnp: PnP ACPI: found 6 devices
Feb 13 19:37:01.882237 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 19:37:01.882248 kernel: NET: Registered PF_INET protocol family
Feb 13 19:37:01.882256 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:37:01.882263 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:37:01.882275 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:37:01.882286 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:37:01.882300 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:37:01.882318 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:37:01.882331 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:37:01.882348 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:37:01.882369 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:37:01.882386 kernel: NET: Registered PF_XDP protocol family
Feb 13 19:37:01.882537 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 19:37:01.882696 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 19:37:01.882837 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 19:37:01.882952 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Feb 13 19:37:01.883060 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Feb 13 19:37:01.883177 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Feb 13 19:37:01.883192 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:37:01.883200 kernel: Initialise system trusted keyrings
Feb 13 19:37:01.883207 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:37:01.883215 kernel: Key type asymmetric registered
Feb 13 19:37:01.883235 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:37:01.883243 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 19:37:01.883250 kernel: io scheduler mq-deadline registered
Feb 13 19:37:01.883258 kernel: io scheduler kyber registered
Feb 13 19:37:01.883265 kernel: io scheduler bfq registered
Feb 13 19:37:01.883276 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 19:37:01.883284 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 19:37:01.883291 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 19:37:01.883299 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 19:37:01.883306 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:37:01.883314 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 19:37:01.883322 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 19:37:01.883329 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 19:37:01.883337 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 19:37:01.883472 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 13 19:37:01.883483 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 19:37:01.883595 kernel: rtc_cmos 00:04: registered as rtc0
Feb 13 19:37:01.883709 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T19:37:01 UTC (1739475421)
Feb 13 19:37:01.883822 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 13 19:37:01.883841 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 13 19:37:01.883857 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:37:01.883873 kernel: Segment Routing with IPv6
Feb 13 19:37:01.883899 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:37:01.883907 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:37:01.883915 kernel: Key type dns_resolver registered
Feb 13 19:37:01.883922 kernel: IPI shorthand broadcast: enabled
Feb 13 19:37:01.883930 kernel: sched_clock: Marking stable (586002714, 104912571)->(710355326, -19440041)
Feb 13 19:37:01.883937 kernel: registered taskstats version 1
Feb 13 19:37:01.883945 kernel: Loading compiled-in X.509 certificates
Feb 13 19:37:01.883953 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 0cc219a306b9e46e583adebba1820decbdc4307b'
Feb 13 19:37:01.883960 kernel: Key type .fscrypt registered
Feb 13 19:37:01.883970 kernel: Key type fscrypt-provisioning registered
Feb 13 19:37:01.883982 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:37:01.883989 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:37:01.883997 kernel: ima: No architecture policies found
Feb 13 19:37:01.884004 kernel: clk: Disabling unused clocks
Feb 13 19:37:01.884011 kernel: Freeing unused kernel image (initmem) memory: 42976K
Feb 13 19:37:01.884019 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 19:37:01.884026 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Feb 13 19:37:01.884034 kernel: Run /init as init process
Feb 13 19:37:01.884044 kernel: with arguments:
Feb 13 19:37:01.884051 kernel: /init
Feb 13 19:37:01.884058 kernel: with environment:
Feb 13 19:37:01.884066 kernel: HOME=/
Feb 13 19:37:01.884073 kernel: TERM=linux
Feb 13 19:37:01.884080 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:37:01.884090 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:37:01.884099 systemd[1]: Detected virtualization kvm.
Feb 13 19:37:01.884110 systemd[1]: Detected architecture x86-64.
Feb 13 19:37:01.884118 systemd[1]: Running in initrd.
Feb 13 19:37:01.884126 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:37:01.884134 systemd[1]: Hostname set to <localhost>.
Feb 13 19:37:01.884142 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:37:01.884150 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:37:01.884158 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:37:01.884173 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:37:01.884184 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:37:01.884203 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
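The rtc_cmos driver logs both a calendar time and the raw epoch it was converted from ("setting system clock to 2025-02-13T19:37:01 UTC (1739475421)"). A one-liner confirms the two agree:

    from datetime import datetime, timezone

    print(datetime.fromtimestamp(1739475421, tz=timezone.utc).isoformat())
    # -> 2025-02-13T19:37:01+00:00; the same epoch also appears in the
    #    kernel's audit line, audit(1739475421.410:1), earlier in the boot.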
Feb 13 19:37:01.884214 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:37:01.884283 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:37:01.884293 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:37:01.884305 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:37:01.884313 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:37:01.884322 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:37:01.884330 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:37:01.884338 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:37:01.884346 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:37:01.884354 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:37:01.884363 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:37:01.884373 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:37:01.884382 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:37:01.884390 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:37:01.884399 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:37:01.884407 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:37:01.884417 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:37:01.884426 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:37:01.884434 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:37:01.884446 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:37:01.884456 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:37:01.884465 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:37:01.884475 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:37:01.884483 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:37:01.884491 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:37:01.884499 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:37:01.884508 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:37:01.884516 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:37:01.884527 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:37:01.884554 systemd-journald[191]: Collecting audit messages is disabled.
Feb 13 19:37:01.884576 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:37:01.884585 systemd-journald[191]: Journal started
Feb 13 19:37:01.884605 systemd-journald[191]: Runtime Journal (/run/log/journal/06aa881d5bf14e05b1efcff2bb376694) is 6.0M, max 48.4M, 42.3M free.
Feb 13 19:37:01.882768 systemd-modules-load[194]: Inserted module 'overlay'
Feb 13 19:37:01.914487 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
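Unit names like dev-disk-by\x2dlabel-ROOT.device in the "Expecting device" lines come from systemd's path escaping: '/' separators become '-', and bytes outside [A-Za-z0-9:_.] (notably '-' itself) become \xNN. A minimal sketch of the rule for ASCII paths; the real systemd-escape(1) also handles leading dots and non-ASCII bytes:

    def systemd_escape_path(path: str) -> str:
        allowed = set("abcdefghijklmnopqrstuvwxyz"
                      "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789:_.")
        parts = path.strip("/").split("/")
        return "-".join("".join(c if c in allowed else f"\\x{ord(c):02x}"
                                for c in part)
                        for part in parts)

    print(systemd_escape_path("/dev/disk/by-label/ROOT") + ".device")
    # -> dev-disk-by\x2dlabel-ROOT.device, matching the unit name in the log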
Feb 13 19:37:01.914507 kernel: Bridge firewalling registered
Feb 13 19:37:01.908943 systemd-modules-load[194]: Inserted module 'br_netfilter'
Feb 13 19:37:01.917271 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:37:01.917635 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:37:01.919979 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:37:01.933397 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:37:01.936555 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:37:01.937350 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:37:01.941348 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:37:01.950992 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:37:01.953690 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:37:01.956259 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:37:01.964386 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:37:01.964675 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:37:01.969585 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:37:01.977131 dracut-cmdline[229]: dracut-dracut-053
Feb 13 19:37:01.980325 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 19:37:02.009188 systemd-resolved[233]: Positive Trust Anchors:
Feb 13 19:37:02.009206 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:37:02.009253 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:37:02.011681 systemd-resolved[233]: Defaulting to hostname 'linux'.
Feb 13 19:37:02.012718 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:37:02.018906 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:37:02.064277 kernel: SCSI subsystem initialized
Feb 13 19:37:02.073247 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:37:02.084265 kernel: iscsi: registered transport (tcp)
Feb 13 19:37:02.105259 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:37:02.105286 kernel: QLogic iSCSI HBA Driver
Feb 13 19:37:02.158676 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
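The dracut-cmdline line above shows the effective kernel command line with dracut's own parameters prepended (rd.driver.pre=btrfs, plus a second copy of the rootflags/mount.usrflags pair), so some keys appear twice. A small last-one-wins parser in the spirit of how most consumers treat duplicates; a sketch, not dracut's actual parsing:

    cmdline = ("rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro "
               "BOOT_IMAGE=/flatcar/vmlinuz-a rootflags=rw mount.usrflags=ro "
               "root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected")

    args = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        args[key] = value if sep else None   # bare flags carry no value

    print(args["root"])        # LABEL=ROOT
    print(args["rootflags"])   # rw -- the duplicate occurrences collapse harmlessly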
Feb 13 19:37:02.173367 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:37:02.200257 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:37:02.200318 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:37:02.202108 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:37:02.244251 kernel: raid6: avx2x4 gen() 30645 MB/s
Feb 13 19:37:02.261252 kernel: raid6: avx2x2 gen() 31037 MB/s
Feb 13 19:37:02.278323 kernel: raid6: avx2x1 gen() 26069 MB/s
Feb 13 19:37:02.278354 kernel: raid6: using algorithm avx2x2 gen() 31037 MB/s
Feb 13 19:37:02.296344 kernel: raid6: .... xor() 19879 MB/s, rmw enabled
Feb 13 19:37:02.296366 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 19:37:02.317273 kernel: xor: automatically using best checksumming function avx
Feb 13 19:37:02.489288 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:37:02.502218 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:37:02.514367 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:37:02.526301 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Feb 13 19:37:02.530744 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:37:02.552463 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:37:02.565448 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation
Feb 13 19:37:02.598396 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:37:02.610404 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:37:02.672668 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:37:02.684475 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:37:02.699141 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:37:02.700772 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:37:02.700855 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:37:02.707134 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:37:02.712256 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Feb 13 19:37:02.734628 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 19:37:02.734780 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:37:02.734792 kernel: GPT:9289727 != 19775487
Feb 13 19:37:02.734802 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:37:02.734812 kernel: GPT:9289727 != 19775487
Feb 13 19:37:02.734822 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:37:02.734838 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:37:02.721454 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:37:02.736250 kernel: libata version 3.00 loaded.
Feb 13 19:37:02.736993 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:37:02.739317 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 19:37:02.750258 kernel: ahci 0000:00:1f.2: version 3.0
Feb 13 19:37:02.774793 kernel: AVX2 version of gcm_enc/dec engaged.
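The repeated "GPT:9289727 != 19775487" warning is the classic signature of a disk image that was enlarged after creation: the backup GPT header still sits at the end of the original image rather than the end of the (now larger) disk. Checking the numbers from the virtio_blk line:

    total_blocks = 19_775_488        # [vda] 19775488 512-byte logical blocks
    expected_alt = total_blocks - 1  # backup header belongs on the last LBA
    recorded_alt = 9_289_727         # where the primary header points today

    print(expected_alt)                       # 19775487, as logged
    print((recorded_alt + 1) * 512 / 2**30)   # ~4.43 GiB: the original image size
    # disk-uuid.service rewrites the headers a moment later in this log
    # ("Primary Header is updated" ... "Secondary Header is updated"),
    # which is why the warning is harmless on a Flatcar first boot.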
Feb 13 19:37:02.774816 kernel: AES CTR mode by8 optimization enabled
Feb 13 19:37:02.774830 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Feb 13 19:37:02.774844 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Feb 13 19:37:02.775015 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Feb 13 19:37:02.775190 kernel: scsi host0: ahci
Feb 13 19:37:02.775373 kernel: scsi host1: ahci
Feb 13 19:37:02.775523 kernel: scsi host2: ahci
Feb 13 19:37:02.775670 kernel: scsi host3: ahci
Feb 13 19:37:02.775813 kernel: scsi host4: ahci
Feb 13 19:37:02.775958 kernel: scsi host5: ahci
Feb 13 19:37:02.776106 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Feb 13 19:37:02.776118 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Feb 13 19:37:02.776128 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Feb 13 19:37:02.776150 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Feb 13 19:37:02.776160 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Feb 13 19:37:02.776171 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Feb 13 19:37:02.755518 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:37:02.755639 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:37:02.782531 kernel: BTRFS: device fsid e9c87d9f-3864-4b45-9be4-80a5397f1fc6 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (464)
Feb 13 19:37:02.782592 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (465)
Feb 13 19:37:02.758626 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:37:02.760303 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:37:02.760454 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:37:02.765301 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:37:02.788328 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:37:02.805818 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 19:37:02.840522 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:37:02.849954 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 19:37:02.850058 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 19:37:02.859674 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:37:02.864032 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 19:37:02.880357 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:37:02.881209 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:37:02.895995 disk-uuid[559]: Primary Header is updated.
Feb 13 19:37:02.895995 disk-uuid[559]: Secondary Entries is updated.
Feb 13 19:37:02.895995 disk-uuid[559]: Secondary Header is updated.
Feb 13 19:37:02.900841 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:37:02.905004 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:37:02.909243 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:37:03.087795 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Feb 13 19:37:03.087879 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Feb 13 19:37:03.087894 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Feb 13 19:37:03.089256 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Feb 13 19:37:03.090254 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Feb 13 19:37:03.091258 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Feb 13 19:37:03.091274 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 13 19:37:03.092282 kernel: ata3.00: applying bridge limits
Feb 13 19:37:03.093264 kernel: ata3.00: configured for UDMA/100
Feb 13 19:37:03.093295 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 13 19:37:03.139268 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 13 19:37:03.153100 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 19:37:03.153119 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Feb 13 19:37:03.908277 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:37:03.908894 disk-uuid[565]: The operation has completed successfully.
Feb 13 19:37:03.932636 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:37:03.932766 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:37:03.957378 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:37:03.961411 sh[597]: Success
Feb 13 19:37:03.975249 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Feb 13 19:37:04.007553 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:37:04.027847 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:37:04.031206 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:37:04.043319 kernel: BTRFS info (device dm-0): first mount of filesystem e9c87d9f-3864-4b45-9be4-80a5397f1fc6
Feb 13 19:37:04.043347 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:37:04.043358 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:37:04.044330 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:37:04.045654 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:37:04.049593 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:37:04.051869 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:37:04.066331 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:37:04.068832 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:37:04.077323 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:37:04.077354 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:37:04.077365 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:37:04.080250 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:37:04.089007 systemd[1]: mnt-oem.mount: Deactivated successfully.
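verity-setup.service above activates /dev/mapper/usr, a dm-verity device whose root hash arrived on the kernel command line (verity.usrhash=ed9b5d...). Conceptually that root hash caps a Merkle tree over the /usr partition: hash every 4 KiB data block, then hash each group of digests level by level until one digest remains. The sketch below shows only the structure; real dm-verity salts each hash and lays the tree out in fixed hash blocks, so it will not reproduce the logged value:

    import hashlib

    BLOCK = 4096
    FANOUT = BLOCK // 32   # 128 sha256 digests fill one 4 KiB hash block

    def toy_verity_root(data: bytes) -> str:
        # leaf level: one digest per data block
        level = [hashlib.sha256(data[i:i + BLOCK]).digest()
                 for i in range(0, len(data), BLOCK)]
        # interior levels: hash each group of FANOUT child digests
        while len(level) > 1:
            level = [hashlib.sha256(b"".join(level[i:i + FANOUT])).digest()
                     for i in range(0, len(level), FANOUT)]
        return level[0].hex()

    print(toy_verity_root(b"\x00" * (300 * BLOCK)))   # deterministic toy input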
Feb 13 19:37:04.090597 kernel: BTRFS info (device vda6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:37:04.100542 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:37:04.112408 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:37:04.163148 ignition[689]: Ignition 2.20.0
Feb 13 19:37:04.163160 ignition[689]: Stage: fetch-offline
Feb 13 19:37:04.163199 ignition[689]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:37:04.163209 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:37:04.163316 ignition[689]: parsed url from cmdline: ""
Feb 13 19:37:04.163320 ignition[689]: no config URL provided
Feb 13 19:37:04.163326 ignition[689]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:37:04.163335 ignition[689]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:37:04.163366 ignition[689]: op(1): [started] loading QEMU firmware config module
Feb 13 19:37:04.163372 ignition[689]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 19:37:04.177218 ignition[689]: op(1): [finished] loading QEMU firmware config module
Feb 13 19:37:04.189007 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:37:04.199386 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:37:04.220778 ignition[689]: parsing config with SHA512: c296a82590175540ac77822e91e045ba8e71740863a9bc39fa971a736df64d2c10ad2ec0d67cd46f1879eab86a0ded99971d9db3361ebe1e12e79f66e98bf90d
Feb 13 19:37:04.221395 systemd-networkd[785]: lo: Link UP
Feb 13 19:37:04.221405 systemd-networkd[785]: lo: Gained carrier
Feb 13 19:37:04.224324 systemd-networkd[785]: Enumeration completed
Feb 13 19:37:04.225118 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:37:04.225443 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:37:04.227866 ignition[689]: fetch-offline: fetch-offline passed
Feb 13 19:37:04.225447 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:37:04.227943 ignition[689]: Ignition finished successfully
Feb 13 19:37:04.225693 unknown[689]: fetched base config from "system"
Feb 13 19:37:04.225705 unknown[689]: fetched user config from "qemu"
Feb 13 19:37:04.226684 systemd-networkd[785]: eth0: Link UP
Feb 13 19:37:04.226688 systemd-networkd[785]: eth0: Gained carrier
Feb 13 19:37:04.226694 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:37:04.228294 systemd[1]: Reached target network.target - Network.
Feb 13 19:37:04.230488 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:37:04.234276 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 19:37:04.239301 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.63/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:37:04.239352 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
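Ignition found no config on disk or on the command line, so op(1) loads qemu_fw_cfg and pulls the user config QEMU injected via -fw_cfg. The same blob is visible through sysfs on a running guest; hashing it should reproduce the SHA512 Ignition logs ("parsing config with SHA512: c296a8..."). A sketch assuming the stock qemu_fw_cfg sysfs layout and the opt/com.coreos/config entry name Ignition uses on QEMU:

    import hashlib
    import pathlib

    blob = pathlib.Path("/sys/firmware/qemu_fw_cfg/by_name"
                        "/opt/com.coreos/config/raw").read_bytes()
    print(hashlib.sha512(blob).hexdigest())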
Feb 13 19:37:04.253955 ignition[788]: Ignition 2.20.0
Feb 13 19:37:04.253966 ignition[788]: Stage: kargs
Feb 13 19:37:04.254176 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:37:04.254188 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:37:04.254992 ignition[788]: kargs: kargs passed
Feb 13 19:37:04.258319 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:37:04.255039 ignition[788]: Ignition finished successfully
Feb 13 19:37:04.269354 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:37:04.280812 ignition[798]: Ignition 2.20.0
Feb 13 19:37:04.280823 ignition[798]: Stage: disks
Feb 13 19:37:04.280986 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:37:04.281001 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:37:04.283677 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:37:04.281860 ignition[798]: disks: disks passed
Feb 13 19:37:04.286346 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:37:04.281904 ignition[798]: Ignition finished successfully
Feb 13 19:37:04.287735 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:37:04.289681 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:37:04.291813 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:37:04.293670 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:37:04.306698 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:37:04.318614 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:37:04.326016 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:37:04.338402 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:37:04.424263 kernel: EXT4-fs (vda9): mounted filesystem c5993b0e-9201-4b44-aa01-79dc9d6c9fc9 r/w with ordered data mode. Quota mode: none.
Feb 13 19:37:04.424983 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:37:04.425835 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:37:04.431385 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:37:04.434688 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:37:04.436429 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:37:04.436484 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:37:04.448697 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (816)
Feb 13 19:37:04.448729 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:37:04.448744 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:37:04.448757 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:37:04.436514 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:37:04.452364 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:37:04.443850 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:37:04.449750 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
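The fsck summary "ROOT: clean, 14/553520 files, 52654/553472 blocks" reports used/total inodes and blocks. Turning it into percentages, assuming ext4's usual 4 KiB block size (an assumption; the log does not print it):

    files_used, files_total = 14, 553_520
    blocks_used, blocks_total = 52_654, 553_472

    print(f"inodes {100 * files_used / files_total:.3f}% used")
    print(f"blocks {100 * blocks_used / blocks_total:.1f}% used, "
          f"fs size ~{blocks_total * 4096 / 2**30:.1f} GiB")
    # -> an essentially empty ~2.1 GiB ROOT filesystem, as expected
    #    right before Flatcar's first-boot partition expansion.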
Feb 13 19:37:04.454405 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:37:04.492126 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:37:04.498329 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:37:04.502358 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:37:04.506469 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:37:04.595490 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:37:04.613347 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:37:04.615067 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:37:04.621243 kernel: BTRFS info (device vda6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:37:04.639825 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:37:04.640886 ignition[929]: INFO : Ignition 2.20.0
Feb 13 19:37:04.640886 ignition[929]: INFO : Stage: mount
Feb 13 19:37:04.640886 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:37:04.640886 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:37:04.645428 ignition[929]: INFO : mount: mount passed
Feb 13 19:37:04.645428 ignition[929]: INFO : Ignition finished successfully
Feb 13 19:37:04.645214 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:37:04.656384 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:37:05.043117 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:37:05.056380 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:37:05.064257 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (942)
Feb 13 19:37:05.066341 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:37:05.066367 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:37:05.066382 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:37:05.070244 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:37:05.071407 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
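The ignition-files stage logged below (ignition[959]) replays operations from the config fetched earlier: create the "core" user, download helm and cilium, write unit files, and flip presets. A cut-down sketch of the kind of Ignition v3 config that would produce those ops, reconstructed from the log rather than taken from the actual fetched document (the SSH key and unit contents are hypothetical placeholders):

    import json

    config = {
        "ignition": {"version": "3.3.0"},
        "passwd": {"users": [
            {"name": "core",
             "sshAuthorizedKeys": ["ssh-ed25519 ..."]},   # placeholder key
        ]},
        "storage": {"files": [{
            "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",  # op(3) below
            "contents": {"source":
                "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
        }]},
        "systemd": {"units": [
            {"name": "prepare-helm.service", "enabled": True,  # op(c), op(12)
             "contents": "[Unit]\n..."},                       # elided here
            {"name": "coreos-metadata.service", "enabled": False},  # op(10)
        ]},
    }
    print(json.dumps(config, indent=2))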
Feb 13 19:37:05.088656 ignition[959]: INFO : Ignition 2.20.0 Feb 13 19:37:05.088656 ignition[959]: INFO : Stage: files Feb 13 19:37:05.090375 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:37:05.090375 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:37:05.090375 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:37:05.093775 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:37:05.093775 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:37:05.096529 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:37:05.097930 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:37:05.097930 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:37:05.097199 unknown[959]: wrote ssh authorized keys file for user: core Feb 13 19:37:05.101796 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 19:37:05.101796 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 19:37:05.254975 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 19:37:05.472435 systemd-networkd[785]: eth0: Gained IPv6LL Feb 13 19:37:05.694161 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 19:37:05.694161 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 19:37:05.698198 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 13 19:37:06.236391 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 19:37:06.339644 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 19:37:06.341649 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:37:06.341649 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:37:06.341649 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:37:06.341649 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:37:06.341649 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:37:06.341649 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:37:06.341649 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:37:06.341649 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:37:06.341649 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:37:06.341649 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:37:06.341649 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:37:06.341649 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:37:06.341649 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:37:06.341649 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Feb 13 19:37:06.626809 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 19:37:07.022929 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:37:07.022929 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Feb 13 19:37:07.026667 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:37:07.026667 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:37:07.026667 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Feb 13 19:37:07.026667 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Feb 13 19:37:07.026667 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:37:07.026667 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:37:07.026667 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Feb 13 19:37:07.026667 ignition[959]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 19:37:07.046538 ignition[959]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:37:07.052337 ignition[959]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:37:07.054406 ignition[959]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 19:37:07.054406 ignition[959]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Feb 13 19:37:07.057670 ignition[959]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 19:37:07.059372 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:37:07.061371 
ignition[959]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:37:07.063219 ignition[959]: INFO : files: files passed Feb 13 19:37:07.063997 ignition[959]: INFO : Ignition finished successfully Feb 13 19:37:07.067519 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:37:07.074427 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:37:07.076365 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:37:07.078629 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:37:07.078763 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:37:07.087218 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 19:37:07.090338 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:37:07.090338 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:37:07.093587 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:37:07.092780 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:37:07.095015 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:37:07.108408 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:37:07.137529 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:37:07.137672 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:37:07.139029 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:37:07.142207 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:37:07.142493 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:37:07.155392 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:37:07.171658 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:37:07.188396 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:37:07.199563 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:37:07.199708 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:37:07.201882 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:37:07.202207 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:37:07.202323 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:37:07.203040 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:37:07.203548 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:37:07.203871 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:37:07.204208 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:37:07.204707 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:37:07.205051 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
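Each [started]/[finished] 'writing file' pair in the files stage above corresponds to a storage.files entry in the merged Ignition config; entries with a remote source are fetched with retries ('attempt #1'). A sketch of the kind of entry behind the helm download, assuming the stock Ignition v3 schema (the mode and any verification options in the real config are unknown):

    # Illustrative storage.files entry, not the actual provisioning config:
    cat <<'EOF'
    {
      "ignition": { "version": "3.4.0" },
      "storage": { "files": [ {
        "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
        "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz" },
        "mode": 420
      } ] }
    }
    EOF

The op(10)/op(12) preset lines map onto ordinary systemd preset logic, i.e. the effect of 'disable coreos-metadata.service' and 'enable prepare-helm.service' entries applied by systemctl preset.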
Feb 13 19:37:07.205543 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:37:07.205882 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:37:07.206221 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:37:07.206711 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:37:07.207023 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:37:07.207119 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:37:07.228441 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:37:07.229474 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:37:07.229773 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:37:07.233564 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:37:07.234569 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:37:07.234672 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:37:07.235257 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:37:07.235361 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:37:07.235829 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:37:07.236087 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:37:07.247337 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:37:07.248732 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:37:07.251084 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:37:07.252897 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:37:07.252998 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:37:07.253881 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:37:07.253978 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:37:07.255554 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:37:07.255676 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:37:07.257531 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:37:07.257632 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:37:07.270364 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:37:07.271307 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:37:07.271419 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:37:07.273030 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:37:07.275216 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:37:07.275342 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:37:07.275681 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:37:07.275775 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:37:07.283920 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:37:07.284037 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
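Everything from 'Stopped target nss-lookup.target' onward is initrd-cleanup tearing down the initramfs environment ahead of switch-root; because journald later flushes the runtime journal to disk, the same sequence can be re-read from the booted system:

    # Replay the initrd teardown with microsecond timestamps after boot:
    journalctl -b -o short-precise _SYSTEMD_UNIT=initrd-cleanup.service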
Feb 13 19:37:07.297960 ignition[1013]: INFO : Ignition 2.20.0 Feb 13 19:37:07.297960 ignition[1013]: INFO : Stage: umount Feb 13 19:37:07.299637 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:37:07.299637 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:37:07.299637 ignition[1013]: INFO : umount: umount passed Feb 13 19:37:07.299637 ignition[1013]: INFO : Ignition finished successfully Feb 13 19:37:07.302690 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:37:07.306689 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:37:07.306821 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:37:07.309709 systemd[1]: Stopped target network.target - Network. Feb 13 19:37:07.309781 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:37:07.309832 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:37:07.310160 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:37:07.310201 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:37:07.310670 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:37:07.310716 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:37:07.310996 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:37:07.311046 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:37:07.311660 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:37:07.318650 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:37:07.325269 systemd-networkd[785]: eth0: DHCPv6 lease lost Feb 13 19:37:07.327615 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:37:07.327772 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:37:07.329331 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:37:07.329375 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:37:07.340367 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:37:07.341368 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:37:07.341432 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:37:07.341616 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:37:07.341954 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:37:07.342085 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:37:07.346667 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:37:07.346762 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:37:07.347908 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:37:07.347956 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:37:07.349114 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:37:07.349167 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:37:07.376438 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:37:07.376624 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Feb 13 19:37:07.377791 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:37:07.377845 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:37:07.379888 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:37:07.379928 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:37:07.380206 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:37:07.380261 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:37:07.381200 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:37:07.381256 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:37:07.381894 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:37:07.381936 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:37:07.392560 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:37:07.392796 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:37:07.392845 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:37:07.393184 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:37:07.393241 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:37:07.404401 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:37:07.404529 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:37:07.408619 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:37:07.408735 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:37:07.511201 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:37:07.511358 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:37:07.513462 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:37:07.515118 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:37:07.515169 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:37:07.527367 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:37:07.534419 systemd[1]: Switching root. Feb 13 19:37:07.564601 systemd-journald[191]: Journal stopped Feb 13 19:37:08.757195 systemd-journald[191]: Received SIGTERM from PID 1 (systemd). Feb 13 19:37:08.757316 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:37:08.757336 kernel: SELinux: policy capability open_perms=1 Feb 13 19:37:08.757348 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:37:08.757359 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:37:08.757370 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:37:08.757386 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:37:08.757401 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:37:08.757412 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:37:08.757423 kernel: audit: type=1403 audit(1739475428.020:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:37:08.757441 systemd[1]: Successfully loaded SELinux policy in 39.208ms. Feb 13 19:37:08.757466 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.292ms. 
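The journal stops under PID 191 in the initrd and a fresh journald starts on the real root, which is also where the SELinux policy-load statistics are reported. Both halves of the handoff remain queryable later, for example:

    journalctl --list-boots               # one boot ID covers initrd + real root
    journalctl -b -t kernel -g 'SELinux'  # the policy capability lines above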
Feb 13 19:37:08.757479 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:37:08.757491 systemd[1]: Detected virtualization kvm. Feb 13 19:37:08.757503 systemd[1]: Detected architecture x86-64. Feb 13 19:37:08.757515 systemd[1]: Detected first boot. Feb 13 19:37:08.757529 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:37:08.757543 zram_generator::config[1058]: No configuration found. Feb 13 19:37:08.757556 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:37:08.757568 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:37:08.757580 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:37:08.757592 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:37:08.757610 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:37:08.757622 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:37:08.757637 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:37:08.757649 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:37:08.757661 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:37:08.757673 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:37:08.757685 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:37:08.757697 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:37:08.757709 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:37:08.757721 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:37:08.757733 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:37:08.757747 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:37:08.757759 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:37:08.757771 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:37:08.757783 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 19:37:08.757795 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:37:08.757808 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:37:08.757820 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:37:08.757832 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:37:08.757846 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:37:08.757858 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:37:08.757871 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:37:08.757883 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:37:08.757894 systemd[1]: Reached target swap.target - Swaps. 
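The long +PAM +AUDIT ... feature string is systemd's compile-time option list, and 'Initializing machine ID from VM UUID' is the first-boot seeding of /etc/machine-id from the hypervisor. All three facts can be confirmed from a shell on the running system:

    systemctl --version     # systemd 255 plus the same feature flags
    systemd-detect-virt     # prints "kvm", matching the detection above
    cat /etc/machine-id     # persisted after being seeded from the VM UUID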
Feb 13 19:37:08.757907 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:37:08.757918 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:37:08.757931 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:37:08.757945 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:37:08.757957 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:37:08.757969 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:37:08.757989 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:37:08.758001 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:37:08.758013 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:37:08.758025 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:37:08.758037 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:37:08.758049 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:37:08.758063 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:37:08.758077 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:37:08.758089 systemd[1]: Reached target machines.target - Containers. Feb 13 19:37:08.758101 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:37:08.758113 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:37:08.758125 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:37:08.758137 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:37:08.758149 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:37:08.758163 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:37:08.758175 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:37:08.758187 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:37:08.758199 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:37:08.758211 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:37:08.758236 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:37:08.758248 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:37:08.758265 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:37:08.758277 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:37:08.758291 kernel: fuse: init (API version 7.39) Feb 13 19:37:08.758303 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:37:08.758315 kernel: loop: module loaded Feb 13 19:37:08.758326 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:37:08.758338 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Feb 13 19:37:08.758352 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:37:08.758363 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:37:08.758376 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:37:08.758388 systemd[1]: Stopped verity-setup.service. Feb 13 19:37:08.758402 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:37:08.758431 systemd-journald[1132]: Collecting audit messages is disabled. Feb 13 19:37:08.758453 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:37:08.758465 kernel: ACPI: bus type drm_connector registered Feb 13 19:37:08.758477 systemd-journald[1132]: Journal started Feb 13 19:37:08.758499 systemd-journald[1132]: Runtime Journal (/run/log/journal/06aa881d5bf14e05b1efcff2bb376694) is 6.0M, max 48.4M, 42.3M free. Feb 13 19:37:08.523698 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:37:08.546256 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:37:08.546696 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:37:08.761288 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:37:08.762300 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:37:08.763516 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:37:08.764602 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:37:08.765786 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:37:08.767011 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:37:08.768268 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:37:08.769716 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:37:08.771308 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:37:08.771573 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:37:08.773070 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:37:08.773468 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:37:08.774879 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:37:08.775068 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:37:08.776431 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:37:08.776599 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:37:08.778123 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:37:08.778309 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:37:08.779675 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:37:08.779842 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:37:08.781220 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:37:08.782838 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:37:08.784429 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:37:08.800849 systemd[1]: Reached target network-pre.target - Preparation for Network. 
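Each modprobe@<module>.service line above is one instantiation of a single template unit that simply runs modprobe for its instance name; the same mechanism can be driven by hand:

    systemctl cat modprobe@.service        # the shared template body
    systemctl start modprobe@loop.service  # equivalent of the loop module lines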
Feb 13 19:37:08.812309 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:37:08.814816 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:37:08.815995 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:37:08.816038 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:37:08.818441 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:37:08.820942 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:37:08.825850 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:37:08.827073 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:37:08.829565 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:37:08.831781 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:37:08.832956 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:37:08.836071 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:37:08.837700 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:37:08.846042 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:37:08.866241 systemd-journald[1132]: Time spent on flushing to /var/log/journal/06aa881d5bf14e05b1efcff2bb376694 is 24.223ms for 952 entries. Feb 13 19:37:08.866241 systemd-journald[1132]: System Journal (/var/log/journal/06aa881d5bf14e05b1efcff2bb376694) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:37:08.906779 systemd-journald[1132]: Received client request to flush runtime journal. Feb 13 19:37:08.906817 kernel: loop0: detected capacity change from 0 to 138184 Feb 13 19:37:08.868617 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:37:08.886457 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:37:08.893431 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:37:08.894888 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:37:08.896272 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:37:08.897768 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:37:08.899958 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:37:08.906679 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:37:08.917409 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:37:08.922091 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:37:08.924034 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:37:08.925916 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 19:37:08.937619 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:37:08.939614 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 19:37:08.945318 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:37:08.955472 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:37:08.957666 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:37:08.958407 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:37:08.966252 kernel: loop1: detected capacity change from 0 to 140992 Feb 13 19:37:08.995551 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Feb 13 19:37:08.995573 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Feb 13 19:37:09.020249 kernel: loop2: detected capacity change from 0 to 210664 Feb 13 19:37:09.023149 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:37:09.056255 kernel: loop3: detected capacity change from 0 to 138184 Feb 13 19:37:09.076252 kernel: loop4: detected capacity change from 0 to 140992 Feb 13 19:37:09.085253 kernel: loop5: detected capacity change from 0 to 210664 Feb 13 19:37:09.095251 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 19:37:09.096757 (sd-merge)[1196]: Merged extensions into '/usr'. Feb 13 19:37:09.102794 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:37:09.102925 systemd[1]: Reloading... Feb 13 19:37:09.194250 zram_generator::config[1222]: No configuration found. Feb 13 19:37:09.307347 ldconfig[1167]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:37:09.310816 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:37:09.360128 systemd[1]: Reloading finished in 256 ms. Feb 13 19:37:09.392858 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:37:09.394413 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:37:09.409396 systemd[1]: Starting ensure-sysext.service... Feb 13 19:37:09.411220 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:37:09.421257 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:37:09.421272 systemd[1]: Reloading... Feb 13 19:37:09.455105 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:37:09.455475 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:37:09.456460 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:37:09.456769 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Feb 13 19:37:09.456848 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Feb 13 19:37:09.460105 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. 
Feb 13 19:37:09.460119 systemd-tmpfiles[1260]: Skipping /boot Feb 13 19:37:09.475189 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:37:09.475205 systemd-tmpfiles[1260]: Skipping /boot Feb 13 19:37:09.499256 zram_generator::config[1287]: No configuration found. Feb 13 19:37:09.609469 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:37:09.659445 systemd[1]: Reloading finished in 237 ms. Feb 13 19:37:09.676632 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:37:09.694457 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:37:09.696869 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:37:09.699633 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:37:09.703681 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:37:09.708415 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:37:09.715574 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:37:09.722015 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:37:09.722533 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:37:09.728717 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:37:09.732596 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:37:09.735467 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:37:09.758332 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:37:09.758460 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:37:09.760083 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:37:09.762259 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:37:09.762457 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:37:09.768869 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:37:09.770875 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:37:09.771061 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:37:09.772637 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:37:09.774345 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:37:09.774540 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:37:09.785068 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:37:09.785317 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:37:09.787334 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
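The 'Duplicate line for path ..., ignoring' warnings above mean two tmpfiles.d fragments declare the same path and only the first match is honored. The merged, effective configuration can be dumped to see which fragment won:

    # Show the merged tmpfiles.d configuration with duplicates resolved:
    systemd-tmpfiles --cat-config | grep -n '/var/log/journal'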
Feb 13 19:37:09.788560 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:37:09.788670 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:37:09.788755 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:37:09.793752 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:37:09.794055 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:37:09.795415 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:37:09.800738 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:37:09.803039 augenrules[1366]: No rules Feb 13 19:37:09.804470 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:37:09.805716 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:37:09.805900 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:37:09.807161 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:37:09.807454 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:37:09.809206 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:37:09.809389 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:37:09.811142 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:37:09.811320 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:37:09.812899 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:37:09.813071 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:37:09.814768 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:37:09.814965 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:37:09.819374 systemd[1]: Finished ensure-sysext.service. Feb 13 19:37:09.824090 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:37:09.824152 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:37:09.832389 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:37:09.844311 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:37:09.845565 systemd-resolved[1328]: Positive Trust Anchors: Feb 13 19:37:09.845586 systemd-resolved[1328]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:37:09.845622 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:37:09.845824 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:37:09.849215 systemd-resolved[1328]: Defaulting to hostname 'linux'. Feb 13 19:37:09.850957 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:37:09.852167 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:37:09.881987 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:37:09.883547 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:37:09.895925 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:37:09.906510 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:37:09.909050 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:37:09.925733 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:37:09.932481 systemd-udevd[1382]: Using default interface naming scheme 'v255'. Feb 13 19:37:09.952326 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:37:09.964419 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:37:09.984485 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 19:37:10.107279 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1388) Feb 13 19:37:10.127947 systemd-networkd[1391]: lo: Link UP Feb 13 19:37:10.127960 systemd-networkd[1391]: lo: Gained carrier Feb 13 19:37:10.129241 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 13 19:37:10.131713 systemd-networkd[1391]: Enumeration completed Feb 13 19:37:10.131793 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:37:10.133028 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:37:10.133333 systemd-networkd[1391]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:37:10.137551 kernel: ACPI: button: Power Button [PWRF] Feb 13 19:37:10.134498 systemd[1]: Reached target network.target - Network. Feb 13 19:37:10.134992 systemd-networkd[1391]: eth0: Link UP Feb 13 19:37:10.134998 systemd-networkd[1391]: eth0: Gained carrier Feb 13 19:37:10.135013 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:37:10.143633 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
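The '. IN DS 20326 8 2 e06d...' record is the DNS root zone's built-in DNSSEC trust anchor, and 'Defaulting to hostname linux' fires because no hostname has been configured yet. The resulting resolver state is visible with:

    resolvectl status   # per-link DNS servers and resolver settings
    hostnamectl         # would still show the fallback hostname at this point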
Feb 13 19:37:10.153141 systemd-networkd[1391]: eth0: DHCPv4 address 10.0.0.63/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:37:10.156077 systemd-timesyncd[1378]: Network configuration changed, trying to establish connection. Feb 13 19:37:10.156129 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:37:10.658378 systemd-timesyncd[1378]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 19:37:10.658458 systemd-timesyncd[1378]: Initial clock synchronization to Thu 2025-02-13 19:37:10.658162 UTC. Feb 13 19:37:10.658896 systemd-resolved[1328]: Clock change detected. Flushing caches. Feb 13 19:37:10.659807 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 13 19:37:10.661114 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 13 19:37:10.661361 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 13 19:37:10.669267 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 13 19:37:10.692213 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:37:10.708269 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 19:37:10.710091 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:37:10.714736 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:37:10.737671 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:37:10.808580 kernel: kvm_amd: TSC scaling supported Feb 13 19:37:10.808639 kernel: kvm_amd: Nested Virtualization enabled Feb 13 19:37:10.808653 kernel: kvm_amd: Nested Paging enabled Feb 13 19:37:10.809907 kernel: kvm_amd: LBR virtualization supported Feb 13 19:37:10.809934 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Feb 13 19:37:10.810678 kernel: kvm_amd: Virtual GIF supported Feb 13 19:37:10.835264 kernel: EDAC MC: Ver: 3.0.0 Feb 13 19:37:10.870535 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:37:10.872758 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:37:10.883692 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:37:10.892395 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:37:10.921672 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:37:10.958104 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:37:10.959470 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:37:10.960832 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:37:10.962333 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:37:10.964017 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:37:10.965397 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:37:10.966822 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
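Note the timestamp jump between adjacent entries (19:37:10.156 to 19:37:10.658): timesyncd reached 10.0.0.1:123 and stepped the clock, which is also why resolved logs 'Clock change detected. Flushing caches.' The sync and lease state can be checked with:

    timedatectl timesync-status   # server, stratum, and offset details
    networkctl status eth0        # the DHCPv4 10.0.0.63/16 lease logged above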
Feb 13 19:37:10.968268 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:37:10.968305 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:37:10.969350 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:37:10.971343 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:37:10.974825 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:37:10.985017 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:37:10.987541 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:37:10.989414 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:37:10.990741 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:37:10.991862 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:37:10.992988 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:37:10.993028 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:37:10.994232 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:37:10.996671 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:37:11.001360 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:37:11.005256 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:37:11.007074 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:37:11.008377 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:37:11.009887 jq[1435]: false Feb 13 19:37:11.012310 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:37:11.013359 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:37:11.021391 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:37:11.024025 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:37:11.028497 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:37:11.031412 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:37:11.031852 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:37:11.032521 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:37:11.034740 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:37:11.034958 dbus-daemon[1434]: [system] SELinux support is enabled Feb 13 19:37:11.037225 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Feb 13 19:37:11.041340 extend-filesystems[1436]: Found loop3 Feb 13 19:37:11.041340 extend-filesystems[1436]: Found loop4 Feb 13 19:37:11.041340 extend-filesystems[1436]: Found loop5 Feb 13 19:37:11.041340 extend-filesystems[1436]: Found sr0 Feb 13 19:37:11.041340 extend-filesystems[1436]: Found vda Feb 13 19:37:11.041340 extend-filesystems[1436]: Found vda1 Feb 13 19:37:11.041340 extend-filesystems[1436]: Found vda2 Feb 13 19:37:11.041340 extend-filesystems[1436]: Found vda3 Feb 13 19:37:11.041340 extend-filesystems[1436]: Found usr Feb 13 19:37:11.041340 extend-filesystems[1436]: Found vda4 Feb 13 19:37:11.041340 extend-filesystems[1436]: Found vda6 Feb 13 19:37:11.041340 extend-filesystems[1436]: Found vda7 Feb 13 19:37:11.041340 extend-filesystems[1436]: Found vda9 Feb 13 19:37:11.041340 extend-filesystems[1436]: Checking size of /dev/vda9 Feb 13 19:37:11.189789 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1399) Feb 13 19:37:11.042613 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:37:11.190134 extend-filesystems[1436]: Resized partition /dev/vda9 Feb 13 19:37:11.191067 update_engine[1447]: I20250213 19:37:11.056844 1447 main.cc:92] Flatcar Update Engine starting Feb 13 19:37:11.191067 update_engine[1447]: I20250213 19:37:11.058192 1447 update_check_scheduler.cc:74] Next update check in 11m6s Feb 13 19:37:11.042813 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:37:11.191653 extend-filesystems[1470]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:37:11.192720 jq[1448]: true Feb 13 19:37:11.055851 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:37:11.192956 tar[1455]: linux-amd64/helm Feb 13 19:37:11.062371 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:37:11.062583 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:37:11.193396 jq[1461]: true Feb 13 19:37:11.064041 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:37:11.064299 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:37:11.119838 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:37:11.125980 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:37:11.127319 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:37:11.127338 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:37:11.128656 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:37:11.128669 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:37:11.133425 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:37:11.145463 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
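extend-filesystems enumerates the block devices ('Found vda9', 'Checking size of /dev/vda9') and then grows the root ext4 online with resize2fs 1.47.1; the resize itself completes slightly later in the log ('resized filesystem to 1864699'). The manual equivalent for this layout would be roughly:

    # Rough manual equivalent; the service performs this on its own:
    resize2fs /dev/vda9   # online-grow ext4 mounted at / to fill the partition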
Feb 13 19:37:11.179457 systemd-logind[1445]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 19:37:11.179479 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 19:37:11.180167 systemd-logind[1445]: New seat seat0. Feb 13 19:37:11.194844 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:37:11.252274 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:37:11.373145 locksmithd[1469]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:37:11.482494 sshd_keygen[1457]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:37:11.509284 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:37:11.563317 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:37:11.568199 systemd[1]: Started sshd@0-10.0.0.63:22-10.0.0.1:51830.service - OpenSSH per-connection server daemon (10.0.0.1:51830). Feb 13 19:37:11.570906 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:37:11.571157 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:37:11.575012 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:37:11.640330 tar[1455]: linux-amd64/LICENSE Feb 13 19:37:11.640443 tar[1455]: linux-amd64/README.md Feb 13 19:37:11.659022 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:37:11.691470 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:37:11.708658 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:37:11.741004 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:37:11.742261 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:37:11.841272 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:37:11.920958 sshd[1510]: Connection closed by authenticating user core 10.0.0.1 port 51830 [preauth] Feb 13 19:37:11.872801 systemd[1]: sshd@0-10.0.0.63:22-10.0.0.1:51830.service: Deactivated successfully. Feb 13 19:37:11.921379 extend-filesystems[1470]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:37:11.921379 extend-filesystems[1470]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:37:11.921379 extend-filesystems[1470]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:37:11.926172 extend-filesystems[1436]: Resized filesystem in /dev/vda9 Feb 13 19:37:11.925141 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:37:11.925464 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:37:11.934012 bash[1489]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:37:11.935993 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:37:11.938283 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:37:11.958789 containerd[1462]: time="2025-02-13T19:37:11.958697818Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:37:11.983639 containerd[1462]: time="2025-02-13T19:37:11.983562890Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:37:11.985615 containerd[1462]: time="2025-02-13T19:37:11.985540999Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:37:11.985615 containerd[1462]: time="2025-02-13T19:37:11.985599539Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:37:11.985615 containerd[1462]: time="2025-02-13T19:37:11.985627040Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:37:11.985895 containerd[1462]: time="2025-02-13T19:37:11.985864606Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:37:11.985956 containerd[1462]: time="2025-02-13T19:37:11.985893480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:37:11.986016 containerd[1462]: time="2025-02-13T19:37:11.985991704Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:37:11.986037 containerd[1462]: time="2025-02-13T19:37:11.986013926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:37:11.986307 containerd[1462]: time="2025-02-13T19:37:11.986280986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:37:11.986307 containerd[1462]: time="2025-02-13T19:37:11.986304931Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:37:11.986352 containerd[1462]: time="2025-02-13T19:37:11.986322494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:37:11.986352 containerd[1462]: time="2025-02-13T19:37:11.986337091Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:37:11.986481 containerd[1462]: time="2025-02-13T19:37:11.986458780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:37:11.986789 containerd[1462]: time="2025-02-13T19:37:11.986758572Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:37:11.986944 containerd[1462]: time="2025-02-13T19:37:11.986903624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:37:11.986944 containerd[1462]: time="2025-02-13T19:37:11.986937207Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:37:11.987086 containerd[1462]: time="2025-02-13T19:37:11.987058153Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 19:37:11.987165 containerd[1462]: time="2025-02-13T19:37:11.987137943Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:37:12.053485 systemd-networkd[1391]: eth0: Gained IPv6LL Feb 13 19:37:12.057193 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:37:12.059081 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:37:12.076490 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:37:12.112476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:37:12.115075 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:37:12.133946 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:37:12.134267 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:37:12.135941 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:37:12.138304 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:37:12.172512 containerd[1462]: time="2025-02-13T19:37:12.172457136Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:37:12.172587 containerd[1462]: time="2025-02-13T19:37:12.172540793Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:37:12.172587 containerd[1462]: time="2025-02-13T19:37:12.172564818Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:37:12.172622 containerd[1462]: time="2025-02-13T19:37:12.172587140Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:37:12.172622 containerd[1462]: time="2025-02-13T19:37:12.172606306Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:37:12.172851 containerd[1462]: time="2025-02-13T19:37:12.172822492Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:37:12.173129 containerd[1462]: time="2025-02-13T19:37:12.173106254Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:37:12.173299 containerd[1462]: time="2025-02-13T19:37:12.173277645Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:37:12.173321 containerd[1462]: time="2025-02-13T19:37:12.173300508Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:37:12.173339 containerd[1462]: time="2025-02-13T19:37:12.173317760Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:37:12.173339 containerd[1462]: time="2025-02-13T19:37:12.173333620Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:37:12.173383 containerd[1462]: time="2025-02-13T19:37:12.173350562Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:37:12.173383 containerd[1462]: time="2025-02-13T19:37:12.173365089Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 19:37:12.173417 containerd[1462]: time="2025-02-13T19:37:12.173381229Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:37:12.173417 containerd[1462]: time="2025-02-13T19:37:12.173399894Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:37:12.173450 containerd[1462]: time="2025-02-13T19:37:12.173415113Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:37:12.173450 containerd[1462]: time="2025-02-13T19:37:12.173429670Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:37:12.173450 containerd[1462]: time="2025-02-13T19:37:12.173443035Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:37:12.173504 containerd[1462]: time="2025-02-13T19:37:12.173470927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:37:12.173504 containerd[1462]: time="2025-02-13T19:37:12.173489091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:37:12.173544 containerd[1462]: time="2025-02-13T19:37:12.173504550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:37:12.173544 containerd[1462]: time="2025-02-13T19:37:12.173519278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:37:12.173544 containerd[1462]: time="2025-02-13T19:37:12.173535719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:37:12.173599 containerd[1462]: time="2025-02-13T19:37:12.173551579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:37:12.173599 containerd[1462]: time="2025-02-13T19:37:12.173566006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:37:12.173633 containerd[1462]: time="2025-02-13T19:37:12.173603596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:37:12.173633 containerd[1462]: time="2025-02-13T19:37:12.173621500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:37:12.173672 containerd[1462]: time="2025-02-13T19:37:12.173649542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:37:12.173672 containerd[1462]: time="2025-02-13T19:37:12.173666404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:37:12.173707 containerd[1462]: time="2025-02-13T19:37:12.173683015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:37:12.173707 containerd[1462]: time="2025-02-13T19:37:12.173698634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:37:12.173741 containerd[1462]: time="2025-02-13T19:37:12.173722810Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 19:37:12.173759 containerd[1462]: time="2025-02-13T19:37:12.173748227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:37:12.173778 containerd[1462]: time="2025-02-13T19:37:12.173763907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:37:12.173801 containerd[1462]: time="2025-02-13T19:37:12.173778544Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:37:12.173850 containerd[1462]: time="2025-02-13T19:37:12.173837745Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:37:12.173883 containerd[1462]: time="2025-02-13T19:37:12.173863954Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:37:12.173883 containerd[1462]: time="2025-02-13T19:37:12.173877981Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:37:12.173945 containerd[1462]: time="2025-02-13T19:37:12.173895303Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:37:12.173945 containerd[1462]: time="2025-02-13T19:37:12.173916733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:37:12.173981 containerd[1462]: time="2025-02-13T19:37:12.173951238Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:37:12.174000 containerd[1462]: time="2025-02-13T19:37:12.173980172Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:37:12.174019 containerd[1462]: time="2025-02-13T19:37:12.174006932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 19:37:12.174446 containerd[1462]: time="2025-02-13T19:37:12.174372147Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:37:12.174446 containerd[1462]: time="2025-02-13T19:37:12.174441667Z" level=info msg="Connect containerd service" Feb 13 19:37:12.174699 containerd[1462]: time="2025-02-13T19:37:12.174480530Z" level=info msg="using legacy CRI server" Feb 13 19:37:12.174699 containerd[1462]: time="2025-02-13T19:37:12.174489427Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:37:12.175284 containerd[1462]: time="2025-02-13T19:37:12.175145457Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:37:12.176372 containerd[1462]: time="2025-02-13T19:37:12.176337843Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:37:12.176585 containerd[1462]: time="2025-02-13T19:37:12.176507701Z" level=info msg="Start subscribing containerd event" Feb 13 19:37:12.176585 containerd[1462]: time="2025-02-13T19:37:12.176559268Z" level=info msg="Start recovering state" Feb 13 19:37:12.176739 containerd[1462]: time="2025-02-13T19:37:12.176620473Z" level=info msg="Start event monitor" Feb 13 19:37:12.176739 containerd[1462]: time="2025-02-13T19:37:12.176641482Z" level=info msg="Start snapshots syncer" Feb 13 19:37:12.176739 containerd[1462]: time="2025-02-13T19:37:12.176649848Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:37:12.176739 containerd[1462]: time="2025-02-13T19:37:12.176656891Z" level=info msg="Start streaming server" Feb 13 19:37:12.176739 containerd[1462]: time="2025-02-13T19:37:12.176715621Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:37:12.176886 containerd[1462]: time="2025-02-13T19:37:12.176776615Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:37:12.176886 containerd[1462]: time="2025-02-13T19:37:12.176841157Z" level=info msg="containerd successfully booted in 0.219162s" Feb 13 19:37:12.176972 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:37:13.070544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:37:13.072381 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:37:13.073699 systemd[1]: Startup finished in 717ms (kernel) + 6.327s (initrd) + 4.591s (userspace) = 11.635s. Feb 13 19:37:13.086009 (kubelet)[1553]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:37:13.614081 kubelet[1553]: E0213 19:37:13.613956 1553 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:37:13.618258 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:37:13.618508 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:37:13.618880 systemd[1]: kubelet.service: Consumed 1.340s CPU time. Feb 13 19:37:21.882160 systemd[1]: Started sshd@1-10.0.0.63:22-10.0.0.1:33420.service - OpenSSH per-connection server daemon (10.0.0.1:33420). Feb 13 19:37:21.924023 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 33420 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:37:21.925573 sshd-session[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:37:21.934336 systemd-logind[1445]: New session 1 of user core. Feb 13 19:37:21.935612 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:37:21.943553 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:37:21.954880 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:37:21.957458 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:37:21.965234 (systemd)[1571]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:37:22.063323 systemd[1571]: Queued start job for default target default.target.
Feb 13 19:37:22.073459 systemd[1571]: Created slice app.slice - User Application Slice. Feb 13 19:37:22.073483 systemd[1571]: Reached target paths.target - Paths. Feb 13 19:37:22.073495 systemd[1571]: Reached target timers.target - Timers. Feb 13 19:37:22.074954 systemd[1571]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:37:22.085728 systemd[1571]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:37:22.085839 systemd[1571]: Reached target sockets.target - Sockets. Feb 13 19:37:22.085857 systemd[1571]: Reached target basic.target - Basic System. Feb 13 19:37:22.085891 systemd[1571]: Reached target default.target - Main User Target. Feb 13 19:37:22.085921 systemd[1571]: Startup finished in 114ms. Feb 13 19:37:22.086342 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:37:22.087868 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:37:22.148808 systemd[1]: Started sshd@2-10.0.0.63:22-10.0.0.1:33422.service - OpenSSH per-connection server daemon (10.0.0.1:33422). Feb 13 19:37:22.196214 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 33422 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:37:22.197603 sshd-session[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:37:22.201507 systemd-logind[1445]: New session 2 of user core. Feb 13 19:37:22.211368 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:37:22.263763 sshd[1584]: Connection closed by 10.0.0.1 port 33422 Feb 13 19:37:22.264060 sshd-session[1582]: pam_unix(sshd:session): session closed for user core Feb 13 19:37:22.274840 systemd[1]: sshd@2-10.0.0.63:22-10.0.0.1:33422.service: Deactivated successfully. Feb 13 19:37:22.276575 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:37:22.278106 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:37:22.279328 systemd[1]: Started sshd@3-10.0.0.63:22-10.0.0.1:33426.service - OpenSSH per-connection server daemon (10.0.0.1:33426). Feb 13 19:37:22.280007 systemd-logind[1445]: Removed session 2. Feb 13 19:37:22.328517 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 33426 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:37:22.329796 sshd-session[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:37:22.333402 systemd-logind[1445]: New session 3 of user core. Feb 13 19:37:22.349356 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:37:22.397496 sshd[1591]: Connection closed by 10.0.0.1 port 33426 Feb 13 19:37:22.397833 sshd-session[1589]: pam_unix(sshd:session): session closed for user core Feb 13 19:37:22.411833 systemd[1]: sshd@3-10.0.0.63:22-10.0.0.1:33426.service: Deactivated successfully. Feb 13 19:37:22.413514 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:37:22.414950 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:37:22.416124 systemd[1]: Started sshd@4-10.0.0.63:22-10.0.0.1:33432.service - OpenSSH per-connection server daemon (10.0.0.1:33432). Feb 13 19:37:22.416770 systemd-logind[1445]: Removed session 3. 
Feb 13 19:37:22.457438 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 33432 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:37:22.458810 sshd-session[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:37:22.462073 systemd-logind[1445]: New session 4 of user core. Feb 13 19:37:22.470352 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:37:22.522635 sshd[1598]: Connection closed by 10.0.0.1 port 33432 Feb 13 19:37:22.522938 sshd-session[1596]: pam_unix(sshd:session): session closed for user core Feb 13 19:37:22.535981 systemd[1]: sshd@4-10.0.0.63:22-10.0.0.1:33432.service: Deactivated successfully. Feb 13 19:37:22.537666 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:37:22.539020 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:37:22.556470 systemd[1]: Started sshd@5-10.0.0.63:22-10.0.0.1:33444.service - OpenSSH per-connection server daemon (10.0.0.1:33444). Feb 13 19:37:22.557391 systemd-logind[1445]: Removed session 4. Feb 13 19:37:22.594021 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 33444 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:37:22.595439 sshd-session[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:37:22.598919 systemd-logind[1445]: New session 5 of user core. Feb 13 19:37:22.606374 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:37:22.662868 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:37:22.663221 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:37:22.679136 sudo[1606]: pam_unix(sudo:session): session closed for user root Feb 13 19:37:22.680470 sshd[1605]: Connection closed by 10.0.0.1 port 33444 Feb 13 19:37:22.680850 sshd-session[1603]: pam_unix(sshd:session): session closed for user core Feb 13 19:37:22.692840 systemd[1]: sshd@5-10.0.0.63:22-10.0.0.1:33444.service: Deactivated successfully. Feb 13 19:37:22.694352 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:37:22.695580 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:37:22.704475 systemd[1]: Started sshd@6-10.0.0.63:22-10.0.0.1:33454.service - OpenSSH per-connection server daemon (10.0.0.1:33454). Feb 13 19:37:22.705389 systemd-logind[1445]: Removed session 5. Feb 13 19:37:22.741960 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 33454 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:37:22.743372 sshd-session[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:37:22.746644 systemd-logind[1445]: New session 6 of user core. Feb 13 19:37:22.756348 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 19:37:22.808681 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:37:22.809003 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:37:22.812169 sudo[1615]: pam_unix(sudo:session): session closed for user root Feb 13 19:37:22.817527 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:37:22.817848 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:37:22.838522 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:37:22.866586 augenrules[1637]: No rules Feb 13 19:37:22.868285 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:37:22.868513 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:37:22.869746 sudo[1614]: pam_unix(sudo:session): session closed for user root Feb 13 19:37:22.871210 sshd[1613]: Connection closed by 10.0.0.1 port 33454 Feb 13 19:37:22.871546 sshd-session[1611]: pam_unix(sshd:session): session closed for user core Feb 13 19:37:22.881878 systemd[1]: sshd@6-10.0.0.63:22-10.0.0.1:33454.service: Deactivated successfully. Feb 13 19:37:22.883432 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:37:22.884767 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:37:22.885904 systemd[1]: Started sshd@7-10.0.0.63:22-10.0.0.1:33470.service - OpenSSH per-connection server daemon (10.0.0.1:33470). Feb 13 19:37:22.886717 systemd-logind[1445]: Removed session 6. Feb 13 19:37:22.927775 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 33470 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:37:22.929135 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:37:22.932676 systemd-logind[1445]: New session 7 of user core. Feb 13 19:37:22.942378 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:37:22.994063 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:37:22.994403 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:37:23.255442 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:37:23.255641 (dockerd)[1668]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:37:23.492962 dockerd[1668]: time="2025-02-13T19:37:23.492899272Z" level=info msg="Starting up" Feb 13 19:37:23.589259 dockerd[1668]: time="2025-02-13T19:37:23.589145174Z" level=info msg="Loading containers: start." Feb 13 19:37:23.689732 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:37:23.698439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:37:23.870054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:37:23.875732 (kubelet)[1774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:37:23.937459 kubelet[1774]: E0213 19:37:23.937386 1774 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:37:23.945788 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:37:23.946006 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:37:23.995269 kernel: Initializing XFRM netlink socket Feb 13 19:37:24.072698 systemd-networkd[1391]: docker0: Link UP Feb 13 19:37:24.120814 dockerd[1668]: time="2025-02-13T19:37:24.120710619Z" level=info msg="Loading containers: done." Feb 13 19:37:24.134681 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1824340716-merged.mount: Deactivated successfully. Feb 13 19:37:24.138071 dockerd[1668]: time="2025-02-13T19:37:24.138027217Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:37:24.138164 dockerd[1668]: time="2025-02-13T19:37:24.138140920Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 19:37:24.138309 dockerd[1668]: time="2025-02-13T19:37:24.138287205Z" level=info msg="Daemon has completed initialization" Feb 13 19:37:24.175636 dockerd[1668]: time="2025-02-13T19:37:24.175560232Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:37:24.175727 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:37:24.971144 containerd[1462]: time="2025-02-13T19:37:24.971080802Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 19:37:25.640609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount888306517.mount: Deactivated successfully. 
Feb 13 19:37:27.013123 containerd[1462]: time="2025-02-13T19:37:27.013050074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:27.028471 containerd[1462]: time="2025-02-13T19:37:27.028360341Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32678214" Feb 13 19:37:27.030235 containerd[1462]: time="2025-02-13T19:37:27.030191865Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:27.033320 containerd[1462]: time="2025-02-13T19:37:27.033262062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:27.034405 containerd[1462]: time="2025-02-13T19:37:27.034363677Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 2.063231518s" Feb 13 19:37:27.034469 containerd[1462]: time="2025-02-13T19:37:27.034404454Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\"" Feb 13 19:37:27.062704 containerd[1462]: time="2025-02-13T19:37:27.062656516Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 19:37:29.144338 containerd[1462]: time="2025-02-13T19:37:29.144271634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:29.145251 containerd[1462]: time="2025-02-13T19:37:29.145200295Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29611545" Feb 13 19:37:29.146791 containerd[1462]: time="2025-02-13T19:37:29.146749180Z" level=info msg="ImageCreate event name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:29.150090 containerd[1462]: time="2025-02-13T19:37:29.150043837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:29.150929 containerd[1462]: time="2025-02-13T19:37:29.150897608Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 2.088195566s" Feb 13 19:37:29.150986 containerd[1462]: time="2025-02-13T19:37:29.150930429Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\""
Feb 13 19:37:29.175654 containerd[1462]: time="2025-02-13T19:37:29.175598282Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 19:37:30.125526 containerd[1462]: time="2025-02-13T19:37:30.125466604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:30.126331 containerd[1462]: time="2025-02-13T19:37:30.126277254Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17782130" Feb 13 19:37:30.127721 containerd[1462]: time="2025-02-13T19:37:30.127685816Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:30.130965 containerd[1462]: time="2025-02-13T19:37:30.130929167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:30.132990 containerd[1462]: time="2025-02-13T19:37:30.132960415Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 957.323832ms" Feb 13 19:37:30.133055 containerd[1462]: time="2025-02-13T19:37:30.132989931Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\"" Feb 13 19:37:30.153031 containerd[1462]: time="2025-02-13T19:37:30.152964794Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:37:31.641528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount781896971.mount: Deactivated successfully.
Feb 13 19:37:32.257152 containerd[1462]: time="2025-02-13T19:37:32.257089998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:32.257885 containerd[1462]: time="2025-02-13T19:37:32.257849452Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057858" Feb 13 19:37:32.259088 containerd[1462]: time="2025-02-13T19:37:32.259020778Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:32.260895 containerd[1462]: time="2025-02-13T19:37:32.260862181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:32.261498 containerd[1462]: time="2025-02-13T19:37:32.261468588Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 2.108467456s" Feb 13 19:37:32.261540 containerd[1462]: time="2025-02-13T19:37:32.261498093Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 19:37:32.281848 containerd[1462]: time="2025-02-13T19:37:32.281812703Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:37:33.153439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1620472682.mount: Deactivated successfully. 
Feb 13 19:37:34.140023 containerd[1462]: time="2025-02-13T19:37:34.139970478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:34.140704 containerd[1462]: time="2025-02-13T19:37:34.140623513Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 19:37:34.141943 containerd[1462]: time="2025-02-13T19:37:34.141864520Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:34.144840 containerd[1462]: time="2025-02-13T19:37:34.144796617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:34.146125 containerd[1462]: time="2025-02-13T19:37:34.146097406Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.864106359s" Feb 13 19:37:34.146125 containerd[1462]: time="2025-02-13T19:37:34.146124437Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 19:37:34.170228 containerd[1462]: time="2025-02-13T19:37:34.170183307Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 19:37:34.189771 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:37:34.201402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:37:34.348661 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:37:34.354494 (kubelet)[2042]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:37:34.392210 kubelet[2042]: E0213 19:37:34.392000 2042 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:37:34.396404 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:37:34.396623 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:37:34.941294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount595322161.mount: Deactivated successfully. 
Feb 13 19:37:34.947743 containerd[1462]: time="2025-02-13T19:37:34.947685343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:34.948601 containerd[1462]: time="2025-02-13T19:37:34.948570523Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Feb 13 19:37:34.949800 containerd[1462]: time="2025-02-13T19:37:34.949769221Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:34.952332 containerd[1462]: time="2025-02-13T19:37:34.952297421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:34.953069 containerd[1462]: time="2025-02-13T19:37:34.953032399Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 782.81076ms" Feb 13 19:37:34.953108 containerd[1462]: time="2025-02-13T19:37:34.953075209Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 19:37:34.974377 containerd[1462]: time="2025-02-13T19:37:34.974320604Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 19:37:35.609021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount24062744.mount: Deactivated successfully. Feb 13 19:37:38.101993 containerd[1462]: time="2025-02-13T19:37:38.101900105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:38.102939 containerd[1462]: time="2025-02-13T19:37:38.102886044Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Feb 13 19:37:38.104328 containerd[1462]: time="2025-02-13T19:37:38.104288383Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:38.107513 containerd[1462]: time="2025-02-13T19:37:38.107461513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:37:38.108736 containerd[1462]: time="2025-02-13T19:37:38.108701458Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.134324829s" Feb 13 19:37:38.108736 containerd[1462]: time="2025-02-13T19:37:38.108730793Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Feb 13 19:37:40.545159 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:37:40.555448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:37:40.571168 systemd[1]: Reloading requested from client PID 2192 ('systemctl') (unit session-7.scope)... Feb 13 19:37:40.571184 systemd[1]: Reloading... Feb 13 19:37:40.657263 zram_generator::config[2234]: No configuration found. Feb 13 19:37:40.931659 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:37:41.010456 systemd[1]: Reloading finished in 438 ms. Feb 13 19:37:41.061990 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:37:41.065695 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:37:41.068268 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:37:41.068600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:37:41.080589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:37:41.222271 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:37:41.226687 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:37:41.266618 kubelet[2281]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:37:41.266618 kubelet[2281]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:37:41.266618 kubelet[2281]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:37:41.267508 kubelet[2281]: I0213 19:37:41.267466 2281 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:37:41.662233 kubelet[2281]: I0213 19:37:41.662115 2281 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:37:41.662233 kubelet[2281]: I0213 19:37:41.662151 2281 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:37:41.663983 kubelet[2281]: I0213 19:37:41.662649 2281 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:37:41.676208 kubelet[2281]: I0213 19:37:41.676156 2281 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:37:41.676693 kubelet[2281]: E0213 19:37:41.676649 2281 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.63:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.63:6443: connect: connection refused Feb 13 19:37:41.688968 kubelet[2281]: I0213 19:37:41.688933 2281 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 19:37:41.690250 kubelet[2281]: I0213 19:37:41.690198 2281 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:37:41.690478 kubelet[2281]: I0213 19:37:41.690257 2281 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:37:41.690985 kubelet[2281]: I0213 19:37:41.690959 2281 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:37:41.690985 kubelet[2281]: I0213 19:37:41.690983 2281 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:37:41.691181 kubelet[2281]: I0213 19:37:41.691157 2281 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:37:41.691890 kubelet[2281]: I0213 19:37:41.691867 2281 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:37:41.691890 kubelet[2281]: I0213 19:37:41.691888 2281 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:37:41.691948 kubelet[2281]: I0213 19:37:41.691914 2281 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:37:41.691948 kubelet[2281]: I0213 19:37:41.691937 2281 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:37:41.694455 kubelet[2281]: W0213 19:37:41.694322 2281 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Feb 13 19:37:41.694455 kubelet[2281]: E0213 19:37:41.694377 2281 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Feb 13 19:37:41.694455 kubelet[2281]: W0213 19:37:41.694371 2281 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.63:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
Feb 13 19:37:41.694455 kubelet[2281]: E0213 19:37:41.694417 2281 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.63:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Feb 13 19:37:41.696403 kubelet[2281]: I0213 19:37:41.696383 2281 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:37:41.697594 kubelet[2281]: I0213 19:37:41.697563 2281 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:37:41.697666 kubelet[2281]: W0213 19:37:41.697619 2281 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:37:41.698312 kubelet[2281]: I0213 19:37:41.698292 2281 server.go:1264] "Started kubelet" Feb 13 19:37:41.699352 kubelet[2281]: I0213 19:37:41.699159 2281 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:37:41.700023 kubelet[2281]: I0213 19:37:41.699989 2281 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:37:41.700091 kubelet[2281]: I0213 19:37:41.700037 2281 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:37:41.700091 kubelet[2281]: I0213 19:37:41.700080 2281 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:37:41.703281 kubelet[2281]: I0213 19:37:41.703235 2281 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:37:41.704202 kubelet[2281]: I0213 19:37:41.704018 2281 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:37:41.704398 kubelet[2281]: I0213 19:37:41.704367 2281 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:37:41.704650 kubelet[2281]: I0213 19:37:41.704623 2281 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:37:41.704715 kubelet[2281]: E0213 19:37:41.704612 2281 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.63:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.63:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823dbb75e08b31a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:37:41.698269978 +0000 UTC m=+0.467323964,LastTimestamp:2025-02-13 19:37:41.698269978 +0000 UTC m=+0.467323964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:37:41.706023 kubelet[2281]: W0213 19:37:41.705235 2281 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Feb 13 19:37:41.706023 kubelet[2281]: E0213 19:37:41.705309 2281 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused
Feb 13 19:37:41.706023 kubelet[2281]: E0213 19:37:41.705757 2281 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="200ms" Feb 13 19:37:41.706379 kubelet[2281]: I0213 19:37:41.706354 2281 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:37:41.706468 kubelet[2281]: I0213 19:37:41.706438 2281 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:37:41.707164 kubelet[2281]: E0213 19:37:41.707137 2281 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:37:41.709540 kubelet[2281]: I0213 19:37:41.709516 2281 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:37:41.721582 kubelet[2281]: I0213 19:37:41.720788 2281 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:37:41.722004 kubelet[2281]: I0213 19:37:41.721986 2281 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:37:41.722054 kubelet[2281]: I0213 19:37:41.722006 2281 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:37:41.722054 kubelet[2281]: I0213 19:37:41.722028 2281 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:37:41.722112 kubelet[2281]: E0213 19:37:41.722065 2281 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:37:41.722679 kubelet[2281]: W0213 19:37:41.722648 2281 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Feb 13 19:37:41.722724 kubelet[2281]: E0213 19:37:41.722683 2281 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Feb 13 19:37:41.725089 kubelet[2281]: I0213 19:37:41.725052 2281 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:37:41.725089 kubelet[2281]: I0213 19:37:41.725064 2281 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:37:41.725089 kubelet[2281]: I0213 19:37:41.725089 2281 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:37:41.806015 kubelet[2281]: I0213 19:37:41.805979 2281 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:37:41.806348 kubelet[2281]: E0213 19:37:41.806307 2281 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost" Feb 13 19:37:41.822482 kubelet[2281]: E0213 19:37:41.822430 2281 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 13 19:37:41.906197 kubelet[2281]: E0213 19:37:41.906140 2281 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="400ms" Feb 13 19:37:41.956858 kubelet[2281]: I0213 19:37:41.956831 2281 policy_none.go:49] "None policy: Start" Feb 13 19:37:41.957539 kubelet[2281]: I0213 19:37:41.957513 2281 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:37:41.957585 kubelet[2281]: I0213 19:37:41.957568 2281 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:37:41.963352 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:37:41.976877 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:37:41.979682 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:37:41.987101 kubelet[2281]: I0213 19:37:41.987043 2281 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:37:41.987322 kubelet[2281]: I0213 19:37:41.987282 2281 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:37:41.987674 kubelet[2281]: I0213 19:37:41.987430 2281 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:37:41.988236 kubelet[2281]: E0213 19:37:41.988202 2281 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:37:42.008428 kubelet[2281]: I0213 19:37:42.008407 2281 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:37:42.008752 kubelet[2281]: E0213 19:37:42.008730 2281 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost" Feb 13 19:37:42.022888 kubelet[2281]: I0213 19:37:42.022850 2281 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 19:37:42.023784 kubelet[2281]: I0213 19:37:42.023763 2281 topology_manager.go:215] "Topology Admit Handler" podUID="4d2c93958c651de9209a5ccc88e194fe" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 19:37:42.024501 kubelet[2281]: I0213 19:37:42.024472 2281 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 19:37:42.030460 systemd[1]: Created slice kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice - libcontainer container kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice. Feb 13 19:37:42.043954 systemd[1]: Created slice kubepods-burstable-pod4d2c93958c651de9209a5ccc88e194fe.slice - libcontainer container kubepods-burstable-pod4d2c93958c651de9209a5ccc88e194fe.slice. Feb 13 19:37:42.061036 systemd[1]: Created slice kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice - libcontainer container kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice. 
Feb 13 19:37:42.106909 kubelet[2281]: I0213 19:37:42.106881 2281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:37:42.107011 kubelet[2281]: I0213 19:37:42.106919 2281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:37:42.107011 kubelet[2281]: I0213 19:37:42.106952 2281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:37:42.107011 kubelet[2281]: I0213 19:37:42.106987 2281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4d2c93958c651de9209a5ccc88e194fe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4d2c93958c651de9209a5ccc88e194fe\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:37:42.107124 kubelet[2281]: I0213 19:37:42.107014 2281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4d2c93958c651de9209a5ccc88e194fe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4d2c93958c651de9209a5ccc88e194fe\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:37:42.107124 kubelet[2281]: I0213 19:37:42.107054 2281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4d2c93958c651de9209a5ccc88e194fe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4d2c93958c651de9209a5ccc88e194fe\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:37:42.107124 kubelet[2281]: I0213 19:37:42.107094 2281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:37:42.107124 kubelet[2281]: I0213 19:37:42.107117 2281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:37:42.107265 kubelet[2281]: I0213 19:37:42.107144 2281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 13 19:37:42.307464 kubelet[2281]: E0213 19:37:42.307330 2281 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="800ms" Feb 13 19:37:42.342657 kubelet[2281]: E0213 19:37:42.342627 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:42.343449 containerd[1462]: time="2025-02-13T19:37:42.343401179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 19:37:42.359746 kubelet[2281]: E0213 19:37:42.359712 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:42.360587 containerd[1462]: time="2025-02-13T19:37:42.360560833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4d2c93958c651de9209a5ccc88e194fe,Namespace:kube-system,Attempt:0,}" Feb 13 19:37:42.363899 kubelet[2281]: E0213 19:37:42.363879 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:42.364262 containerd[1462]: time="2025-02-13T19:37:42.364209123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 19:37:42.410849 kubelet[2281]: I0213 19:37:42.410798 2281 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:37:42.411112 kubelet[2281]: E0213 19:37:42.411084 2281 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost" Feb 13 19:37:42.669613 kubelet[2281]: E0213 19:37:42.669445 2281 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.63:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.63:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823dbb75e08b31a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:37:41.698269978 +0000 UTC m=+0.467323964,LastTimestamp:2025-02-13 19:37:41.698269978 +0000 UTC m=+0.467323964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:37:42.732160 kubelet[2281]: W0213 19:37:42.732121 2281 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Feb 13 19:37:42.732160 kubelet[2281]: E0213 19:37:42.732162 2281 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get "https://10.0.0.63:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Feb 13 19:37:42.852963 kubelet[2281]: W0213 19:37:42.852864 2281 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Feb 13 19:37:42.852963 kubelet[2281]: E0213 19:37:42.852953 2281 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.63:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Feb 13 19:37:42.891144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount951145492.mount: Deactivated successfully. Feb 13 19:37:42.898258 containerd[1462]: time="2025-02-13T19:37:42.898200237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:37:42.901281 containerd[1462]: time="2025-02-13T19:37:42.901228084Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 19:37:42.902307 containerd[1462]: time="2025-02-13T19:37:42.902278554Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:37:43.003015 kubelet[2281]: W0213 19:37:43.002943 2281 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.63:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Feb 13 19:37:43.003015 kubelet[2281]: E0213 19:37:43.003016 2281 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.63:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Feb 13 19:37:43.042085 containerd[1462]: time="2025-02-13T19:37:43.042051571Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:37:43.043233 containerd[1462]: time="2025-02-13T19:37:43.043192569Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:37:43.046283 containerd[1462]: time="2025-02-13T19:37:43.046255771Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:37:43.047317 containerd[1462]: time="2025-02-13T19:37:43.047228667Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:37:43.048839 containerd[1462]: time="2025-02-13T19:37:43.048786418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:37:43.050686 containerd[1462]: 
time="2025-02-13T19:37:43.050645310Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 686.353441ms" Feb 13 19:37:43.051255 containerd[1462]: time="2025-02-13T19:37:43.051219847Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 690.583773ms" Feb 13 19:37:43.051906 containerd[1462]: time="2025-02-13T19:37:43.051877845Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 708.371538ms" Feb 13 19:37:43.108586 kubelet[2281]: E0213 19:37:43.108504 2281 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.63:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.63:6443: connect: connection refused" interval="1.6s" Feb 13 19:37:43.162460 containerd[1462]: time="2025-02-13T19:37:43.162314645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:37:43.162785 containerd[1462]: time="2025-02-13T19:37:43.162473080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:37:43.162785 containerd[1462]: time="2025-02-13T19:37:43.162535049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:37:43.162785 containerd[1462]: time="2025-02-13T19:37:43.162658297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:37:43.165014 containerd[1462]: time="2025-02-13T19:37:43.164412226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:37:43.165014 containerd[1462]: time="2025-02-13T19:37:43.164470438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:37:43.165014 containerd[1462]: time="2025-02-13T19:37:43.164483284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:37:43.165014 containerd[1462]: time="2025-02-13T19:37:43.164554440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:37:43.168577 containerd[1462]: time="2025-02-13T19:37:43.168503659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:37:43.168642 containerd[1462]: time="2025-02-13T19:37:43.168606919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:37:43.168665 containerd[1462]: time="2025-02-13T19:37:43.168639612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:37:43.168843 containerd[1462]: time="2025-02-13T19:37:43.168789771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:37:43.184406 systemd[1]: Started cri-containerd-24acf3df8372a3fc5c80eefb9840c9fadb45ddbcbc5a6689c7390234592dc8f5.scope - libcontainer container 24acf3df8372a3fc5c80eefb9840c9fadb45ddbcbc5a6689c7390234592dc8f5. Feb 13 19:37:43.188488 systemd[1]: Started cri-containerd-44a7a764ae671fd6aa38c2b65683c8f6d6a19b5cdf815d2df4efe2f4fdd5f7f2.scope - libcontainer container 44a7a764ae671fd6aa38c2b65683c8f6d6a19b5cdf815d2df4efe2f4fdd5f7f2. Feb 13 19:37:43.192953 systemd[1]: Started cri-containerd-88fb0ca277133fcdb491838914b169754712814f402d9a159aea21c0e252b9ec.scope - libcontainer container 88fb0ca277133fcdb491838914b169754712814f402d9a159aea21c0e252b9ec. Feb 13 19:37:43.213509 kubelet[2281]: I0213 19:37:43.213462 2281 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:37:43.213856 kubelet[2281]: E0213 19:37:43.213824 2281 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.63:6443/api/v1/nodes\": dial tcp 10.0.0.63:6443: connect: connection refused" node="localhost" Feb 13 19:37:43.230752 containerd[1462]: time="2025-02-13T19:37:43.230696018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4d2c93958c651de9209a5ccc88e194fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"24acf3df8372a3fc5c80eefb9840c9fadb45ddbcbc5a6689c7390234592dc8f5\"" Feb 13 19:37:43.232104 kubelet[2281]: E0213 19:37:43.232071 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:43.233103 containerd[1462]: time="2025-02-13T19:37:43.233074510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"44a7a764ae671fd6aa38c2b65683c8f6d6a19b5cdf815d2df4efe2f4fdd5f7f2\"" Feb 13 19:37:43.234750 kubelet[2281]: E0213 19:37:43.234723 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:43.237565 containerd[1462]: time="2025-02-13T19:37:43.237516249Z" level=info msg="CreateContainer within sandbox \"44a7a764ae671fd6aa38c2b65683c8f6d6a19b5cdf815d2df4efe2f4fdd5f7f2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:37:43.237686 containerd[1462]: time="2025-02-13T19:37:43.237523013Z" level=info msg="CreateContainer within sandbox \"24acf3df8372a3fc5c80eefb9840c9fadb45ddbcbc5a6689c7390234592dc8f5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:37:43.237926 kubelet[2281]: W0213 19:37:43.237836 2281 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Feb 13 19:37:43.237926 kubelet[2281]: E0213 19:37:43.237878 2281 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.63:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.63:6443: connect: connection refused Feb 13 19:37:43.240657 containerd[1462]: time="2025-02-13T19:37:43.240500409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"88fb0ca277133fcdb491838914b169754712814f402d9a159aea21c0e252b9ec\"" Feb 13 19:37:43.242293 kubelet[2281]: E0213 19:37:43.242006 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:43.244973 containerd[1462]: time="2025-02-13T19:37:43.244943350Z" level=info msg="CreateContainer within sandbox \"88fb0ca277133fcdb491838914b169754712814f402d9a159aea21c0e252b9ec\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:37:43.265948 containerd[1462]: time="2025-02-13T19:37:43.265850045Z" level=info msg="CreateContainer within sandbox \"24acf3df8372a3fc5c80eefb9840c9fadb45ddbcbc5a6689c7390234592dc8f5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f4d11472387dbadb8926505275864a3703cf38199c0a4b92e16424a45d665460\"" Feb 13 19:37:43.266993 containerd[1462]: time="2025-02-13T19:37:43.266960024Z" level=info msg="StartContainer for \"f4d11472387dbadb8926505275864a3703cf38199c0a4b92e16424a45d665460\"" Feb 13 19:37:43.272733 containerd[1462]: time="2025-02-13T19:37:43.272686379Z" level=info msg="CreateContainer within sandbox \"44a7a764ae671fd6aa38c2b65683c8f6d6a19b5cdf815d2df4efe2f4fdd5f7f2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6ecb443549b51de3abdae9a3e1b19fec00106c814a3a517e47ed4914278fbc80\"" Feb 13 19:37:43.273270 containerd[1462]: time="2025-02-13T19:37:43.273227451Z" level=info msg="StartContainer for \"6ecb443549b51de3abdae9a3e1b19fec00106c814a3a517e47ed4914278fbc80\"" Feb 13 19:37:43.277758 containerd[1462]: time="2025-02-13T19:37:43.277713605Z" level=info msg="CreateContainer within sandbox \"88fb0ca277133fcdb491838914b169754712814f402d9a159aea21c0e252b9ec\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"eda69b39b283698fa803d05d1148be7883361cf08a67a4e371b56fa4bcfd9a31\"" Feb 13 19:37:43.278429 containerd[1462]: time="2025-02-13T19:37:43.278219690Z" level=info msg="StartContainer for \"eda69b39b283698fa803d05d1148be7883361cf08a67a4e371b56fa4bcfd9a31\"" Feb 13 19:37:43.302493 systemd[1]: Started cri-containerd-f4d11472387dbadb8926505275864a3703cf38199c0a4b92e16424a45d665460.scope - libcontainer container f4d11472387dbadb8926505275864a3703cf38199c0a4b92e16424a45d665460. Feb 13 19:37:43.306736 systemd[1]: Started cri-containerd-6ecb443549b51de3abdae9a3e1b19fec00106c814a3a517e47ed4914278fbc80.scope - libcontainer container 6ecb443549b51de3abdae9a3e1b19fec00106c814a3a517e47ed4914278fbc80. Feb 13 19:37:43.321398 systemd[1]: Started cri-containerd-eda69b39b283698fa803d05d1148be7883361cf08a67a4e371b56fa4bcfd9a31.scope - libcontainer container eda69b39b283698fa803d05d1148be7883361cf08a67a4e371b56fa4bcfd9a31. 
Feb 13 19:37:43.483365 containerd[1462]: time="2025-02-13T19:37:43.483287293Z" level=info msg="StartContainer for \"f4d11472387dbadb8926505275864a3703cf38199c0a4b92e16424a45d665460\" returns successfully" Feb 13 19:37:43.483365 containerd[1462]: time="2025-02-13T19:37:43.483353110Z" level=info msg="StartContainer for \"eda69b39b283698fa803d05d1148be7883361cf08a67a4e371b56fa4bcfd9a31\" returns successfully" Feb 13 19:37:43.483869 containerd[1462]: time="2025-02-13T19:37:43.483323122Z" level=info msg="StartContainer for \"6ecb443549b51de3abdae9a3e1b19fec00106c814a3a517e47ed4914278fbc80\" returns successfully" Feb 13 19:37:43.735341 kubelet[2281]: E0213 19:37:43.735273 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:43.736207 kubelet[2281]: E0213 19:37:43.735905 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:43.737293 kubelet[2281]: E0213 19:37:43.737281 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:44.742253 kubelet[2281]: E0213 19:37:44.740115 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:44.815025 kubelet[2281]: I0213 19:37:44.814969 2281 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:37:44.866348 kubelet[2281]: E0213 19:37:44.866309 2281 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 19:37:45.056798 kubelet[2281]: I0213 19:37:45.056026 2281 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 19:37:45.063621 kubelet[2281]: E0213 19:37:45.063583 2281 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:37:45.164482 kubelet[2281]: E0213 19:37:45.164438 2281 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:37:45.265490 kubelet[2281]: E0213 19:37:45.265452 2281 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:37:45.366149 kubelet[2281]: E0213 19:37:45.366027 2281 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:37:45.697366 kubelet[2281]: I0213 19:37:45.697324 2281 apiserver.go:52] "Watching apiserver" Feb 13 19:37:45.704645 kubelet[2281]: I0213 19:37:45.704609 2281 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:37:45.743162 kubelet[2281]: E0213 19:37:45.743120 2281 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 13 19:37:45.743584 kubelet[2281]: E0213 19:37:45.743547 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:45.983339 kubelet[2281]: E0213 
19:37:45.983152 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:46.738956 kubelet[2281]: E0213 19:37:46.738893 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:47.167369 systemd[1]: Reloading requested from client PID 2561 ('systemctl') (unit session-7.scope)... Feb 13 19:37:47.167389 systemd[1]: Reloading... Feb 13 19:37:47.253311 zram_generator::config[2600]: No configuration found. Feb 13 19:37:47.365314 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:37:47.455962 systemd[1]: Reloading finished in 288 ms. Feb 13 19:37:47.500016 kubelet[2281]: I0213 19:37:47.499975 2281 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:37:47.500182 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:37:47.526750 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:37:47.527057 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:37:47.537458 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:37:47.681705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:37:47.686917 (kubelet)[2645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:37:47.737313 kubelet[2645]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:37:47.737313 kubelet[2645]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:37:47.737313 kubelet[2645]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:37:47.737313 kubelet[2645]: I0213 19:37:47.737285 2645 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:37:47.741980 kubelet[2645]: I0213 19:37:47.741953 2645 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:37:47.741980 kubelet[2645]: I0213 19:37:47.741972 2645 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:37:47.742140 kubelet[2645]: I0213 19:37:47.742111 2645 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:37:47.743317 kubelet[2645]: I0213 19:37:47.743288 2645 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 13 19:37:47.744797 kubelet[2645]: I0213 19:37:47.744378 2645 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:37:47.755972 kubelet[2645]: I0213 19:37:47.755937 2645 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:37:47.756219 kubelet[2645]: I0213 19:37:47.756170 2645 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:37:47.756440 kubelet[2645]: I0213 19:37:47.756217 2645 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:37:47.756524 kubelet[2645]: I0213 19:37:47.756463 2645 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:37:47.756524 kubelet[2645]: I0213 19:37:47.756473 2645 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:37:47.756524 kubelet[2645]: I0213 19:37:47.756518 2645 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:37:47.756626 kubelet[2645]: I0213 19:37:47.756615 2645 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:37:47.756651 kubelet[2645]: I0213 19:37:47.756629 2645 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:37:47.756651 kubelet[2645]: I0213 19:37:47.756650 2645 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:37:47.756707 kubelet[2645]: I0213 19:37:47.756668 2645 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:37:47.760275 kubelet[2645]: I0213 19:37:47.757504 2645 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:37:47.760275 kubelet[2645]: I0213 19:37:47.757717 2645 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:37:47.760275 kubelet[2645]: I0213 19:37:47.758076 2645 server.go:1264] "Started kubelet" Feb 13 19:37:47.760275 kubelet[2645]: I0213 
19:37:47.758802 2645 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:37:47.760275 kubelet[2645]: I0213 19:37:47.758803 2645 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:37:47.760275 kubelet[2645]: I0213 19:37:47.759104 2645 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:37:47.760275 kubelet[2645]: I0213 19:37:47.760132 2645 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:37:47.761405 kubelet[2645]: I0213 19:37:47.761390 2645 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:37:47.764019 kubelet[2645]: E0213 19:37:47.763973 2645 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:37:47.764075 kubelet[2645]: I0213 19:37:47.764039 2645 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:37:47.766300 kubelet[2645]: I0213 19:37:47.766268 2645 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:37:47.766527 kubelet[2645]: I0213 19:37:47.766501 2645 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:37:47.766999 kubelet[2645]: E0213 19:37:47.766968 2645 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:37:47.774062 kubelet[2645]: I0213 19:37:47.774019 2645 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:37:47.774062 kubelet[2645]: I0213 19:37:47.774045 2645 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:37:47.774234 kubelet[2645]: I0213 19:37:47.774128 2645 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:37:47.774535 kubelet[2645]: I0213 19:37:47.774352 2645 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:37:47.775883 kubelet[2645]: I0213 19:37:47.775852 2645 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:37:47.775922 kubelet[2645]: I0213 19:37:47.775893 2645 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:37:47.775922 kubelet[2645]: I0213 19:37:47.775911 2645 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:37:47.775973 kubelet[2645]: E0213 19:37:47.775953 2645 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:37:47.808111 kubelet[2645]: I0213 19:37:47.808080 2645 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:37:47.808111 kubelet[2645]: I0213 19:37:47.808097 2645 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:37:47.808111 kubelet[2645]: I0213 19:37:47.808116 2645 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:37:47.808325 kubelet[2645]: I0213 19:37:47.808307 2645 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:37:47.808362 kubelet[2645]: I0213 19:37:47.808321 2645 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:37:47.808362 kubelet[2645]: I0213 19:37:47.808339 2645 policy_none.go:49] "None policy: Start" Feb 13 19:37:47.809212 kubelet[2645]: I0213 19:37:47.808971 2645 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:37:47.809212 kubelet[2645]: I0213 19:37:47.808991 2645 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:37:47.809212 kubelet[2645]: I0213 19:37:47.809134 2645 state_mem.go:75] "Updated machine memory state" Feb 13 19:37:47.813454 kubelet[2645]: I0213 19:37:47.813432 2645 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:37:47.813717 kubelet[2645]: I0213 19:37:47.813604 2645 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:37:47.813944 kubelet[2645]: I0213 19:37:47.813870 2645 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:37:47.871253 kubelet[2645]: I0213 19:37:47.871196 2645 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:37:47.876410 kubelet[2645]: I0213 19:37:47.876367 2645 topology_manager.go:215] "Topology Admit Handler" podUID="4d2c93958c651de9209a5ccc88e194fe" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 19:37:47.876474 kubelet[2645]: I0213 19:37:47.876456 2645 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 19:37:47.876541 kubelet[2645]: I0213 19:37:47.876521 2645 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 19:37:47.910112 kubelet[2645]: E0213 19:37:47.910057 2645 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:37:47.911183 kubelet[2645]: I0213 19:37:47.911155 2645 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 19:37:47.911257 kubelet[2645]: I0213 19:37:47.911231 2645 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 19:37:47.967451 kubelet[2645]: I0213 19:37:47.967400 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:37:47.967451 kubelet[2645]: I0213 19:37:47.967441 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:37:47.967451 kubelet[2645]: I0213 19:37:47.967461 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4d2c93958c651de9209a5ccc88e194fe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4d2c93958c651de9209a5ccc88e194fe\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:37:47.967642 kubelet[2645]: I0213 19:37:47.967478 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4d2c93958c651de9209a5ccc88e194fe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4d2c93958c651de9209a5ccc88e194fe\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:37:47.967642 kubelet[2645]: I0213 19:37:47.967493 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:37:47.967642 kubelet[2645]: I0213 19:37:47.967507 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:37:47.967642 kubelet[2645]: I0213 19:37:47.967522 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4d2c93958c651de9209a5ccc88e194fe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4d2c93958c651de9209a5ccc88e194fe\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:37:47.967642 kubelet[2645]: I0213 19:37:47.967536 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:37:47.967757 kubelet[2645]: I0213 19:37:47.967549 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:37:48.211537 kubelet[2645]: E0213 19:37:48.211470 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:48.211798 kubelet[2645]: E0213 19:37:48.211654 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:48.211798 kubelet[2645]: E0213 19:37:48.211758 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:48.246443 sudo[2680]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 19:37:48.246799 sudo[2680]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 19:37:48.732512 sudo[2680]: pam_unix(sudo:session): session closed for user root Feb 13 19:37:48.757616 kubelet[2645]: I0213 19:37:48.757569 2645 apiserver.go:52] "Watching apiserver" Feb 13 19:37:48.767499 kubelet[2645]: I0213 19:37:48.767328 2645 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:37:48.790881 kubelet[2645]: E0213 19:37:48.790377 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:48.959918 kubelet[2645]: E0213 19:37:48.959075 2645 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:37:48.959918 kubelet[2645]: E0213 19:37:48.959103 2645 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:37:48.959918 kubelet[2645]: E0213 19:37:48.959582 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:48.959918 kubelet[2645]: E0213 19:37:48.959668 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:49.006424 kubelet[2645]: I0213 19:37:49.006262 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.006230114 podStartE2EDuration="2.006230114s" podCreationTimestamp="2025-02-13 19:37:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:37:48.994549004 +0000 UTC m=+1.303120416" watchObservedRunningTime="2025-02-13 19:37:49.006230114 +0000 UTC m=+1.314801526" Feb 13 19:37:49.012576 kubelet[2645]: I0213 19:37:49.012429 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.012414257 podStartE2EDuration="2.012414257s" podCreationTimestamp="2025-02-13 19:37:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:37:49.006722424 +0000 UTC m=+1.315293846" watchObservedRunningTime="2025-02-13 19:37:49.012414257 +0000 UTC m=+1.320985669" Feb 13 19:37:49.019812 kubelet[2645]: I0213 19:37:49.019754 2645 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.019737355 podStartE2EDuration="4.019737355s" podCreationTimestamp="2025-02-13 19:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:37:49.01311701 +0000 UTC m=+1.321688412" watchObservedRunningTime="2025-02-13 19:37:49.019737355 +0000 UTC m=+1.328308767" Feb 13 19:37:49.792495 kubelet[2645]: E0213 19:37:49.792458 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:49.792903 kubelet[2645]: E0213 19:37:49.792458 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:50.052496 sudo[1648]: pam_unix(sudo:session): session closed for user root Feb 13 19:37:50.054328 sshd[1647]: Connection closed by 10.0.0.1 port 33470 Feb 13 19:37:50.054774 sshd-session[1645]: pam_unix(sshd:session): session closed for user core Feb 13 19:37:50.058543 systemd[1]: sshd@7-10.0.0.63:22-10.0.0.1:33470.service: Deactivated successfully. Feb 13 19:37:50.060274 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:37:50.060458 systemd[1]: session-7.scope: Consumed 4.571s CPU time, 188.7M memory peak, 0B memory swap peak. Feb 13 19:37:50.060957 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:37:50.061880 systemd-logind[1445]: Removed session 7. Feb 13 19:37:50.793779 kubelet[2645]: E0213 19:37:50.793743 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:50.976169 kubelet[2645]: E0213 19:37:50.976108 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:56.445174 update_engine[1447]: I20250213 19:37:56.445067 1447 update_attempter.cc:509] Updating boot flags... 
Feb 13 19:37:56.478436 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2728) Feb 13 19:37:56.527277 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2727) Feb 13 19:37:57.094023 kubelet[2645]: E0213 19:37:57.093983 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:37:57.802903 kubelet[2645]: E0213 19:37:57.802864 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:00.083575 kubelet[2645]: E0213 19:38:00.083542 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:00.805523 kubelet[2645]: E0213 19:38:00.805490 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:00.979960 kubelet[2645]: E0213 19:38:00.979913 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:01.618378 kubelet[2645]: I0213 19:38:01.618331 2645 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:38:01.618816 containerd[1462]: time="2025-02-13T19:38:01.618684630Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:38:01.619055 kubelet[2645]: I0213 19:38:01.618998 2645 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:38:02.498233 kubelet[2645]: I0213 19:38:02.497520 2645 topology_manager.go:215] "Topology Admit Handler" podUID="f1152950-0b4b-4dd1-b677-fe912ec3424a" podNamespace="kube-system" podName="kube-proxy-hvxl5" Feb 13 19:38:02.505922 systemd[1]: Created slice kubepods-besteffort-podf1152950_0b4b_4dd1_b677_fe912ec3424a.slice - libcontainer container kubepods-besteffort-podf1152950_0b4b_4dd1_b677_fe912ec3424a.slice. Feb 13 19:38:02.507811 kubelet[2645]: I0213 19:38:02.507777 2645 topology_manager.go:215] "Topology Admit Handler" podUID="9e846631-8824-4c0e-9101-34901fd83c23" podNamespace="kube-system" podName="cilium-wpvx8" Feb 13 19:38:02.521988 systemd[1]: Created slice kubepods-burstable-pod9e846631_8824_4c0e_9101_34901fd83c23.slice - libcontainer container kubepods-burstable-pod9e846631_8824_4c0e_9101_34901fd83c23.slice. 
Feb 13 19:38:02.563371 kubelet[2645]: I0213 19:38:02.563324 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1152950-0b4b-4dd1-b677-fe912ec3424a-xtables-lock\") pod \"kube-proxy-hvxl5\" (UID: \"f1152950-0b4b-4dd1-b677-fe912ec3424a\") " pod="kube-system/kube-proxy-hvxl5" Feb 13 19:38:02.563371 kubelet[2645]: I0213 19:38:02.563365 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1152950-0b4b-4dd1-b677-fe912ec3424a-lib-modules\") pod \"kube-proxy-hvxl5\" (UID: \"f1152950-0b4b-4dd1-b677-fe912ec3424a\") " pod="kube-system/kube-proxy-hvxl5" Feb 13 19:38:02.563524 kubelet[2645]: I0213 19:38:02.563386 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9e846631-8824-4c0e-9101-34901fd83c23-hubble-tls\") pod \"cilium-wpvx8\" (UID: \"9e846631-8824-4c0e-9101-34901fd83c23\") " pod="kube-system/cilium-wpvx8" Feb 13 19:38:02.563524 kubelet[2645]: I0213 19:38:02.563444 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dflz9\" (UniqueName: \"kubernetes.io/projected/9e846631-8824-4c0e-9101-34901fd83c23-kube-api-access-dflz9\") pod \"cilium-wpvx8\" (UID: \"9e846631-8824-4c0e-9101-34901fd83c23\") " pod="kube-system/cilium-wpvx8" Feb 13 19:38:02.563524 kubelet[2645]: I0213 19:38:02.563461 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fwvn\" (UniqueName: \"kubernetes.io/projected/f1152950-0b4b-4dd1-b677-fe912ec3424a-kube-api-access-6fwvn\") pod \"kube-proxy-hvxl5\" (UID: \"f1152950-0b4b-4dd1-b677-fe912ec3424a\") " pod="kube-system/kube-proxy-hvxl5" Feb 13 19:38:02.563524 kubelet[2645]: I0213 19:38:02.563476 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-hostproc\") pod \"cilium-wpvx8\" (UID: \"9e846631-8824-4c0e-9101-34901fd83c23\") " pod="kube-system/cilium-wpvx8" Feb 13 19:38:02.563524 kubelet[2645]: I0213 19:38:02.563491 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-cni-path\") pod \"cilium-wpvx8\" (UID: \"9e846631-8824-4c0e-9101-34901fd83c23\") " pod="kube-system/cilium-wpvx8" Feb 13 19:38:02.563638 kubelet[2645]: I0213 19:38:02.563544 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9e846631-8824-4c0e-9101-34901fd83c23-clustermesh-secrets\") pod \"cilium-wpvx8\" (UID: \"9e846631-8824-4c0e-9101-34901fd83c23\") " pod="kube-system/cilium-wpvx8" Feb 13 19:38:02.563638 kubelet[2645]: I0213 19:38:02.563589 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-cilium-run\") pod \"cilium-wpvx8\" (UID: \"9e846631-8824-4c0e-9101-34901fd83c23\") " pod="kube-system/cilium-wpvx8" Feb 13 19:38:02.563638 kubelet[2645]: I0213 19:38:02.563616 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-host-proc-sys-net\") pod \"cilium-wpvx8\" (UID: \"9e846631-8824-4c0e-9101-34901fd83c23\") " pod="kube-system/cilium-wpvx8" Feb 13 19:38:02.563638 kubelet[2645]: I0213 19:38:02.563633 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-cilium-cgroup\") pod \"cilium-wpvx8\" (UID: \"9e846631-8824-4c0e-9101-34901fd83c23\") " pod="kube-system/cilium-wpvx8" Feb 13 19:38:02.563728 kubelet[2645]: I0213 19:38:02.563655 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-xtables-lock\") pod \"cilium-wpvx8\" (UID: \"9e846631-8824-4c0e-9101-34901fd83c23\") " pod="kube-system/cilium-wpvx8" Feb 13 19:38:02.563728 kubelet[2645]: I0213 19:38:02.563673 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-host-proc-sys-kernel\") pod \"cilium-wpvx8\" (UID: \"9e846631-8824-4c0e-9101-34901fd83c23\") " pod="kube-system/cilium-wpvx8" Feb 13 19:38:02.563728 kubelet[2645]: I0213 19:38:02.563689 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-bpf-maps\") pod \"cilium-wpvx8\" (UID: \"9e846631-8824-4c0e-9101-34901fd83c23\") " pod="kube-system/cilium-wpvx8" Feb 13 19:38:02.563728 kubelet[2645]: I0213 19:38:02.563708 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-etc-cni-netd\") pod \"cilium-wpvx8\" (UID: \"9e846631-8824-4c0e-9101-34901fd83c23\") " pod="kube-system/cilium-wpvx8" Feb 13 19:38:02.563814 kubelet[2645]: I0213 19:38:02.563742 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f1152950-0b4b-4dd1-b677-fe912ec3424a-kube-proxy\") pod \"kube-proxy-hvxl5\" (UID: \"f1152950-0b4b-4dd1-b677-fe912ec3424a\") " pod="kube-system/kube-proxy-hvxl5" Feb 13 19:38:02.563814 kubelet[2645]: I0213 19:38:02.563755 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-lib-modules\") pod \"cilium-wpvx8\" (UID: \"9e846631-8824-4c0e-9101-34901fd83c23\") " pod="kube-system/cilium-wpvx8" Feb 13 19:38:02.563814 kubelet[2645]: I0213 19:38:02.563770 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e846631-8824-4c0e-9101-34901fd83c23-cilium-config-path\") pod \"cilium-wpvx8\" (UID: \"9e846631-8824-4c0e-9101-34901fd83c23\") " pod="kube-system/cilium-wpvx8" Feb 13 19:38:02.601619 kubelet[2645]: I0213 19:38:02.601571 2645 topology_manager.go:215] "Topology Admit Handler" podUID="c826f83e-a16b-4534-b0cd-145cf8365f0d" podNamespace="kube-system" podName="cilium-operator-599987898-7gx58" Feb 13 19:38:02.609143 systemd[1]: Created slice 
kubepods-besteffort-podc826f83e_a16b_4534_b0cd_145cf8365f0d.slice - libcontainer container kubepods-besteffort-podc826f83e_a16b_4534_b0cd_145cf8365f0d.slice. Feb 13 19:38:02.665097 kubelet[2645]: I0213 19:38:02.664614 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp9tr\" (UniqueName: \"kubernetes.io/projected/c826f83e-a16b-4534-b0cd-145cf8365f0d-kube-api-access-vp9tr\") pod \"cilium-operator-599987898-7gx58\" (UID: \"c826f83e-a16b-4534-b0cd-145cf8365f0d\") " pod="kube-system/cilium-operator-599987898-7gx58" Feb 13 19:38:02.665097 kubelet[2645]: I0213 19:38:02.664680 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c826f83e-a16b-4534-b0cd-145cf8365f0d-cilium-config-path\") pod \"cilium-operator-599987898-7gx58\" (UID: \"c826f83e-a16b-4534-b0cd-145cf8365f0d\") " pod="kube-system/cilium-operator-599987898-7gx58" Feb 13 19:38:02.818326 kubelet[2645]: E0213 19:38:02.818209 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:02.818666 containerd[1462]: time="2025-02-13T19:38:02.818631527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hvxl5,Uid:f1152950-0b4b-4dd1-b677-fe912ec3424a,Namespace:kube-system,Attempt:0,}" Feb 13 19:38:02.824875 kubelet[2645]: E0213 19:38:02.824820 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:02.825427 containerd[1462]: time="2025-02-13T19:38:02.825367059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wpvx8,Uid:9e846631-8824-4c0e-9101-34901fd83c23,Namespace:kube-system,Attempt:0,}" Feb 13 19:38:02.851989 containerd[1462]: time="2025-02-13T19:38:02.851892498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:38:02.851989 containerd[1462]: time="2025-02-13T19:38:02.851943013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:38:02.851989 containerd[1462]: time="2025-02-13T19:38:02.851954606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:38:02.852163 containerd[1462]: time="2025-02-13T19:38:02.852029717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:38:02.854286 containerd[1462]: time="2025-02-13T19:38:02.853470160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:38:02.854286 containerd[1462]: time="2025-02-13T19:38:02.854087166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:38:02.854286 containerd[1462]: time="2025-02-13T19:38:02.854099950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:38:02.854286 containerd[1462]: time="2025-02-13T19:38:02.854175583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:38:02.872386 systemd[1]: Started cri-containerd-43567760de074c7fe32140602191ed623b2f8e69d7cb29307f1b4a23658d5de0.scope - libcontainer container 43567760de074c7fe32140602191ed623b2f8e69d7cb29307f1b4a23658d5de0. Feb 13 19:38:02.875123 systemd[1]: Started cri-containerd-30e41c93b441e9fde39cbf8c68b5a9b930f7900af8960dd7bc00d42b61ec531f.scope - libcontainer container 30e41c93b441e9fde39cbf8c68b5a9b930f7900af8960dd7bc00d42b61ec531f. Feb 13 19:38:02.897438 containerd[1462]: time="2025-02-13T19:38:02.897366614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hvxl5,Uid:f1152950-0b4b-4dd1-b677-fe912ec3424a,Namespace:kube-system,Attempt:0,} returns sandbox id \"43567760de074c7fe32140602191ed623b2f8e69d7cb29307f1b4a23658d5de0\"" Feb 13 19:38:02.898386 kubelet[2645]: E0213 19:38:02.898098 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:02.900159 containerd[1462]: time="2025-02-13T19:38:02.899985674Z" level=info msg="CreateContainer within sandbox \"43567760de074c7fe32140602191ed623b2f8e69d7cb29307f1b4a23658d5de0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:38:02.901381 containerd[1462]: time="2025-02-13T19:38:02.901361385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wpvx8,Uid:9e846631-8824-4c0e-9101-34901fd83c23,Namespace:kube-system,Attempt:0,} returns sandbox id \"30e41c93b441e9fde39cbf8c68b5a9b930f7900af8960dd7bc00d42b61ec531f\"" Feb 13 19:38:02.902827 kubelet[2645]: E0213 19:38:02.902798 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:02.904102 containerd[1462]: time="2025-02-13T19:38:02.904032514Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:38:02.911611 kubelet[2645]: E0213 19:38:02.911579 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:02.911921 containerd[1462]: time="2025-02-13T19:38:02.911888544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-7gx58,Uid:c826f83e-a16b-4534-b0cd-145cf8365f0d,Namespace:kube-system,Attempt:0,}" Feb 13 19:38:02.992334 containerd[1462]: time="2025-02-13T19:38:02.992283709Z" level=info msg="CreateContainer within sandbox \"43567760de074c7fe32140602191ed623b2f8e69d7cb29307f1b4a23658d5de0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"00feb1d625a59b4bc7f58117f674586aa086f60dbf1afad67866479ad23bf32d\"" Feb 13 19:38:02.992900 containerd[1462]: time="2025-02-13T19:38:02.992875888Z" level=info msg="StartContainer for \"00feb1d625a59b4bc7f58117f674586aa086f60dbf1afad67866479ad23bf32d\"" Feb 13 19:38:03.009333 containerd[1462]: time="2025-02-13T19:38:03.009224603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:38:03.009333 containerd[1462]: time="2025-02-13T19:38:03.009296519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:38:03.009333 containerd[1462]: time="2025-02-13T19:38:03.009309534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:38:03.009581 containerd[1462]: time="2025-02-13T19:38:03.009396668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:38:03.021478 systemd[1]: Started cri-containerd-00feb1d625a59b4bc7f58117f674586aa086f60dbf1afad67866479ad23bf32d.scope - libcontainer container 00feb1d625a59b4bc7f58117f674586aa086f60dbf1afad67866479ad23bf32d. Feb 13 19:38:03.025712 systemd[1]: Started cri-containerd-ea4fe560f86a82d266c0abea3a0346194ac869084854a66549fc6a4c3e9918bd.scope - libcontainer container ea4fe560f86a82d266c0abea3a0346194ac869084854a66549fc6a4c3e9918bd. Feb 13 19:38:03.060794 containerd[1462]: time="2025-02-13T19:38:03.060752824Z" level=info msg="StartContainer for \"00feb1d625a59b4bc7f58117f674586aa086f60dbf1afad67866479ad23bf32d\" returns successfully" Feb 13 19:38:03.065341 containerd[1462]: time="2025-02-13T19:38:03.065304443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-7gx58,Uid:c826f83e-a16b-4534-b0cd-145cf8365f0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea4fe560f86a82d266c0abea3a0346194ac869084854a66549fc6a4c3e9918bd\"" Feb 13 19:38:03.065983 kubelet[2645]: E0213 19:38:03.065955 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:03.812505 kubelet[2645]: E0213 19:38:03.812467 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:03.822072 kubelet[2645]: I0213 19:38:03.821984 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hvxl5" podStartSLOduration=1.821963158 podStartE2EDuration="1.821963158s" podCreationTimestamp="2025-02-13 19:38:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:38:03.821677368 +0000 UTC m=+16.130248780" watchObservedRunningTime="2025-02-13 19:38:03.821963158 +0000 UTC m=+16.130534560" Feb 13 19:38:12.751417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1897775455.mount: Deactivated successfully. 
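[Annotation] The pod_startup_latency_tracker entry above reports podStartSLOduration equal to podStartE2EDuration because both pull timestamps are the zero value (0001-01-01): no image pull contributed to kube-proxy's startup. The cilium-wpvx8 entry later in this journal exposes the general relationship — the SLO duration works out to exactly the end-to-end duration minus the image-pull window. A sketch reproducing that arithmetic from the logged fields (the subtraction rule is inferred from these numbers, not taken from kubelet's source):

// startup_latency.go — reproduces the pod_startup_latency_tracker arithmetic
// using the values logged for cilium-wpvx8; the "SLO = E2E minus pull window"
// relation is inferred from the logged numbers.
package main

import (
	"fmt"
	"time"
)

func main() {
	created := time.Date(2025, 2, 13, 19, 38, 2, 0, time.UTC)           // podCreationTimestamp
	firstPull := time.Date(2025, 2, 13, 19, 38, 2, 903569780, time.UTC) // firstStartedPulling
	lastPull := time.Date(2025, 2, 13, 19, 38, 15, 5221834, time.UTC)   // lastFinishedPulling
	observed := time.Date(2025, 2, 13, 19, 38, 19, 870005702, time.UTC) // watchObservedRunningTime

	e2e := observed.Sub(created)
	slo := e2e - lastPull.Sub(firstPull) // image pulling is excluded from the SLO duration

	fmt.Printf("podStartE2EDuration=%s podStartSLOduration=%s\n", e2e, slo)
	// Output: podStartE2EDuration=17.870005702s podStartSLOduration=5.768353648s
}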
Feb 13 19:38:15.000946 containerd[1462]: time="2025-02-13T19:38:15.000892317Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:38:15.001831 containerd[1462]: time="2025-02-13T19:38:15.001795527Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 19:38:15.003074 containerd[1462]: time="2025-02-13T19:38:15.003045839Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:38:15.004451 containerd[1462]: time="2025-02-13T19:38:15.004420747Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.100332536s" Feb 13 19:38:15.004523 containerd[1462]: time="2025-02-13T19:38:15.004455151Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 19:38:15.005394 containerd[1462]: time="2025-02-13T19:38:15.005365364Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:38:15.007929 containerd[1462]: time="2025-02-13T19:38:15.007893711Z" level=info msg="CreateContainer within sandbox \"30e41c93b441e9fde39cbf8c68b5a9b930f7900af8960dd7bc00d42b61ec531f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:38:15.024095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount388434848.mount: Deactivated successfully. Feb 13 19:38:15.025441 containerd[1462]: time="2025-02-13T19:38:15.025403597Z" level=info msg="CreateContainer within sandbox \"30e41c93b441e9fde39cbf8c68b5a9b930f7900af8960dd7bc00d42b61ec531f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cd50389cb1232cb19d967f2a54c02a88189792bcaa3a78bc810fe55bbde71e8e\"" Feb 13 19:38:15.025961 containerd[1462]: time="2025-02-13T19:38:15.025937311Z" level=info msg="StartContainer for \"cd50389cb1232cb19d967f2a54c02a88189792bcaa3a78bc810fe55bbde71e8e\"" Feb 13 19:38:15.059390 systemd[1]: Started cri-containerd-cd50389cb1232cb19d967f2a54c02a88189792bcaa3a78bc810fe55bbde71e8e.scope - libcontainer container cd50389cb1232cb19d967f2a54c02a88189792bcaa3a78bc810fe55bbde71e8e. Feb 13 19:38:15.084081 containerd[1462]: time="2025-02-13T19:38:15.084029927Z" level=info msg="StartContainer for \"cd50389cb1232cb19d967f2a54c02a88189792bcaa3a78bc810fe55bbde71e8e\" returns successfully" Feb 13 19:38:15.094177 systemd[1]: cri-containerd-cd50389cb1232cb19d967f2a54c02a88189792bcaa3a78bc810fe55bbde71e8e.scope: Deactivated successfully. 
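[Annotation] The pull above reports 166,730,503 bytes read in 12.100332536s — a quick calculation puts effective registry throughput at roughly 13 MiB/s:

// pull_throughput.go — back-of-the-envelope effective throughput for the
// cilium image pull logged above.
package main

import "fmt"

func main() {
	const bytesRead = 166730503  // "bytes read" from the stop-pulling line
	const seconds = 12.100332536 // pull duration reported by containerd
	fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1<<20)) // ≈ 13.1 MiB/s
}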
Feb 13 19:38:15.553588 containerd[1462]: time="2025-02-13T19:38:15.553524388Z" level=info msg="shim disconnected" id=cd50389cb1232cb19d967f2a54c02a88189792bcaa3a78bc810fe55bbde71e8e namespace=k8s.io Feb 13 19:38:15.553588 containerd[1462]: time="2025-02-13T19:38:15.553573751Z" level=warning msg="cleaning up after shim disconnected" id=cd50389cb1232cb19d967f2a54c02a88189792bcaa3a78bc810fe55bbde71e8e namespace=k8s.io Feb 13 19:38:15.553588 containerd[1462]: time="2025-02-13T19:38:15.553582086Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:38:15.832316 kubelet[2645]: E0213 19:38:15.832175 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:15.835050 containerd[1462]: time="2025-02-13T19:38:15.834988495Z" level=info msg="CreateContainer within sandbox \"30e41c93b441e9fde39cbf8c68b5a9b930f7900af8960dd7bc00d42b61ec531f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:38:15.851221 containerd[1462]: time="2025-02-13T19:38:15.851170874Z" level=info msg="CreateContainer within sandbox \"30e41c93b441e9fde39cbf8c68b5a9b930f7900af8960dd7bc00d42b61ec531f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"62581b5ff9d520581c5f364c77436792f22cb029862def301fc8afb758bfa3c2\"" Feb 13 19:38:15.851671 containerd[1462]: time="2025-02-13T19:38:15.851628835Z" level=info msg="StartContainer for \"62581b5ff9d520581c5f364c77436792f22cb029862def301fc8afb758bfa3c2\"" Feb 13 19:38:15.878376 systemd[1]: Started cri-containerd-62581b5ff9d520581c5f364c77436792f22cb029862def301fc8afb758bfa3c2.scope - libcontainer container 62581b5ff9d520581c5f364c77436792f22cb029862def301fc8afb758bfa3c2. Feb 13 19:38:15.903940 containerd[1462]: time="2025-02-13T19:38:15.903898145Z" level=info msg="StartContainer for \"62581b5ff9d520581c5f364c77436792f22cb029862def301fc8afb758bfa3c2\" returns successfully" Feb 13 19:38:15.915919 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:38:15.916321 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:38:15.916391 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:38:15.923541 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:38:15.923788 systemd[1]: cri-containerd-62581b5ff9d520581c5f364c77436792f22cb029862def301fc8afb758bfa3c2.scope: Deactivated successfully. Feb 13 19:38:15.938943 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:38:15.942911 containerd[1462]: time="2025-02-13T19:38:15.942848582Z" level=info msg="shim disconnected" id=62581b5ff9d520581c5f364c77436792f22cb029862def301fc8afb758bfa3c2 namespace=k8s.io Feb 13 19:38:15.943011 containerd[1462]: time="2025-02-13T19:38:15.942914437Z" level=warning msg="cleaning up after shim disconnected" id=62581b5ff9d520581c5f364c77436792f22cb029862def301fc8afb758bfa3c2 namespace=k8s.io Feb 13 19:38:15.943011 containerd[1462]: time="2025-02-13T19:38:15.942924245Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:38:16.020720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd50389cb1232cb19d967f2a54c02a88189792bcaa3a78bc810fe55bbde71e8e-rootfs.mount: Deactivated successfully. 
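[Annotation] The cycle visible above — scope started, StartContainer returns, scope deactivated, shim disconnected, rootfs unmounted — is one init container of the cilium pod (mount-cgroup, then apply-sysctl-overwrites) running to completion; mount-bpf-fs and clean-cilium-state follow the same pattern below before the long-running cilium-agent container starts. A sketch of observing such a task exit with the containerd Go client; the socket path and k8s.io namespace are containerd's defaults under Kubernetes, the ID is the mount-cgroup container from the log, and the task must still exist for the calls to succeed:

// task_wait.go — sketch of watching a container task run to completion with
// the containerd Go client, mirroring the scope-deactivated/shim-disconnected
// pairs in the journal above.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	container, err := client.LoadContainer(ctx,
		"cd50389cb1232cb19d967f2a54c02a88189792bcaa3a78bc810fe55bbde71e8e")
	if err != nil {
		log.Fatal(err)
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	statusC, err := task.Wait(ctx) // resolves when the shim reports task exit
	if err != nil {
		log.Fatal(err)
	}
	status := <-statusC
	code, exitedAt, err := status.Result()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("exit code %d at %s\n", code, exitedAt)
}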
Feb 13 19:38:16.835936 kubelet[2645]: E0213 19:38:16.835883 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:16.838550 containerd[1462]: time="2025-02-13T19:38:16.838493665Z" level=info msg="CreateContainer within sandbox \"30e41c93b441e9fde39cbf8c68b5a9b930f7900af8960dd7bc00d42b61ec531f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:38:16.859330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount329970717.mount: Deactivated successfully. Feb 13 19:38:17.189404 containerd[1462]: time="2025-02-13T19:38:17.189356034Z" level=info msg="CreateContainer within sandbox \"30e41c93b441e9fde39cbf8c68b5a9b930f7900af8960dd7bc00d42b61ec531f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b83d8aca1d02e437972b9728898ef8b97988679a60e98f3d255c4f9bc23558d6\"" Feb 13 19:38:17.190117 containerd[1462]: time="2025-02-13T19:38:17.189986078Z" level=info msg="StartContainer for \"b83d8aca1d02e437972b9728898ef8b97988679a60e98f3d255c4f9bc23558d6\"" Feb 13 19:38:17.221389 systemd[1]: Started cri-containerd-b83d8aca1d02e437972b9728898ef8b97988679a60e98f3d255c4f9bc23558d6.scope - libcontainer container b83d8aca1d02e437972b9728898ef8b97988679a60e98f3d255c4f9bc23558d6. Feb 13 19:38:17.254844 systemd[1]: cri-containerd-b83d8aca1d02e437972b9728898ef8b97988679a60e98f3d255c4f9bc23558d6.scope: Deactivated successfully. Feb 13 19:38:17.258796 containerd[1462]: time="2025-02-13T19:38:17.258702619Z" level=info msg="StartContainer for \"b83d8aca1d02e437972b9728898ef8b97988679a60e98f3d255c4f9bc23558d6\" returns successfully" Feb 13 19:38:17.277559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b83d8aca1d02e437972b9728898ef8b97988679a60e98f3d255c4f9bc23558d6-rootfs.mount: Deactivated successfully. 
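[Annotation] The mount-bpf-fs init container created above exists to make sure a BPF filesystem is mounted at /sys/fs/bpf, so cilium's pinned eBPF maps survive agent restarts. A sketch of the equivalent mount call, assuming root privileges; illustrative rather than cilium's actual implementation:

// mount_bpffs.go — sketch of what the mount-bpf-fs init step achieves:
// a BPF filesystem mounted at /sys/fs/bpf. Requires root.
package main

import (
	"errors"
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, "")
	if errors.Is(err, unix.EBUSY) {
		log.Println("bpffs already mounted at /sys/fs/bpf") // typical on restart
		return
	}
	if err != nil {
		log.Fatal(err)
	}
	log.Println("mounted bpffs at /sys/fs/bpf")
}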
Feb 13 19:38:17.302878 containerd[1462]: time="2025-02-13T19:38:17.302635287Z" level=info msg="shim disconnected" id=b83d8aca1d02e437972b9728898ef8b97988679a60e98f3d255c4f9bc23558d6 namespace=k8s.io Feb 13 19:38:17.302878 containerd[1462]: time="2025-02-13T19:38:17.302693036Z" level=warning msg="cleaning up after shim disconnected" id=b83d8aca1d02e437972b9728898ef8b97988679a60e98f3d255c4f9bc23558d6 namespace=k8s.io Feb 13 19:38:17.302878 containerd[1462]: time="2025-02-13T19:38:17.302702423Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:38:17.839056 kubelet[2645]: E0213 19:38:17.839012 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:17.840769 containerd[1462]: time="2025-02-13T19:38:17.840733082Z" level=info msg="CreateContainer within sandbox \"30e41c93b441e9fde39cbf8c68b5a9b930f7900af8960dd7bc00d42b61ec531f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:38:18.275447 containerd[1462]: time="2025-02-13T19:38:18.275386049Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:38:18.277212 containerd[1462]: time="2025-02-13T19:38:18.277158643Z" level=info msg="CreateContainer within sandbox \"30e41c93b441e9fde39cbf8c68b5a9b930f7900af8960dd7bc00d42b61ec531f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"aa2e8635ba9320dcd03bb49987ae44c641ababd102570666071272286b085785\"" Feb 13 19:38:18.277976 containerd[1462]: time="2025-02-13T19:38:18.277941555Z" level=info msg="StartContainer for \"aa2e8635ba9320dcd03bb49987ae44c641ababd102570666071272286b085785\"" Feb 13 19:38:18.278764 containerd[1462]: time="2025-02-13T19:38:18.278711113Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 19:38:18.280482 containerd[1462]: time="2025-02-13T19:38:18.280348741Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:38:18.281772 containerd[1462]: time="2025-02-13T19:38:18.281740128Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.276343054s" Feb 13 19:38:18.281772 containerd[1462]: time="2025-02-13T19:38:18.281770585Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 19:38:18.285153 containerd[1462]: time="2025-02-13T19:38:18.285115645Z" level=info msg="CreateContainer within sandbox \"ea4fe560f86a82d266c0abea3a0346194ac869084854a66549fc6a4c3e9918bd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:38:18.304173 containerd[1462]: time="2025-02-13T19:38:18.304051984Z" level=info 
msg="CreateContainer within sandbox \"ea4fe560f86a82d266c0abea3a0346194ac869084854a66549fc6a4c3e9918bd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c2aadffe54f9ab5bab94b88ce56ca38e58cc83a0eed8f5c8b4e50b7e9722d5e3\"" Feb 13 19:38:18.305547 containerd[1462]: time="2025-02-13T19:38:18.304815299Z" level=info msg="StartContainer for \"c2aadffe54f9ab5bab94b88ce56ca38e58cc83a0eed8f5c8b4e50b7e9722d5e3\"" Feb 13 19:38:18.312493 systemd[1]: Started cri-containerd-aa2e8635ba9320dcd03bb49987ae44c641ababd102570666071272286b085785.scope - libcontainer container aa2e8635ba9320dcd03bb49987ae44c641ababd102570666071272286b085785. Feb 13 19:38:18.340475 systemd[1]: Started cri-containerd-c2aadffe54f9ab5bab94b88ce56ca38e58cc83a0eed8f5c8b4e50b7e9722d5e3.scope - libcontainer container c2aadffe54f9ab5bab94b88ce56ca38e58cc83a0eed8f5c8b4e50b7e9722d5e3. Feb 13 19:38:18.350390 systemd[1]: cri-containerd-aa2e8635ba9320dcd03bb49987ae44c641ababd102570666071272286b085785.scope: Deactivated successfully. Feb 13 19:38:18.352875 containerd[1462]: time="2025-02-13T19:38:18.352819457Z" level=info msg="StartContainer for \"aa2e8635ba9320dcd03bb49987ae44c641ababd102570666071272286b085785\" returns successfully" Feb 13 19:38:18.377220 containerd[1462]: time="2025-02-13T19:38:18.377131073Z" level=info msg="StartContainer for \"c2aadffe54f9ab5bab94b88ce56ca38e58cc83a0eed8f5c8b4e50b7e9722d5e3\" returns successfully" Feb 13 19:38:18.578731 containerd[1462]: time="2025-02-13T19:38:18.578586865Z" level=info msg="shim disconnected" id=aa2e8635ba9320dcd03bb49987ae44c641ababd102570666071272286b085785 namespace=k8s.io Feb 13 19:38:18.578731 containerd[1462]: time="2025-02-13T19:38:18.578649052Z" level=warning msg="cleaning up after shim disconnected" id=aa2e8635ba9320dcd03bb49987ae44c641ababd102570666071272286b085785 namespace=k8s.io Feb 13 19:38:18.578731 containerd[1462]: time="2025-02-13T19:38:18.578664050Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:38:18.844436 kubelet[2645]: E0213 19:38:18.844317 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:18.846640 containerd[1462]: time="2025-02-13T19:38:18.846599381Z" level=info msg="CreateContainer within sandbox \"30e41c93b441e9fde39cbf8c68b5a9b930f7900af8960dd7bc00d42b61ec531f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:38:18.847556 kubelet[2645]: E0213 19:38:18.847530 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:18.878620 containerd[1462]: time="2025-02-13T19:38:18.878557917Z" level=info msg="CreateContainer within sandbox \"30e41c93b441e9fde39cbf8c68b5a9b930f7900af8960dd7bc00d42b61ec531f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5b85c2e3b80d561686a38db7387594ef3d9fbb4974b43c46004ca01e684a5475\"" Feb 13 19:38:18.883273 containerd[1462]: time="2025-02-13T19:38:18.881563779Z" level=info msg="StartContainer for \"5b85c2e3b80d561686a38db7387594ef3d9fbb4974b43c46004ca01e684a5475\"" Feb 13 19:38:18.890008 kubelet[2645]: I0213 19:38:18.889930 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-7gx58" podStartSLOduration=1.6746960450000001 podStartE2EDuration="16.889906787s" podCreationTimestamp="2025-02-13 19:38:02 +0000 UTC" 
firstStartedPulling="2025-02-13 19:38:03.067462 +0000 UTC m=+15.376033412" lastFinishedPulling="2025-02-13 19:38:18.282672742 +0000 UTC m=+30.591244154" observedRunningTime="2025-02-13 19:38:18.854588615 +0000 UTC m=+31.163160027" watchObservedRunningTime="2025-02-13 19:38:18.889906787 +0000 UTC m=+31.198478199" Feb 13 19:38:18.950684 systemd[1]: Started cri-containerd-5b85c2e3b80d561686a38db7387594ef3d9fbb4974b43c46004ca01e684a5475.scope - libcontainer container 5b85c2e3b80d561686a38db7387594ef3d9fbb4974b43c46004ca01e684a5475. Feb 13 19:38:18.996603 containerd[1462]: time="2025-02-13T19:38:18.996569900Z" level=info msg="StartContainer for \"5b85c2e3b80d561686a38db7387594ef3d9fbb4974b43c46004ca01e684a5475\" returns successfully" Feb 13 19:38:19.111187 systemd[1]: Started sshd@8-10.0.0.63:22-10.0.0.1:52138.service - OpenSSH per-connection server daemon (10.0.0.1:52138). Feb 13 19:38:19.183174 kubelet[2645]: I0213 19:38:19.183108 2645 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 19:38:19.209360 kubelet[2645]: I0213 19:38:19.208481 2645 topology_manager.go:215] "Topology Admit Handler" podUID="f404a0b9-b8f7-49a1-a796-60eebb0cdff9" podNamespace="kube-system" podName="coredns-7db6d8ff4d-p5vmp" Feb 13 19:38:19.211323 kubelet[2645]: I0213 19:38:19.211185 2645 topology_manager.go:215] "Topology Admit Handler" podUID="9fdc9210-eb5c-4615-966c-27c932e9099f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9nxtf" Feb 13 19:38:19.217646 sshd[3425]: Accepted publickey for core from 10.0.0.1 port 52138 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:38:19.218958 systemd[1]: Created slice kubepods-burstable-podf404a0b9_b8f7_49a1_a796_60eebb0cdff9.slice - libcontainer container kubepods-burstable-podf404a0b9_b8f7_49a1_a796_60eebb0cdff9.slice. Feb 13 19:38:19.221666 sshd-session[3425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:19.230732 systemd[1]: Created slice kubepods-burstable-pod9fdc9210_eb5c_4615_966c_27c932e9099f.slice - libcontainer container kubepods-burstable-pod9fdc9210_eb5c_4615_966c_27c932e9099f.slice. Feb 13 19:38:19.232596 systemd-logind[1445]: New session 8 of user core. Feb 13 19:38:19.239409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa2e8635ba9320dcd03bb49987ae44c641ababd102570666071272286b085785-rootfs.mount: Deactivated successfully. Feb 13 19:38:19.245419 systemd[1]: Started session-8.scope - Session 8 of User core. 
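[Annotation] The "Created slice kubepods-burstable-pod....slice" lines above show kubelet's systemd cgroup driver naming scheme: the pod's QoS class (burstable, besteffort) selects the parent slice, and the pod UID is embedded with its dashes replaced by underscores, since "-" is systemd's hierarchy separator in unit names. A sketch of the mapping as inferred from the logged names; not kubelet's actual code:

// pod_slice.go — the pod-UID → systemd slice mapping inferred from the
// "Created slice" lines in this journal.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qosClass, podUID string) string {
	// "-" is systemd's unit-name hierarchy separator, so UID dashes
	// are replaced with underscores before embedding.
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSlice("burstable", "f404a0b9-b8f7-49a1-a796-60eebb0cdff9"))
	// Output: kubepods-burstable-podf404a0b9_b8f7_49a1_a796_60eebb0cdff9.slice
}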
Feb 13 19:38:19.268871 kubelet[2645]: I0213 19:38:19.268768 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw8lc\" (UniqueName: \"kubernetes.io/projected/9fdc9210-eb5c-4615-966c-27c932e9099f-kube-api-access-kw8lc\") pod \"coredns-7db6d8ff4d-9nxtf\" (UID: \"9fdc9210-eb5c-4615-966c-27c932e9099f\") " pod="kube-system/coredns-7db6d8ff4d-9nxtf" Feb 13 19:38:19.268871 kubelet[2645]: I0213 19:38:19.268840 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f404a0b9-b8f7-49a1-a796-60eebb0cdff9-config-volume\") pod \"coredns-7db6d8ff4d-p5vmp\" (UID: \"f404a0b9-b8f7-49a1-a796-60eebb0cdff9\") " pod="kube-system/coredns-7db6d8ff4d-p5vmp" Feb 13 19:38:19.268871 kubelet[2645]: I0213 19:38:19.268872 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjh8g\" (UniqueName: \"kubernetes.io/projected/f404a0b9-b8f7-49a1-a796-60eebb0cdff9-kube-api-access-fjh8g\") pod \"coredns-7db6d8ff4d-p5vmp\" (UID: \"f404a0b9-b8f7-49a1-a796-60eebb0cdff9\") " pod="kube-system/coredns-7db6d8ff4d-p5vmp" Feb 13 19:38:19.269113 kubelet[2645]: I0213 19:38:19.268895 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9fdc9210-eb5c-4615-966c-27c932e9099f-config-volume\") pod \"coredns-7db6d8ff4d-9nxtf\" (UID: \"9fdc9210-eb5c-4615-966c-27c932e9099f\") " pod="kube-system/coredns-7db6d8ff4d-9nxtf" Feb 13 19:38:19.406590 sshd[3431]: Connection closed by 10.0.0.1 port 52138 Feb 13 19:38:19.405389 sshd-session[3425]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:19.410141 systemd[1]: sshd@8-10.0.0.63:22-10.0.0.1:52138.service: Deactivated successfully. Feb 13 19:38:19.412551 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:38:19.413386 systemd-logind[1445]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:38:19.414900 systemd-logind[1445]: Removed session 8. 
Feb 13 19:38:19.526216 kubelet[2645]: E0213 19:38:19.526165 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:19.526919 containerd[1462]: time="2025-02-13T19:38:19.526884316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-p5vmp,Uid:f404a0b9-b8f7-49a1-a796-60eebb0cdff9,Namespace:kube-system,Attempt:0,}" Feb 13 19:38:19.535690 kubelet[2645]: E0213 19:38:19.535668 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:19.536338 containerd[1462]: time="2025-02-13T19:38:19.536286994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9nxtf,Uid:9fdc9210-eb5c-4615-966c-27c932e9099f,Namespace:kube-system,Attempt:0,}" Feb 13 19:38:19.848684 kubelet[2645]: E0213 19:38:19.848640 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:19.849375 kubelet[2645]: E0213 19:38:19.849033 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:19.870088 kubelet[2645]: I0213 19:38:19.870024 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wpvx8" podStartSLOduration=5.768353648 podStartE2EDuration="17.870005702s" podCreationTimestamp="2025-02-13 19:38:02 +0000 UTC" firstStartedPulling="2025-02-13 19:38:02.90356978 +0000 UTC m=+15.212141192" lastFinishedPulling="2025-02-13 19:38:15.005221834 +0000 UTC m=+27.313793246" observedRunningTime="2025-02-13 19:38:19.869639964 +0000 UTC m=+32.178211376" watchObservedRunningTime="2025-02-13 19:38:19.870005702 +0000 UTC m=+32.178577114" Feb 13 19:38:20.850495 kubelet[2645]: E0213 19:38:20.850455 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:21.852339 kubelet[2645]: E0213 19:38:21.852290 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:22.052493 systemd-networkd[1391]: cilium_host: Link UP Feb 13 19:38:22.052680 systemd-networkd[1391]: cilium_net: Link UP Feb 13 19:38:22.052901 systemd-networkd[1391]: cilium_net: Gained carrier Feb 13 19:38:22.053125 systemd-networkd[1391]: cilium_host: Gained carrier Feb 13 19:38:22.151265 systemd-networkd[1391]: cilium_net: Gained IPv6LL Feb 13 19:38:22.154903 systemd-networkd[1391]: cilium_vxlan: Link UP Feb 13 19:38:22.154913 systemd-networkd[1391]: cilium_vxlan: Gained carrier Feb 13 19:38:22.173390 systemd-networkd[1391]: cilium_host: Gained IPv6LL Feb 13 19:38:22.362270 kernel: NET: Registered PF_ALG protocol family Feb 13 19:38:23.045500 systemd-networkd[1391]: lxc_health: Link UP Feb 13 19:38:23.061623 systemd-networkd[1391]: lxc_health: Gained carrier Feb 13 19:38:23.150437 systemd-networkd[1391]: lxc4d6d4748d828: Link UP Feb 13 19:38:23.160355 kernel: eth0: renamed from tmp9b7cc Feb 13 19:38:23.167030 systemd-networkd[1391]: lxc4d6d4748d828: Gained carrier Feb 13 19:38:23.171478 systemd-networkd[1391]: 
lxccdc08282bc68: Link UP Feb 13 19:38:23.181437 kernel: eth0: renamed from tmpb4069 Feb 13 19:38:23.184971 systemd-networkd[1391]: lxccdc08282bc68: Gained carrier Feb 13 19:38:23.798451 systemd-networkd[1391]: cilium_vxlan: Gained IPv6LL Feb 13 19:38:24.309481 systemd-networkd[1391]: lxc_health: Gained IPv6LL Feb 13 19:38:24.310515 systemd-networkd[1391]: lxc4d6d4748d828: Gained IPv6LL Feb 13 19:38:24.416192 systemd[1]: Started sshd@9-10.0.0.63:22-10.0.0.1:46644.service - OpenSSH per-connection server daemon (10.0.0.1:46644). Feb 13 19:38:24.465435 sshd[3881]: Accepted publickey for core from 10.0.0.1 port 46644 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:38:24.466851 sshd-session[3881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:24.470759 systemd-logind[1445]: New session 9 of user core. Feb 13 19:38:24.477371 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:38:24.603899 sshd[3883]: Connection closed by 10.0.0.1 port 46644 Feb 13 19:38:24.605415 sshd-session[3881]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:24.610197 systemd[1]: sshd@9-10.0.0.63:22-10.0.0.1:46644.service: Deactivated successfully. Feb 13 19:38:24.612017 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:38:24.612891 systemd-logind[1445]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:38:24.613878 systemd-logind[1445]: Removed session 9. Feb 13 19:38:24.629722 systemd-networkd[1391]: lxccdc08282bc68: Gained IPv6LL Feb 13 19:38:24.832049 kubelet[2645]: E0213 19:38:24.832016 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:26.634297 containerd[1462]: time="2025-02-13T19:38:26.634183164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:38:26.634297 containerd[1462]: time="2025-02-13T19:38:26.634265979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:38:26.634297 containerd[1462]: time="2025-02-13T19:38:26.634280416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:38:26.634895 containerd[1462]: time="2025-02-13T19:38:26.634208131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:38:26.634895 containerd[1462]: time="2025-02-13T19:38:26.634373602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:38:26.634895 containerd[1462]: time="2025-02-13T19:38:26.634527471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:38:26.634895 containerd[1462]: time="2025-02-13T19:38:26.634606419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:38:26.634895 containerd[1462]: time="2025-02-13T19:38:26.634752153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:38:26.666393 systemd[1]: Started cri-containerd-9b7cc956e2822841fbc4350b94193c40db656ccc2d97616721aeea5768b2cf9d.scope - libcontainer container 9b7cc956e2822841fbc4350b94193c40db656ccc2d97616721aeea5768b2cf9d. Feb 13 19:38:26.668381 systemd[1]: Started cri-containerd-b40690524af836195ae028f7ef0acd3fad911d69648db494bed8723edb1eb0b5.scope - libcontainer container b40690524af836195ae028f7ef0acd3fad911d69648db494bed8723edb1eb0b5. Feb 13 19:38:26.679167 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:38:26.682099 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:38:26.704190 containerd[1462]: time="2025-02-13T19:38:26.704148355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-p5vmp,Uid:f404a0b9-b8f7-49a1-a796-60eebb0cdff9,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b7cc956e2822841fbc4350b94193c40db656ccc2d97616721aeea5768b2cf9d\"" Feb 13 19:38:26.705077 kubelet[2645]: E0213 19:38:26.704811 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:26.707180 containerd[1462]: time="2025-02-13T19:38:26.707152688Z" level=info msg="CreateContainer within sandbox \"9b7cc956e2822841fbc4350b94193c40db656ccc2d97616721aeea5768b2cf9d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:38:26.710648 containerd[1462]: time="2025-02-13T19:38:26.710598610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9nxtf,Uid:9fdc9210-eb5c-4615-966c-27c932e9099f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b40690524af836195ae028f7ef0acd3fad911d69648db494bed8723edb1eb0b5\"" Feb 13 19:38:26.711346 kubelet[2645]: E0213 19:38:26.711321 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:26.714370 containerd[1462]: time="2025-02-13T19:38:26.714345518Z" level=info msg="CreateContainer within sandbox \"b40690524af836195ae028f7ef0acd3fad911d69648db494bed8723edb1eb0b5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:38:26.731226 containerd[1462]: time="2025-02-13T19:38:26.730593436Z" level=info msg="CreateContainer within sandbox \"9b7cc956e2822841fbc4350b94193c40db656ccc2d97616721aeea5768b2cf9d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1078245b1f5310a83a14f1ae7c68cc60689703a1f6ba8cf3345096871f1ed5c8\"" Feb 13 19:38:26.731802 containerd[1462]: time="2025-02-13T19:38:26.731733957Z" level=info msg="StartContainer for \"1078245b1f5310a83a14f1ae7c68cc60689703a1f6ba8cf3345096871f1ed5c8\"" Feb 13 19:38:26.748682 containerd[1462]: time="2025-02-13T19:38:26.748599615Z" level=info msg="CreateContainer within sandbox \"b40690524af836195ae028f7ef0acd3fad911d69648db494bed8723edb1eb0b5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1ed6a03468a19522b622112b7d28e332258536cf6a364fbb9e469bf07a26dd3e\"" Feb 13 19:38:26.750202 containerd[1462]: time="2025-02-13T19:38:26.749214350Z" level=info msg="StartContainer for \"1ed6a03468a19522b622112b7d28e332258536cf6a364fbb9e469bf07a26dd3e\"" Feb 13 19:38:26.762416 systemd[1]: Started 
cri-containerd-1078245b1f5310a83a14f1ae7c68cc60689703a1f6ba8cf3345096871f1ed5c8.scope - libcontainer container 1078245b1f5310a83a14f1ae7c68cc60689703a1f6ba8cf3345096871f1ed5c8. Feb 13 19:38:26.781373 systemd[1]: Started cri-containerd-1ed6a03468a19522b622112b7d28e332258536cf6a364fbb9e469bf07a26dd3e.scope - libcontainer container 1ed6a03468a19522b622112b7d28e332258536cf6a364fbb9e469bf07a26dd3e. Feb 13 19:38:26.808305 containerd[1462]: time="2025-02-13T19:38:26.808264539Z" level=info msg="StartContainer for \"1078245b1f5310a83a14f1ae7c68cc60689703a1f6ba8cf3345096871f1ed5c8\" returns successfully" Feb 13 19:38:26.813697 containerd[1462]: time="2025-02-13T19:38:26.813647631Z" level=info msg="StartContainer for \"1ed6a03468a19522b622112b7d28e332258536cf6a364fbb9e469bf07a26dd3e\" returns successfully" Feb 13 19:38:26.863041 kubelet[2645]: E0213 19:38:26.863007 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:26.867643 kubelet[2645]: E0213 19:38:26.867620 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:26.881086 kubelet[2645]: I0213 19:38:26.881019 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9nxtf" podStartSLOduration=24.881000405 podStartE2EDuration="24.881000405s" podCreationTimestamp="2025-02-13 19:38:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:38:26.878838604 +0000 UTC m=+39.187410016" watchObservedRunningTime="2025-02-13 19:38:26.881000405 +0000 UTC m=+39.189571827" Feb 13 19:38:26.893985 kubelet[2645]: I0213 19:38:26.893832 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-p5vmp" podStartSLOduration=24.893814253 podStartE2EDuration="24.893814253s" podCreationTimestamp="2025-02-13 19:38:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:38:26.892956612 +0000 UTC m=+39.201528024" watchObservedRunningTime="2025-02-13 19:38:26.893814253 +0000 UTC m=+39.202385665" Feb 13 19:38:27.868542 kubelet[2645]: E0213 19:38:27.868195 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:27.869054 kubelet[2645]: E0213 19:38:27.868625 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:28.870043 kubelet[2645]: E0213 19:38:28.870011 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:28.870043 kubelet[2645]: E0213 19:38:28.870059 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:29.616419 systemd[1]: Started sshd@10-10.0.0.63:22-10.0.0.1:57152.service - OpenSSH per-connection server daemon (10.0.0.1:57152). 
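[Annotation] Each "RunPodSandbox for &PodSandboxMetadata{...} returns sandbox id" pair above is kubelet calling containerd over the CRI gRPC API. A minimal standalone client making the same call, assuming containerd's default socket and the CRI v1 API; kubelet populates far more of the sandbox config (DNS, hostname, linux security options) than this sketch does, and the metadata values below are the coredns pod's from the log:

// run_sandbox.go — minimal CRI client issuing a RunPodSandbox call like the
// ones logged above.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "coredns-7db6d8ff4d-p5vmp", // values taken from the log
				Uid:       "f404a0b9-b8f7-49a1-a796-60eebb0cdff9",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId)
}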
Feb 13 19:38:29.676065 sshd[4077]: Accepted publickey for core from 10.0.0.1 port 57152 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:38:29.677954 sshd-session[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:29.682327 systemd-logind[1445]: New session 10 of user core. Feb 13 19:38:29.692409 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:38:29.847031 sshd[4079]: Connection closed by 10.0.0.1 port 57152 Feb 13 19:38:29.847423 sshd-session[4077]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:29.851412 systemd[1]: sshd@10-10.0.0.63:22-10.0.0.1:57152.service: Deactivated successfully. Feb 13 19:38:29.853438 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:38:29.854011 systemd-logind[1445]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:38:29.854987 systemd-logind[1445]: Removed session 10. Feb 13 19:38:31.183128 kubelet[2645]: I0213 19:38:31.183061 2645 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:38:31.184162 kubelet[2645]: E0213 19:38:31.184118 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:31.876377 kubelet[2645]: E0213 19:38:31.876338 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:38:34.857879 systemd[1]: Started sshd@11-10.0.0.63:22-10.0.0.1:57168.service - OpenSSH per-connection server daemon (10.0.0.1:57168). Feb 13 19:38:34.900184 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 57168 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:38:34.901591 sshd-session[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:34.905164 systemd-logind[1445]: New session 11 of user core. Feb 13 19:38:34.909355 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:38:35.012021 sshd[4096]: Connection closed by 10.0.0.1 port 57168 Feb 13 19:38:35.012379 sshd-session[4094]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:35.023264 systemd[1]: sshd@11-10.0.0.63:22-10.0.0.1:57168.service: Deactivated successfully. Feb 13 19:38:35.025084 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:38:35.026502 systemd-logind[1445]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:38:35.031566 systemd[1]: Started sshd@12-10.0.0.63:22-10.0.0.1:57182.service - OpenSSH per-connection server daemon (10.0.0.1:57182). Feb 13 19:38:35.032633 systemd-logind[1445]: Removed session 11. Feb 13 19:38:35.069753 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 57182 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:38:35.071088 sshd-session[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:35.074822 systemd-logind[1445]: New session 12 of user core. Feb 13 19:38:35.085368 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:38:35.232690 sshd[4111]: Connection closed by 10.0.0.1 port 57182 Feb 13 19:38:35.233132 sshd-session[4109]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:35.242040 systemd[1]: sshd@12-10.0.0.63:22-10.0.0.1:57182.service: Deactivated successfully. 
Feb 13 19:38:35.244086 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:38:35.248132 systemd-logind[1445]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:38:35.258607 systemd[1]: Started sshd@13-10.0.0.63:22-10.0.0.1:57184.service - OpenSSH per-connection server daemon (10.0.0.1:57184). Feb 13 19:38:35.259841 systemd-logind[1445]: Removed session 12. Feb 13 19:38:35.298631 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 57184 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:38:35.300174 sshd-session[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:35.304525 systemd-logind[1445]: New session 13 of user core. Feb 13 19:38:35.318485 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:38:35.431861 sshd[4123]: Connection closed by 10.0.0.1 port 57184 Feb 13 19:38:35.432312 sshd-session[4121]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:35.436947 systemd[1]: sshd@13-10.0.0.63:22-10.0.0.1:57184.service: Deactivated successfully. Feb 13 19:38:35.439667 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:38:35.440456 systemd-logind[1445]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:38:35.441584 systemd-logind[1445]: Removed session 13. Feb 13 19:38:40.447087 systemd[1]: Started sshd@14-10.0.0.63:22-10.0.0.1:51272.service - OpenSSH per-connection server daemon (10.0.0.1:51272). Feb 13 19:38:40.489005 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 51272 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:38:40.490474 sshd-session[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:40.494476 systemd-logind[1445]: New session 14 of user core. Feb 13 19:38:40.503367 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:38:40.609665 sshd[4137]: Connection closed by 10.0.0.1 port 51272 Feb 13 19:38:40.610007 sshd-session[4135]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:40.613612 systemd[1]: sshd@14-10.0.0.63:22-10.0.0.1:51272.service: Deactivated successfully. Feb 13 19:38:40.615673 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:38:40.616427 systemd-logind[1445]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:38:40.617220 systemd-logind[1445]: Removed session 14. Feb 13 19:38:45.620881 systemd[1]: Started sshd@15-10.0.0.63:22-10.0.0.1:51286.service - OpenSSH per-connection server daemon (10.0.0.1:51286). Feb 13 19:38:45.662198 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 51286 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:38:45.663448 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:45.666887 systemd-logind[1445]: New session 15 of user core. Feb 13 19:38:45.678354 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:38:45.781616 sshd[4151]: Connection closed by 10.0.0.1 port 51286 Feb 13 19:38:45.781957 sshd-session[4149]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:45.793017 systemd[1]: sshd@15-10.0.0.63:22-10.0.0.1:51286.service: Deactivated successfully. Feb 13 19:38:45.794709 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:38:45.796597 systemd-logind[1445]: Session 15 logged out. Waiting for processes to exit. 
Feb 13 19:38:45.802479 systemd[1]: Started sshd@16-10.0.0.63:22-10.0.0.1:51290.service - OpenSSH per-connection server daemon (10.0.0.1:51290). Feb 13 19:38:45.803317 systemd-logind[1445]: Removed session 15. Feb 13 19:38:45.841385 sshd[4163]: Accepted publickey for core from 10.0.0.1 port 51290 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:38:45.843099 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:45.847694 systemd-logind[1445]: New session 16 of user core. Feb 13 19:38:45.861372 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:38:46.038884 sshd[4165]: Connection closed by 10.0.0.1 port 51290 Feb 13 19:38:46.039363 sshd-session[4163]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:46.048429 systemd[1]: sshd@16-10.0.0.63:22-10.0.0.1:51290.service: Deactivated successfully. Feb 13 19:38:46.050674 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:38:46.052666 systemd-logind[1445]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:38:46.058484 systemd[1]: Started sshd@17-10.0.0.63:22-10.0.0.1:51300.service - OpenSSH per-connection server daemon (10.0.0.1:51300). Feb 13 19:38:46.059481 systemd-logind[1445]: Removed session 16. Feb 13 19:38:46.101169 sshd[4175]: Accepted publickey for core from 10.0.0.1 port 51300 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:38:46.102800 sshd-session[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:46.107269 systemd-logind[1445]: New session 17 of user core. Feb 13 19:38:46.117400 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:38:47.485555 sshd[4177]: Connection closed by 10.0.0.1 port 51300 Feb 13 19:38:47.487398 sshd-session[4175]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:47.497087 systemd[1]: sshd@17-10.0.0.63:22-10.0.0.1:51300.service: Deactivated successfully. Feb 13 19:38:47.499068 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:38:47.500518 systemd-logind[1445]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:38:47.509621 systemd[1]: Started sshd@18-10.0.0.63:22-10.0.0.1:51306.service - OpenSSH per-connection server daemon (10.0.0.1:51306). Feb 13 19:38:47.510868 systemd-logind[1445]: Removed session 17. Feb 13 19:38:47.552711 sshd[4198]: Accepted publickey for core from 10.0.0.1 port 51306 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:38:47.554273 sshd-session[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:47.558506 systemd-logind[1445]: New session 18 of user core. Feb 13 19:38:47.568398 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:38:47.801179 sshd[4200]: Connection closed by 10.0.0.1 port 51306 Feb 13 19:38:47.801790 sshd-session[4198]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:47.812991 systemd[1]: sshd@18-10.0.0.63:22-10.0.0.1:51306.service: Deactivated successfully. Feb 13 19:38:47.815039 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:38:47.816858 systemd-logind[1445]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:38:47.833685 systemd[1]: Started sshd@19-10.0.0.63:22-10.0.0.1:51310.service - OpenSSH per-connection server daemon (10.0.0.1:51310). Feb 13 19:38:47.834712 systemd-logind[1445]: Removed session 18. 
Feb 13 19:38:47.872748 sshd[4212]: Accepted publickey for core from 10.0.0.1 port 51310 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:38:47.874421 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:47.878880 systemd-logind[1445]: New session 19 of user core. Feb 13 19:38:47.887423 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:38:48.001697 sshd[4214]: Connection closed by 10.0.0.1 port 51310 Feb 13 19:38:48.002090 sshd-session[4212]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:48.006173 systemd[1]: sshd@19-10.0.0.63:22-10.0.0.1:51310.service: Deactivated successfully. Feb 13 19:38:48.007946 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:38:48.008653 systemd-logind[1445]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:38:48.009489 systemd-logind[1445]: Removed session 19. Feb 13 19:38:53.015059 systemd[1]: Started sshd@20-10.0.0.63:22-10.0.0.1:57712.service - OpenSSH per-connection server daemon (10.0.0.1:57712). Feb 13 19:38:53.057134 sshd[4227]: Accepted publickey for core from 10.0.0.1 port 57712 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:38:53.058699 sshd-session[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:53.062838 systemd-logind[1445]: New session 20 of user core. Feb 13 19:38:53.080425 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:38:53.211030 sshd[4229]: Connection closed by 10.0.0.1 port 57712 Feb 13 19:38:53.211397 sshd-session[4227]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:53.214823 systemd[1]: sshd@20-10.0.0.63:22-10.0.0.1:57712.service: Deactivated successfully. Feb 13 19:38:53.216792 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:38:53.217540 systemd-logind[1445]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:38:53.219429 systemd-logind[1445]: Removed session 20. Feb 13 19:38:58.224071 systemd[1]: Started sshd@21-10.0.0.63:22-10.0.0.1:57716.service - OpenSSH per-connection server daemon (10.0.0.1:57716). Feb 13 19:38:58.267053 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 57716 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:38:58.268758 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:38:58.273072 systemd-logind[1445]: New session 21 of user core. Feb 13 19:38:58.278418 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:38:58.585114 sshd[4246]: Connection closed by 10.0.0.1 port 57716 Feb 13 19:38:58.585393 sshd-session[4244]: pam_unix(sshd:session): session closed for user core Feb 13 19:38:58.589457 systemd[1]: sshd@21-10.0.0.63:22-10.0.0.1:57716.service: Deactivated successfully. Feb 13 19:38:58.591210 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:38:58.591916 systemd-logind[1445]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:38:58.592979 systemd-logind[1445]: Removed session 21. Feb 13 19:38:59.777519 kubelet[2645]: E0213 19:38:59.777472 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:39:03.601635 systemd[1]: Started sshd@22-10.0.0.63:22-10.0.0.1:35808.service - OpenSSH per-connection server daemon (10.0.0.1:35808). 
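The recurring kubelet "Nameserver limits exceeded" error above is a truncation warning, not a resolution failure: the pod's resolv.conf lists more nameservers than glibc's resolver supports (MAXNS is 3), so kubelet caps the list and logs the line it actually applied. A minimal sketch of the same check, assuming the standard /etc/resolv.conf path and the glibc limit of three:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    const maxNameservers = 3 // glibc MAXNS; kubelet applies the same cap

    func main() {
    	f, err := os.Open("/etc/resolv.conf") // or the pod's per-sandbox copy
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer f.Close()

    	var servers []string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			servers = append(servers, fields[1])
    		}
    	}
    	if len(servers) > maxNameservers {
    		fmt.Printf("limit exceeded: %d nameservers, applied line: %s\n",
    			len(servers), strings.Join(servers[:maxNameservers], " "))
    	}
    }

With four configured nameservers this prints an applied line of three, matching the "1.1.1.1 1.0.0.1 8.8.8.8" line kubelet reports above.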
Feb 13 19:39:03.653227 sshd[4260]: Accepted publickey for core from 10.0.0.1 port 35808 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:39:03.655188 sshd-session[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:39:03.660011 systemd-logind[1445]: New session 22 of user core. Feb 13 19:39:03.672512 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:39:03.784804 sshd[4262]: Connection closed by 10.0.0.1 port 35808 Feb 13 19:39:03.785163 sshd-session[4260]: pam_unix(sshd:session): session closed for user core Feb 13 19:39:03.789435 systemd[1]: sshd@22-10.0.0.63:22-10.0.0.1:35808.service: Deactivated successfully. Feb 13 19:39:03.791290 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:39:03.792058 systemd-logind[1445]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:39:03.793153 systemd-logind[1445]: Removed session 22. Feb 13 19:39:08.796290 systemd[1]: Started sshd@23-10.0.0.63:22-10.0.0.1:35810.service - OpenSSH per-connection server daemon (10.0.0.1:35810). Feb 13 19:39:08.838548 sshd[4274]: Accepted publickey for core from 10.0.0.1 port 35810 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:39:08.839902 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:39:08.843745 systemd-logind[1445]: New session 23 of user core. Feb 13 19:39:08.856382 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 19:39:08.986120 sshd[4276]: Connection closed by 10.0.0.1 port 35810 Feb 13 19:39:08.986602 sshd-session[4274]: pam_unix(sshd:session): session closed for user core Feb 13 19:39:08.997940 systemd[1]: sshd@23-10.0.0.63:22-10.0.0.1:35810.service: Deactivated successfully. Feb 13 19:39:08.999608 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:39:09.000954 systemd-logind[1445]: Session 23 logged out. Waiting for processes to exit. Feb 13 19:39:09.005510 systemd[1]: Started sshd@24-10.0.0.63:22-10.0.0.1:35812.service - OpenSSH per-connection server daemon (10.0.0.1:35812). Feb 13 19:39:09.006385 systemd-logind[1445]: Removed session 23. Feb 13 19:39:09.043753 sshd[4288]: Accepted publickey for core from 10.0.0.1 port 35812 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:39:09.045207 sshd-session[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:39:09.049137 systemd-logind[1445]: New session 24 of user core. Feb 13 19:39:09.063373 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 19:39:10.383513 containerd[1462]: time="2025-02-13T19:39:10.383465159Z" level=info msg="StopContainer for \"c2aadffe54f9ab5bab94b88ce56ca38e58cc83a0eed8f5c8b4e50b7e9722d5e3\" with timeout 30 (s)" Feb 13 19:39:10.385190 containerd[1462]: time="2025-02-13T19:39:10.385085985Z" level=info msg="Stop container \"c2aadffe54f9ab5bab94b88ce56ca38e58cc83a0eed8f5c8b4e50b7e9722d5e3\" with signal terminated" Feb 13 19:39:10.400234 systemd[1]: cri-containerd-c2aadffe54f9ab5bab94b88ce56ca38e58cc83a0eed8f5c8b4e50b7e9722d5e3.scope: Deactivated successfully. 
Feb 13 19:39:10.415140 containerd[1462]: time="2025-02-13T19:39:10.415088603Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:39:10.416485 containerd[1462]: time="2025-02-13T19:39:10.416454783Z" level=info msg="StopContainer for \"5b85c2e3b80d561686a38db7387594ef3d9fbb4974b43c46004ca01e684a5475\" with timeout 2 (s)" Feb 13 19:39:10.416763 containerd[1462]: time="2025-02-13T19:39:10.416739327Z" level=info msg="Stop container \"5b85c2e3b80d561686a38db7387594ef3d9fbb4974b43c46004ca01e684a5475\" with signal terminated" Feb 13 19:39:10.424628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2aadffe54f9ab5bab94b88ce56ca38e58cc83a0eed8f5c8b4e50b7e9722d5e3-rootfs.mount: Deactivated successfully. Feb 13 19:39:10.424990 systemd-networkd[1391]: lxc_health: Link DOWN Feb 13 19:39:10.424996 systemd-networkd[1391]: lxc_health: Lost carrier Feb 13 19:39:10.436174 containerd[1462]: time="2025-02-13T19:39:10.436090556Z" level=info msg="shim disconnected" id=c2aadffe54f9ab5bab94b88ce56ca38e58cc83a0eed8f5c8b4e50b7e9722d5e3 namespace=k8s.io Feb 13 19:39:10.436174 containerd[1462]: time="2025-02-13T19:39:10.436147555Z" level=warning msg="cleaning up after shim disconnected" id=c2aadffe54f9ab5bab94b88ce56ca38e58cc83a0eed8f5c8b4e50b7e9722d5e3 namespace=k8s.io Feb 13 19:39:10.436174 containerd[1462]: time="2025-02-13T19:39:10.436159258Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:39:10.451744 systemd[1]: cri-containerd-5b85c2e3b80d561686a38db7387594ef3d9fbb4974b43c46004ca01e684a5475.scope: Deactivated successfully. Feb 13 19:39:10.452102 systemd[1]: cri-containerd-5b85c2e3b80d561686a38db7387594ef3d9fbb4974b43c46004ca01e684a5475.scope: Consumed 6.854s CPU time. Feb 13 19:39:10.454796 containerd[1462]: time="2025-02-13T19:39:10.454768930Z" level=info msg="StopContainer for \"c2aadffe54f9ab5bab94b88ce56ca38e58cc83a0eed8f5c8b4e50b7e9722d5e3\" returns successfully" Feb 13 19:39:10.459040 containerd[1462]: time="2025-02-13T19:39:10.458994332Z" level=info msg="StopPodSandbox for \"ea4fe560f86a82d266c0abea3a0346194ac869084854a66549fc6a4c3e9918bd\"" Feb 13 19:39:10.473879 containerd[1462]: time="2025-02-13T19:39:10.459059397Z" level=info msg="Container to stop \"c2aadffe54f9ab5bab94b88ce56ca38e58cc83a0eed8f5c8b4e50b7e9722d5e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:39:10.474486 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b85c2e3b80d561686a38db7387594ef3d9fbb4974b43c46004ca01e684a5475-rootfs.mount: Deactivated successfully. Feb 13 19:39:10.481480 containerd[1462]: time="2025-02-13T19:39:10.481419915Z" level=info msg="shim disconnected" id=5b85c2e3b80d561686a38db7387594ef3d9fbb4974b43c46004ca01e684a5475 namespace=k8s.io Feb 13 19:39:10.481480 containerd[1462]: time="2025-02-13T19:39:10.481468397Z" level=warning msg="cleaning up after shim disconnected" id=5b85c2e3b80d561686a38db7387594ef3d9fbb4974b43c46004ca01e684a5475 namespace=k8s.io Feb 13 19:39:10.481480 containerd[1462]: time="2025-02-13T19:39:10.481478086Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:39:10.483558 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ea4fe560f86a82d266c0abea3a0346194ac869084854a66549fc6a4c3e9918bd-shm.mount: Deactivated successfully. 
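The StopContainer entries above show the usual two-phase stop: SIGTERM, then a grace period (30 s for the operator container, 2 s for the cilium agent) before escalation, after which the cri-containerd scope deactivates and the shim disconnects. A sketch of the same flow against containerd's Go client; the socket path and the k8s.io namespace match a kubelet-managed node, but treat the whole block as illustrative rather than kubelet's actual code path.

    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"syscall"
    	"time"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    // stopWithTimeout sends SIGTERM and escalates to SIGKILL after the grace
    // period, mirroring the "Stop container ... with signal terminated" entries.
    func stopWithTimeout(id string, grace time.Duration) error {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		return err
    	}
    	defer client.Close()

    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    	cont, err := client.LoadContainer(ctx, id)
    	if err != nil {
    		return err
    	}
    	task, err := cont.Task(ctx, nil)
    	if err != nil {
    		return err
    	}
    	exitCh, err := task.Wait(ctx)
    	if err != nil {
    		return err
    	}
    	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
    		return err
    	}
    	select {
    	case status := <-exitCh:
    		fmt.Printf("%s exited with code %d\n", id, status.ExitCode())
    	case <-time.After(grace):
    		// Grace period elapsed; force-kill, as the runtime would.
    		if err := task.Kill(ctx, syscall.SIGKILL); err != nil {
    			return err
    		}
    		<-exitCh
    	}
    	_, err = task.Delete(ctx)
    	return err
    }

    func main() {
    	// Container ID taken from the StopContainer entry above.
    	id := "c2aadffe54f9ab5bab94b88ce56ca38e58cc83a0eed8f5c8b4e50b7e9722d5e3"
    	if err := stopWithTimeout(id, 30*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }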
Feb 13 19:39:10.488679 systemd[1]: cri-containerd-ea4fe560f86a82d266c0abea3a0346194ac869084854a66549fc6a4c3e9918bd.scope: Deactivated successfully. Feb 13 19:39:10.498032 containerd[1462]: time="2025-02-13T19:39:10.497984351Z" level=info msg="StopContainer for \"5b85c2e3b80d561686a38db7387594ef3d9fbb4974b43c46004ca01e684a5475\" returns successfully" Feb 13 19:39:10.498562 containerd[1462]: time="2025-02-13T19:39:10.498519122Z" level=info msg="StopPodSandbox for \"30e41c93b441e9fde39cbf8c68b5a9b930f7900af8960dd7bc00d42b61ec531f\"" Feb 13 19:39:10.498676 containerd[1462]: time="2025-02-13T19:39:10.498564939Z" level=info msg="Container to stop \"cd50389cb1232cb19d967f2a54c02a88189792bcaa3a78bc810fe55bbde71e8e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:39:10.498676 containerd[1462]: time="2025-02-13T19:39:10.498596290Z" level=info msg="Container to stop \"62581b5ff9d520581c5f364c77436792f22cb029862def301fc8afb758bfa3c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:39:10.498676 containerd[1462]: time="2025-02-13T19:39:10.498604045Z" level=info msg="Container to stop \"b83d8aca1d02e437972b9728898ef8b97988679a60e98f3d255c4f9bc23558d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:39:10.498676 containerd[1462]: time="2025-02-13T19:39:10.498612280Z" level=info msg="Container to stop \"aa2e8635ba9320dcd03bb49987ae44c641ababd102570666071272286b085785\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:39:10.498676 containerd[1462]: time="2025-02-13T19:39:10.498621016Z" level=info msg="Container to stop \"5b85c2e3b80d561686a38db7387594ef3d9fbb4974b43c46004ca01e684a5475\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:39:10.500513 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-30e41c93b441e9fde39cbf8c68b5a9b930f7900af8960dd7bc00d42b61ec531f-shm.mount: Deactivated successfully. Feb 13 19:39:10.504582 systemd[1]: cri-containerd-30e41c93b441e9fde39cbf8c68b5a9b930f7900af8960dd7bc00d42b61ec531f.scope: Deactivated successfully. 
Feb 13 19:39:10.516155 containerd[1462]: time="2025-02-13T19:39:10.516064472Z" level=info msg="shim disconnected" id=ea4fe560f86a82d266c0abea3a0346194ac869084854a66549fc6a4c3e9918bd namespace=k8s.io Feb 13 19:39:10.516155 containerd[1462]: time="2025-02-13T19:39:10.516142110Z" level=warning msg="cleaning up after shim disconnected" id=ea4fe560f86a82d266c0abea3a0346194ac869084854a66549fc6a4c3e9918bd namespace=k8s.io Feb 13 19:39:10.516155 containerd[1462]: time="2025-02-13T19:39:10.516150576Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:39:10.525344 containerd[1462]: time="2025-02-13T19:39:10.525283193Z" level=info msg="shim disconnected" id=30e41c93b441e9fde39cbf8c68b5a9b930f7900af8960dd7bc00d42b61ec531f namespace=k8s.io Feb 13 19:39:10.525344 containerd[1462]: time="2025-02-13T19:39:10.525334992Z" level=warning msg="cleaning up after shim disconnected" id=30e41c93b441e9fde39cbf8c68b5a9b930f7900af8960dd7bc00d42b61ec531f namespace=k8s.io Feb 13 19:39:10.525344 containerd[1462]: time="2025-02-13T19:39:10.525343488Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:39:10.530798 containerd[1462]: time="2025-02-13T19:39:10.530631020Z" level=info msg="TearDown network for sandbox \"ea4fe560f86a82d266c0abea3a0346194ac869084854a66549fc6a4c3e9918bd\" successfully" Feb 13 19:39:10.530798 containerd[1462]: time="2025-02-13T19:39:10.530656559Z" level=info msg="StopPodSandbox for \"ea4fe560f86a82d266c0abea3a0346194ac869084854a66549fc6a4c3e9918bd\" returns successfully" Feb 13 19:39:10.537849 containerd[1462]: time="2025-02-13T19:39:10.537816838Z" level=info msg="TearDown network for sandbox \"30e41c93b441e9fde39cbf8c68b5a9b930f7900af8960dd7bc00d42b61ec531f\" successfully" Feb 13 19:39:10.537849 containerd[1462]: time="2025-02-13T19:39:10.537836235Z" level=info msg="StopPodSandbox for \"30e41c93b441e9fde39cbf8c68b5a9b930f7900af8960dd7bc00d42b61ec531f\" returns successfully" Feb 13 19:39:10.663274 kubelet[2645]: I0213 19:39:10.662346 2645 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-hostproc\") pod \"9e846631-8824-4c0e-9101-34901fd83c23\" (UID: \"9e846631-8824-4c0e-9101-34901fd83c23\") " Feb 13 19:39:10.663274 kubelet[2645]: I0213 19:39:10.662398 2645 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e846631-8824-4c0e-9101-34901fd83c23-cilium-config-path\") pod \"9e846631-8824-4c0e-9101-34901fd83c23\" (UID: \"9e846631-8824-4c0e-9101-34901fd83c23\") " Feb 13 19:39:10.663274 kubelet[2645]: I0213 19:39:10.662423 2645 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp9tr\" (UniqueName: \"kubernetes.io/projected/c826f83e-a16b-4534-b0cd-145cf8365f0d-kube-api-access-vp9tr\") pod \"c826f83e-a16b-4534-b0cd-145cf8365f0d\" (UID: \"c826f83e-a16b-4534-b0cd-145cf8365f0d\") " Feb 13 19:39:10.663274 kubelet[2645]: I0213 19:39:10.662440 2645 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9e846631-8824-4c0e-9101-34901fd83c23-hubble-tls\") pod \"9e846631-8824-4c0e-9101-34901fd83c23\" (UID: \"9e846631-8824-4c0e-9101-34901fd83c23\") " Feb 13 19:39:10.663274 kubelet[2645]: I0213 19:39:10.662456 2645 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-cni-path\") pod \"9e846631-8824-4c0e-9101-34901fd83c23\" (UID: \"9e846631-8824-4c0e-9101-34901fd83c23\") " Feb 13 19:39:10.663274 kubelet[2645]: I0213 19:39:10.662470 2645 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9e846631-8824-4c0e-9101-34901fd83c23-clustermesh-secrets\") pod \"9e846631-8824-4c0e-9101-34901fd83c23\" (UID: \"9e846631-8824-4c0e-9101-34901fd83c23\") " Feb 13 19:39:10.663778 kubelet[2645]: I0213 19:39:10.662467 2645 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-hostproc" (OuterVolumeSpecName: "hostproc") pod "9e846631-8824-4c0e-9101-34901fd83c23" (UID: "9e846631-8824-4c0e-9101-34901fd83c23"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:39:10.663778 kubelet[2645]: I0213 19:39:10.662485 2645 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-host-proc-sys-kernel\") pod \"9e846631-8824-4c0e-9101-34901fd83c23\" (UID: \"9e846631-8824-4c0e-9101-34901fd83c23\") " Feb 13 19:39:10.663778 kubelet[2645]: I0213 19:39:10.662520 2645 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9e846631-8824-4c0e-9101-34901fd83c23" (UID: "9e846631-8824-4c0e-9101-34901fd83c23"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:39:10.663778 kubelet[2645]: I0213 19:39:10.662560 2645 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-cilium-cgroup\") pod \"9e846631-8824-4c0e-9101-34901fd83c23\" (UID: \"9e846631-8824-4c0e-9101-34901fd83c23\") " Feb 13 19:39:10.663778 kubelet[2645]: I0213 19:39:10.662579 2645 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-bpf-maps\") pod \"9e846631-8824-4c0e-9101-34901fd83c23\" (UID: \"9e846631-8824-4c0e-9101-34901fd83c23\") " Feb 13 19:39:10.663901 kubelet[2645]: I0213 19:39:10.662593 2645 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-lib-modules\") pod \"9e846631-8824-4c0e-9101-34901fd83c23\" (UID: \"9e846631-8824-4c0e-9101-34901fd83c23\") " Feb 13 19:39:10.663901 kubelet[2645]: I0213 19:39:10.662611 2645 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c826f83e-a16b-4534-b0cd-145cf8365f0d-cilium-config-path\") pod \"c826f83e-a16b-4534-b0cd-145cf8365f0d\" (UID: \"c826f83e-a16b-4534-b0cd-145cf8365f0d\") " Feb 13 19:39:10.663901 kubelet[2645]: I0213 19:39:10.662625 2645 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-host-proc-sys-net\") pod \"9e846631-8824-4c0e-9101-34901fd83c23\" (UID: \"9e846631-8824-4c0e-9101-34901fd83c23\") " Feb 13 19:39:10.663901 
kubelet[2645]: I0213 19:39:10.662639 2645 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-xtables-lock\") pod \"9e846631-8824-4c0e-9101-34901fd83c23\" (UID: \"9e846631-8824-4c0e-9101-34901fd83c23\") " Feb 13 19:39:10.663901 kubelet[2645]: I0213 19:39:10.662652 2645 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-etc-cni-netd\") pod \"9e846631-8824-4c0e-9101-34901fd83c23\" (UID: \"9e846631-8824-4c0e-9101-34901fd83c23\") " Feb 13 19:39:10.663901 kubelet[2645]: I0213 19:39:10.662668 2645 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dflz9\" (UniqueName: \"kubernetes.io/projected/9e846631-8824-4c0e-9101-34901fd83c23-kube-api-access-dflz9\") pod \"9e846631-8824-4c0e-9101-34901fd83c23\" (UID: \"9e846631-8824-4c0e-9101-34901fd83c23\") " Feb 13 19:39:10.664037 kubelet[2645]: I0213 19:39:10.662681 2645 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-cilium-run\") pod \"9e846631-8824-4c0e-9101-34901fd83c23\" (UID: \"9e846631-8824-4c0e-9101-34901fd83c23\") " Feb 13 19:39:10.664037 kubelet[2645]: I0213 19:39:10.662723 2645 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 13 19:39:10.664037 kubelet[2645]: I0213 19:39:10.662732 2645 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 13 19:39:10.664037 kubelet[2645]: I0213 19:39:10.662753 2645 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9e846631-8824-4c0e-9101-34901fd83c23" (UID: "9e846631-8824-4c0e-9101-34901fd83c23"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:39:10.664037 kubelet[2645]: I0213 19:39:10.662777 2645 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9e846631-8824-4c0e-9101-34901fd83c23" (UID: "9e846631-8824-4c0e-9101-34901fd83c23"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:39:10.664037 kubelet[2645]: I0213 19:39:10.662797 2645 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9e846631-8824-4c0e-9101-34901fd83c23" (UID: "9e846631-8824-4c0e-9101-34901fd83c23"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:39:10.664168 kubelet[2645]: I0213 19:39:10.662812 2645 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9e846631-8824-4c0e-9101-34901fd83c23" (UID: "9e846631-8824-4c0e-9101-34901fd83c23"). 
InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:39:10.664168 kubelet[2645]: I0213 19:39:10.663070 2645 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-cni-path" (OuterVolumeSpecName: "cni-path") pod "9e846631-8824-4c0e-9101-34901fd83c23" (UID: "9e846631-8824-4c0e-9101-34901fd83c23"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:39:10.664549 kubelet[2645]: I0213 19:39:10.664438 2645 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9e846631-8824-4c0e-9101-34901fd83c23" (UID: "9e846631-8824-4c0e-9101-34901fd83c23"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:39:10.664549 kubelet[2645]: I0213 19:39:10.664468 2645 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9e846631-8824-4c0e-9101-34901fd83c23" (UID: "9e846631-8824-4c0e-9101-34901fd83c23"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:39:10.664549 kubelet[2645]: I0213 19:39:10.664484 2645 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9e846631-8824-4c0e-9101-34901fd83c23" (UID: "9e846631-8824-4c0e-9101-34901fd83c23"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:39:10.666349 kubelet[2645]: I0213 19:39:10.666285 2645 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e846631-8824-4c0e-9101-34901fd83c23-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9e846631-8824-4c0e-9101-34901fd83c23" (UID: "9e846631-8824-4c0e-9101-34901fd83c23"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:39:10.666473 kubelet[2645]: I0213 19:39:10.666408 2645 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c826f83e-a16b-4534-b0cd-145cf8365f0d-kube-api-access-vp9tr" (OuterVolumeSpecName: "kube-api-access-vp9tr") pod "c826f83e-a16b-4534-b0cd-145cf8365f0d" (UID: "c826f83e-a16b-4534-b0cd-145cf8365f0d"). InnerVolumeSpecName "kube-api-access-vp9tr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:39:10.667349 kubelet[2645]: I0213 19:39:10.667327 2645 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e846631-8824-4c0e-9101-34901fd83c23-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9e846631-8824-4c0e-9101-34901fd83c23" (UID: "9e846631-8824-4c0e-9101-34901fd83c23"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 19:39:10.667486 kubelet[2645]: I0213 19:39:10.667453 2645 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e846631-8824-4c0e-9101-34901fd83c23-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9e846631-8824-4c0e-9101-34901fd83c23" (UID: "9e846631-8824-4c0e-9101-34901fd83c23"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:39:10.668633 kubelet[2645]: I0213 19:39:10.668599 2645 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e846631-8824-4c0e-9101-34901fd83c23-kube-api-access-dflz9" (OuterVolumeSpecName: "kube-api-access-dflz9") pod "9e846631-8824-4c0e-9101-34901fd83c23" (UID: "9e846631-8824-4c0e-9101-34901fd83c23"). InnerVolumeSpecName "kube-api-access-dflz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:39:10.669424 kubelet[2645]: I0213 19:39:10.669398 2645 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c826f83e-a16b-4534-b0cd-145cf8365f0d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c826f83e-a16b-4534-b0cd-145cf8365f0d" (UID: "c826f83e-a16b-4534-b0cd-145cf8365f0d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:39:10.763232 kubelet[2645]: I0213 19:39:10.763198 2645 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 13 19:39:10.763232 kubelet[2645]: I0213 19:39:10.763223 2645 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 13 19:39:10.763232 kubelet[2645]: I0213 19:39:10.763231 2645 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 13 19:39:10.763232 kubelet[2645]: I0213 19:39:10.763255 2645 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c826f83e-a16b-4534-b0cd-145cf8365f0d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:39:10.763410 kubelet[2645]: I0213 19:39:10.763265 2645 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 13 19:39:10.763410 kubelet[2645]: I0213 19:39:10.763273 2645 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 13 19:39:10.763410 kubelet[2645]: I0213 19:39:10.763281 2645 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 13 19:39:10.763410 kubelet[2645]: I0213 19:39:10.763288 2645 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-dflz9\" (UniqueName: \"kubernetes.io/projected/9e846631-8824-4c0e-9101-34901fd83c23-kube-api-access-dflz9\") on node \"localhost\" DevicePath \"\"" Feb 13 19:39:10.763410 kubelet[2645]: I0213 19:39:10.763296 2645 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 13 19:39:10.763410 kubelet[2645]: I0213 19:39:10.763304 2645 reconciler_common.go:289] "Volume detached for 
volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e846631-8824-4c0e-9101-34901fd83c23-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:39:10.763410 kubelet[2645]: I0213 19:39:10.763312 2645 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9e846631-8824-4c0e-9101-34901fd83c23-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 13 19:39:10.763410 kubelet[2645]: I0213 19:39:10.763319 2645 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9e846631-8824-4c0e-9101-34901fd83c23-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 13 19:39:10.763606 kubelet[2645]: I0213 19:39:10.763327 2645 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9e846631-8824-4c0e-9101-34901fd83c23-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 13 19:39:10.763606 kubelet[2645]: I0213 19:39:10.763337 2645 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vp9tr\" (UniqueName: \"kubernetes.io/projected/c826f83e-a16b-4534-b0cd-145cf8365f0d-kube-api-access-vp9tr\") on node \"localhost\" DevicePath \"\"" Feb 13 19:39:10.971261 kubelet[2645]: I0213 19:39:10.971200 2645 scope.go:117] "RemoveContainer" containerID="c2aadffe54f9ab5bab94b88ce56ca38e58cc83a0eed8f5c8b4e50b7e9722d5e3" Feb 13 19:39:10.977139 systemd[1]: Removed slice kubepods-besteffort-podc826f83e_a16b_4534_b0cd_145cf8365f0d.slice - libcontainer container kubepods-besteffort-podc826f83e_a16b_4534_b0cd_145cf8365f0d.slice. Feb 13 19:39:10.978395 containerd[1462]: time="2025-02-13T19:39:10.977897335Z" level=info msg="RemoveContainer for \"c2aadffe54f9ab5bab94b88ce56ca38e58cc83a0eed8f5c8b4e50b7e9722d5e3\"" Feb 13 19:39:10.982252 systemd[1]: Removed slice kubepods-burstable-pod9e846631_8824_4c0e_9101_34901fd83c23.slice - libcontainer container kubepods-burstable-pod9e846631_8824_4c0e_9101_34901fd83c23.slice. Feb 13 19:39:10.982474 systemd[1]: kubepods-burstable-pod9e846631_8824_4c0e_9101_34901fd83c23.slice: Consumed 6.953s CPU time. 
Feb 13 19:39:10.984937 containerd[1462]: time="2025-02-13T19:39:10.984892239Z" level=info msg="RemoveContainer for \"c2aadffe54f9ab5bab94b88ce56ca38e58cc83a0eed8f5c8b4e50b7e9722d5e3\" returns successfully" Feb 13 19:39:10.985294 kubelet[2645]: I0213 19:39:10.985121 2645 scope.go:117] "RemoveContainer" containerID="c2aadffe54f9ab5bab94b88ce56ca38e58cc83a0eed8f5c8b4e50b7e9722d5e3" Feb 13 19:39:10.985355 containerd[1462]: time="2025-02-13T19:39:10.985325006Z" level=error msg="ContainerStatus for \"c2aadffe54f9ab5bab94b88ce56ca38e58cc83a0eed8f5c8b4e50b7e9722d5e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c2aadffe54f9ab5bab94b88ce56ca38e58cc83a0eed8f5c8b4e50b7e9722d5e3\": not found" Feb 13 19:39:10.985578 kubelet[2645]: E0213 19:39:10.985464 2645 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c2aadffe54f9ab5bab94b88ce56ca38e58cc83a0eed8f5c8b4e50b7e9722d5e3\": not found" containerID="c2aadffe54f9ab5bab94b88ce56ca38e58cc83a0eed8f5c8b4e50b7e9722d5e3" Feb 13 19:39:10.985754 kubelet[2645]: I0213 19:39:10.985661 2645 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c2aadffe54f9ab5bab94b88ce56ca38e58cc83a0eed8f5c8b4e50b7e9722d5e3"} err="failed to get container status \"c2aadffe54f9ab5bab94b88ce56ca38e58cc83a0eed8f5c8b4e50b7e9722d5e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"c2aadffe54f9ab5bab94b88ce56ca38e58cc83a0eed8f5c8b4e50b7e9722d5e3\": not found" Feb 13 19:39:10.985754 kubelet[2645]: I0213 19:39:10.985746 2645 scope.go:117] "RemoveContainer" containerID="5b85c2e3b80d561686a38db7387594ef3d9fbb4974b43c46004ca01e684a5475" Feb 13 19:39:10.987401 containerd[1462]: time="2025-02-13T19:39:10.987354894Z" level=info msg="RemoveContainer for \"5b85c2e3b80d561686a38db7387594ef3d9fbb4974b43c46004ca01e684a5475\"" Feb 13 19:39:10.990856 containerd[1462]: time="2025-02-13T19:39:10.990812520Z" level=info msg="RemoveContainer for \"5b85c2e3b80d561686a38db7387594ef3d9fbb4974b43c46004ca01e684a5475\" returns successfully" Feb 13 19:39:10.991052 kubelet[2645]: I0213 19:39:10.991012 2645 scope.go:117] "RemoveContainer" containerID="aa2e8635ba9320dcd03bb49987ae44c641ababd102570666071272286b085785" Feb 13 19:39:10.991959 containerd[1462]: time="2025-02-13T19:39:10.991932589Z" level=info msg="RemoveContainer for \"aa2e8635ba9320dcd03bb49987ae44c641ababd102570666071272286b085785\"" Feb 13 19:39:10.995508 containerd[1462]: time="2025-02-13T19:39:10.995472862Z" level=info msg="RemoveContainer for \"aa2e8635ba9320dcd03bb49987ae44c641ababd102570666071272286b085785\" returns successfully" Feb 13 19:39:10.995828 kubelet[2645]: I0213 19:39:10.995791 2645 scope.go:117] "RemoveContainer" containerID="b83d8aca1d02e437972b9728898ef8b97988679a60e98f3d255c4f9bc23558d6" Feb 13 19:39:10.996939 containerd[1462]: time="2025-02-13T19:39:10.996896372Z" level=info msg="RemoveContainer for \"b83d8aca1d02e437972b9728898ef8b97988679a60e98f3d255c4f9bc23558d6\"" Feb 13 19:39:11.003509 containerd[1462]: time="2025-02-13T19:39:11.003459788Z" level=info msg="RemoveContainer for \"b83d8aca1d02e437972b9728898ef8b97988679a60e98f3d255c4f9bc23558d6\" returns successfully" Feb 13 19:39:11.003745 kubelet[2645]: I0213 19:39:11.003711 2645 scope.go:117] "RemoveContainer" containerID="62581b5ff9d520581c5f364c77436792f22cb029862def301fc8afb758bfa3c2" Feb 13 19:39:11.004876 containerd[1462]: 
time="2025-02-13T19:39:11.004831508Z" level=info msg="RemoveContainer for \"62581b5ff9d520581c5f364c77436792f22cb029862def301fc8afb758bfa3c2\"" Feb 13 19:39:11.007928 containerd[1462]: time="2025-02-13T19:39:11.007898515Z" level=info msg="RemoveContainer for \"62581b5ff9d520581c5f364c77436792f22cb029862def301fc8afb758bfa3c2\" returns successfully" Feb 13 19:39:11.008122 kubelet[2645]: I0213 19:39:11.008095 2645 scope.go:117] "RemoveContainer" containerID="cd50389cb1232cb19d967f2a54c02a88189792bcaa3a78bc810fe55bbde71e8e" Feb 13 19:39:11.012787 containerd[1462]: time="2025-02-13T19:39:11.012752784Z" level=info msg="RemoveContainer for \"cd50389cb1232cb19d967f2a54c02a88189792bcaa3a78bc810fe55bbde71e8e\"" Feb 13 19:39:11.015723 containerd[1462]: time="2025-02-13T19:39:11.015690464Z" level=info msg="RemoveContainer for \"cd50389cb1232cb19d967f2a54c02a88189792bcaa3a78bc810fe55bbde71e8e\" returns successfully" Feb 13 19:39:11.015883 kubelet[2645]: I0213 19:39:11.015846 2645 scope.go:117] "RemoveContainer" containerID="5b85c2e3b80d561686a38db7387594ef3d9fbb4974b43c46004ca01e684a5475" Feb 13 19:39:11.016155 containerd[1462]: time="2025-02-13T19:39:11.016108352Z" level=error msg="ContainerStatus for \"5b85c2e3b80d561686a38db7387594ef3d9fbb4974b43c46004ca01e684a5475\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b85c2e3b80d561686a38db7387594ef3d9fbb4974b43c46004ca01e684a5475\": not found" Feb 13 19:39:11.016349 kubelet[2645]: E0213 19:39:11.016325 2645 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b85c2e3b80d561686a38db7387594ef3d9fbb4974b43c46004ca01e684a5475\": not found" containerID="5b85c2e3b80d561686a38db7387594ef3d9fbb4974b43c46004ca01e684a5475" Feb 13 19:39:11.016397 kubelet[2645]: I0213 19:39:11.016358 2645 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b85c2e3b80d561686a38db7387594ef3d9fbb4974b43c46004ca01e684a5475"} err="failed to get container status \"5b85c2e3b80d561686a38db7387594ef3d9fbb4974b43c46004ca01e684a5475\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b85c2e3b80d561686a38db7387594ef3d9fbb4974b43c46004ca01e684a5475\": not found" Feb 13 19:39:11.016397 kubelet[2645]: I0213 19:39:11.016384 2645 scope.go:117] "RemoveContainer" containerID="aa2e8635ba9320dcd03bb49987ae44c641ababd102570666071272286b085785" Feb 13 19:39:11.016594 containerd[1462]: time="2025-02-13T19:39:11.016559473Z" level=error msg="ContainerStatus for \"aa2e8635ba9320dcd03bb49987ae44c641ababd102570666071272286b085785\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aa2e8635ba9320dcd03bb49987ae44c641ababd102570666071272286b085785\": not found" Feb 13 19:39:11.016693 kubelet[2645]: E0213 19:39:11.016673 2645 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aa2e8635ba9320dcd03bb49987ae44c641ababd102570666071272286b085785\": not found" containerID="aa2e8635ba9320dcd03bb49987ae44c641ababd102570666071272286b085785" Feb 13 19:39:11.016731 kubelet[2645]: I0213 19:39:11.016694 2645 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aa2e8635ba9320dcd03bb49987ae44c641ababd102570666071272286b085785"} err="failed to get container status 
\"aa2e8635ba9320dcd03bb49987ae44c641ababd102570666071272286b085785\": rpc error: code = NotFound desc = an error occurred when try to find container \"aa2e8635ba9320dcd03bb49987ae44c641ababd102570666071272286b085785\": not found" Feb 13 19:39:11.016731 kubelet[2645]: I0213 19:39:11.016706 2645 scope.go:117] "RemoveContainer" containerID="b83d8aca1d02e437972b9728898ef8b97988679a60e98f3d255c4f9bc23558d6" Feb 13 19:39:11.016909 containerd[1462]: time="2025-02-13T19:39:11.016870236Z" level=error msg="ContainerStatus for \"b83d8aca1d02e437972b9728898ef8b97988679a60e98f3d255c4f9bc23558d6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b83d8aca1d02e437972b9728898ef8b97988679a60e98f3d255c4f9bc23558d6\": not found" Feb 13 19:39:11.017039 kubelet[2645]: E0213 19:39:11.017009 2645 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b83d8aca1d02e437972b9728898ef8b97988679a60e98f3d255c4f9bc23558d6\": not found" containerID="b83d8aca1d02e437972b9728898ef8b97988679a60e98f3d255c4f9bc23558d6" Feb 13 19:39:11.017083 kubelet[2645]: I0213 19:39:11.017036 2645 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b83d8aca1d02e437972b9728898ef8b97988679a60e98f3d255c4f9bc23558d6"} err="failed to get container status \"b83d8aca1d02e437972b9728898ef8b97988679a60e98f3d255c4f9bc23558d6\": rpc error: code = NotFound desc = an error occurred when try to find container \"b83d8aca1d02e437972b9728898ef8b97988679a60e98f3d255c4f9bc23558d6\": not found" Feb 13 19:39:11.017083 kubelet[2645]: I0213 19:39:11.017057 2645 scope.go:117] "RemoveContainer" containerID="62581b5ff9d520581c5f364c77436792f22cb029862def301fc8afb758bfa3c2" Feb 13 19:39:11.017226 containerd[1462]: time="2025-02-13T19:39:11.017193344Z" level=error msg="ContainerStatus for \"62581b5ff9d520581c5f364c77436792f22cb029862def301fc8afb758bfa3c2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"62581b5ff9d520581c5f364c77436792f22cb029862def301fc8afb758bfa3c2\": not found" Feb 13 19:39:11.017352 kubelet[2645]: E0213 19:39:11.017334 2645 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"62581b5ff9d520581c5f364c77436792f22cb029862def301fc8afb758bfa3c2\": not found" containerID="62581b5ff9d520581c5f364c77436792f22cb029862def301fc8afb758bfa3c2" Feb 13 19:39:11.017397 kubelet[2645]: I0213 19:39:11.017355 2645 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"62581b5ff9d520581c5f364c77436792f22cb029862def301fc8afb758bfa3c2"} err="failed to get container status \"62581b5ff9d520581c5f364c77436792f22cb029862def301fc8afb758bfa3c2\": rpc error: code = NotFound desc = an error occurred when try to find container \"62581b5ff9d520581c5f364c77436792f22cb029862def301fc8afb758bfa3c2\": not found" Feb 13 19:39:11.017397 kubelet[2645]: I0213 19:39:11.017369 2645 scope.go:117] "RemoveContainer" containerID="cd50389cb1232cb19d967f2a54c02a88189792bcaa3a78bc810fe55bbde71e8e" Feb 13 19:39:11.017552 containerd[1462]: time="2025-02-13T19:39:11.017514908Z" level=error msg="ContainerStatus for \"cd50389cb1232cb19d967f2a54c02a88189792bcaa3a78bc810fe55bbde71e8e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"cd50389cb1232cb19d967f2a54c02a88189792bcaa3a78bc810fe55bbde71e8e\": not found" Feb 13 19:39:11.017648 kubelet[2645]: E0213 19:39:11.017627 2645 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cd50389cb1232cb19d967f2a54c02a88189792bcaa3a78bc810fe55bbde71e8e\": not found" containerID="cd50389cb1232cb19d967f2a54c02a88189792bcaa3a78bc810fe55bbde71e8e" Feb 13 19:39:11.017692 kubelet[2645]: I0213 19:39:11.017649 2645 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cd50389cb1232cb19d967f2a54c02a88189792bcaa3a78bc810fe55bbde71e8e"} err="failed to get container status \"cd50389cb1232cb19d967f2a54c02a88189792bcaa3a78bc810fe55bbde71e8e\": rpc error: code = NotFound desc = an error occurred when try to find container \"cd50389cb1232cb19d967f2a54c02a88189792bcaa3a78bc810fe55bbde71e8e\": not found" Feb 13 19:39:11.391477 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea4fe560f86a82d266c0abea3a0346194ac869084854a66549fc6a4c3e9918bd-rootfs.mount: Deactivated successfully. Feb 13 19:39:11.391610 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30e41c93b441e9fde39cbf8c68b5a9b930f7900af8960dd7bc00d42b61ec531f-rootfs.mount: Deactivated successfully. Feb 13 19:39:11.391691 systemd[1]: var-lib-kubelet-pods-c826f83e\x2da16b\x2d4534\x2db0cd\x2d145cf8365f0d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvp9tr.mount: Deactivated successfully. Feb 13 19:39:11.391769 systemd[1]: var-lib-kubelet-pods-9e846631\x2d8824\x2d4c0e\x2d9101\x2d34901fd83c23-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddflz9.mount: Deactivated successfully. Feb 13 19:39:11.391853 systemd[1]: var-lib-kubelet-pods-9e846631\x2d8824\x2d4c0e\x2d9101\x2d34901fd83c23-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 19:39:11.391925 systemd[1]: var-lib-kubelet-pods-9e846631\x2d8824\x2d4c0e\x2d9101\x2d34901fd83c23-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 19:39:11.779399 kubelet[2645]: I0213 19:39:11.779361 2645 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e846631-8824-4c0e-9101-34901fd83c23" path="/var/lib/kubelet/pods/9e846631-8824-4c0e-9101-34901fd83c23/volumes" Feb 13 19:39:11.780184 kubelet[2645]: I0213 19:39:11.780161 2645 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c826f83e-a16b-4534-b0cd-145cf8365f0d" path="/var/lib/kubelet/pods/c826f83e-a16b-4534-b0cd-145cf8365f0d/volumes" Feb 13 19:39:12.353518 sshd[4290]: Connection closed by 10.0.0.1 port 35812 Feb 13 19:39:12.354202 sshd-session[4288]: pam_unix(sshd:session): session closed for user core Feb 13 19:39:12.363293 systemd[1]: sshd@24-10.0.0.63:22-10.0.0.1:35812.service: Deactivated successfully. Feb 13 19:39:12.365319 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 19:39:12.366981 systemd-logind[1445]: Session 24 logged out. Waiting for processes to exit. Feb 13 19:39:12.375746 systemd[1]: Started sshd@25-10.0.0.63:22-10.0.0.1:47540.service - OpenSSH per-connection server daemon (10.0.0.1:47540). Feb 13 19:39:12.376987 systemd-logind[1445]: Removed session 24. 
Feb 13 19:39:12.422955 sshd[4448]: Accepted publickey for core from 10.0.0.1 port 47540 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:39:12.424371 sshd-session[4448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:39:12.429277 systemd-logind[1445]: New session 25 of user core. Feb 13 19:39:12.441416 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 19:39:12.829720 kubelet[2645]: E0213 19:39:12.829672 2645 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:39:13.053369 sshd[4450]: Connection closed by 10.0.0.1 port 47540 Feb 13 19:39:13.055138 sshd-session[4448]: pam_unix(sshd:session): session closed for user core Feb 13 19:39:13.063272 kubelet[2645]: I0213 19:39:13.063133 2645 topology_manager.go:215] "Topology Admit Handler" podUID="953fbdfc-0798-45cc-9724-c7af48df9da3" podNamespace="kube-system" podName="cilium-r8p7d" Feb 13 19:39:13.063272 kubelet[2645]: E0213 19:39:13.063194 2645 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9e846631-8824-4c0e-9101-34901fd83c23" containerName="clean-cilium-state" Feb 13 19:39:13.063272 kubelet[2645]: E0213 19:39:13.063207 2645 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9e846631-8824-4c0e-9101-34901fd83c23" containerName="mount-cgroup" Feb 13 19:39:13.063272 kubelet[2645]: E0213 19:39:13.063215 2645 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9e846631-8824-4c0e-9101-34901fd83c23" containerName="apply-sysctl-overwrites" Feb 13 19:39:13.063272 kubelet[2645]: E0213 19:39:13.063222 2645 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9e846631-8824-4c0e-9101-34901fd83c23" containerName="mount-bpf-fs" Feb 13 19:39:13.063272 kubelet[2645]: E0213 19:39:13.063229 2645 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c826f83e-a16b-4534-b0cd-145cf8365f0d" containerName="cilium-operator" Feb 13 19:39:13.063821 kubelet[2645]: E0213 19:39:13.063583 2645 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9e846631-8824-4c0e-9101-34901fd83c23" containerName="cilium-agent" Feb 13 19:39:13.063821 kubelet[2645]: I0213 19:39:13.063620 2645 memory_manager.go:354] "RemoveStaleState removing state" podUID="c826f83e-a16b-4534-b0cd-145cf8365f0d" containerName="cilium-operator" Feb 13 19:39:13.063821 kubelet[2645]: I0213 19:39:13.063628 2645 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e846631-8824-4c0e-9101-34901fd83c23" containerName="cilium-agent" Feb 13 19:39:13.068722 systemd[1]: sshd@25-10.0.0.63:22-10.0.0.1:47540.service: Deactivated successfully. Feb 13 19:39:13.074058 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 19:39:13.075110 systemd-logind[1445]: Session 25 logged out. Waiting for processes to exit. Feb 13 19:39:13.092184 systemd[1]: Started sshd@26-10.0.0.63:22-10.0.0.1:47544.service - OpenSSH per-connection server daemon (10.0.0.1:47544). Feb 13 19:39:13.097768 systemd-logind[1445]: Removed session 25. Feb 13 19:39:13.105550 systemd[1]: Created slice kubepods-burstable-pod953fbdfc_0798_45cc_9724_c7af48df9da3.slice - libcontainer container kubepods-burstable-pod953fbdfc_0798_45cc_9724_c7af48df9da3.slice. 
Feb 13 19:39:13.133788 sshd[4461]: Accepted publickey for core from 10.0.0.1 port 47544 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:39:13.135125 sshd-session[4461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:39:13.138586 systemd-logind[1445]: New session 26 of user core. Feb 13 19:39:13.145391 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 19:39:13.179149 kubelet[2645]: I0213 19:39:13.179116 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/953fbdfc-0798-45cc-9724-c7af48df9da3-cilium-run\") pod \"cilium-r8p7d\" (UID: \"953fbdfc-0798-45cc-9724-c7af48df9da3\") " pod="kube-system/cilium-r8p7d" Feb 13 19:39:13.179149 kubelet[2645]: I0213 19:39:13.179150 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/953fbdfc-0798-45cc-9724-c7af48df9da3-etc-cni-netd\") pod \"cilium-r8p7d\" (UID: \"953fbdfc-0798-45cc-9724-c7af48df9da3\") " pod="kube-system/cilium-r8p7d" Feb 13 19:39:13.179314 kubelet[2645]: I0213 19:39:13.179167 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/953fbdfc-0798-45cc-9724-c7af48df9da3-hubble-tls\") pod \"cilium-r8p7d\" (UID: \"953fbdfc-0798-45cc-9724-c7af48df9da3\") " pod="kube-system/cilium-r8p7d" Feb 13 19:39:13.179314 kubelet[2645]: I0213 19:39:13.179182 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/953fbdfc-0798-45cc-9724-c7af48df9da3-cni-path\") pod \"cilium-r8p7d\" (UID: \"953fbdfc-0798-45cc-9724-c7af48df9da3\") " pod="kube-system/cilium-r8p7d" Feb 13 19:39:13.179314 kubelet[2645]: I0213 19:39:13.179208 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/953fbdfc-0798-45cc-9724-c7af48df9da3-cilium-config-path\") pod \"cilium-r8p7d\" (UID: \"953fbdfc-0798-45cc-9724-c7af48df9da3\") " pod="kube-system/cilium-r8p7d" Feb 13 19:39:13.179314 kubelet[2645]: I0213 19:39:13.179224 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/953fbdfc-0798-45cc-9724-c7af48df9da3-hostproc\") pod \"cilium-r8p7d\" (UID: \"953fbdfc-0798-45cc-9724-c7af48df9da3\") " pod="kube-system/cilium-r8p7d" Feb 13 19:39:13.179314 kubelet[2645]: I0213 19:39:13.179238 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/953fbdfc-0798-45cc-9724-c7af48df9da3-cilium-cgroup\") pod \"cilium-r8p7d\" (UID: \"953fbdfc-0798-45cc-9724-c7af48df9da3\") " pod="kube-system/cilium-r8p7d" Feb 13 19:39:13.179314 kubelet[2645]: I0213 19:39:13.179277 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/953fbdfc-0798-45cc-9724-c7af48df9da3-lib-modules\") pod \"cilium-r8p7d\" (UID: \"953fbdfc-0798-45cc-9724-c7af48df9da3\") " pod="kube-system/cilium-r8p7d" Feb 13 19:39:13.179517 kubelet[2645]: I0213 19:39:13.179291 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/953fbdfc-0798-45cc-9724-c7af48df9da3-xtables-lock\") pod \"cilium-r8p7d\" (UID: \"953fbdfc-0798-45cc-9724-c7af48df9da3\") " pod="kube-system/cilium-r8p7d" Feb 13 19:39:13.179517 kubelet[2645]: I0213 19:39:13.179307 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/953fbdfc-0798-45cc-9724-c7af48df9da3-clustermesh-secrets\") pod \"cilium-r8p7d\" (UID: \"953fbdfc-0798-45cc-9724-c7af48df9da3\") " pod="kube-system/cilium-r8p7d" Feb 13 19:39:13.179517 kubelet[2645]: I0213 19:39:13.179322 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mf6f\" (UniqueName: \"kubernetes.io/projected/953fbdfc-0798-45cc-9724-c7af48df9da3-kube-api-access-8mf6f\") pod \"cilium-r8p7d\" (UID: \"953fbdfc-0798-45cc-9724-c7af48df9da3\") " pod="kube-system/cilium-r8p7d" Feb 13 19:39:13.179517 kubelet[2645]: I0213 19:39:13.179337 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/953fbdfc-0798-45cc-9724-c7af48df9da3-cilium-ipsec-secrets\") pod \"cilium-r8p7d\" (UID: \"953fbdfc-0798-45cc-9724-c7af48df9da3\") " pod="kube-system/cilium-r8p7d" Feb 13 19:39:13.179517 kubelet[2645]: I0213 19:39:13.179350 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/953fbdfc-0798-45cc-9724-c7af48df9da3-host-proc-sys-net\") pod \"cilium-r8p7d\" (UID: \"953fbdfc-0798-45cc-9724-c7af48df9da3\") " pod="kube-system/cilium-r8p7d" Feb 13 19:39:13.179669 kubelet[2645]: I0213 19:39:13.179385 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/953fbdfc-0798-45cc-9724-c7af48df9da3-host-proc-sys-kernel\") pod \"cilium-r8p7d\" (UID: \"953fbdfc-0798-45cc-9724-c7af48df9da3\") " pod="kube-system/cilium-r8p7d" Feb 13 19:39:13.179669 kubelet[2645]: I0213 19:39:13.179426 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/953fbdfc-0798-45cc-9724-c7af48df9da3-bpf-maps\") pod \"cilium-r8p7d\" (UID: \"953fbdfc-0798-45cc-9724-c7af48df9da3\") " pod="kube-system/cilium-r8p7d" Feb 13 19:39:13.195214 sshd[4463]: Connection closed by 10.0.0.1 port 47544 Feb 13 19:39:13.195577 sshd-session[4461]: pam_unix(sshd:session): session closed for user core Feb 13 19:39:13.207086 systemd[1]: sshd@26-10.0.0.63:22-10.0.0.1:47544.service: Deactivated successfully. Feb 13 19:39:13.208904 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 19:39:13.210376 systemd-logind[1445]: Session 26 logged out. Waiting for processes to exit. Feb 13 19:39:13.219464 systemd[1]: Started sshd@27-10.0.0.63:22-10.0.0.1:47558.service - OpenSSH per-connection server daemon (10.0.0.1:47558). Feb 13 19:39:13.223042 systemd-logind[1445]: Removed session 26. Feb 13 19:39:13.258258 sshd[4470]: Accepted publickey for core from 10.0.0.1 port 47558 ssh2: RSA SHA256:Uh4KadtCLzIKC55xBX+WFJWCeY6fGIIe31vecjZIJAI Feb 13 19:39:13.259600 sshd-session[4470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:39:13.263314 systemd-logind[1445]: New session 27 of user core. 
Feb 13 19:39:13.276368 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 19:39:13.408793 kubelet[2645]: E0213 19:39:13.408667 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:39:13.409256 containerd[1462]: time="2025-02-13T19:39:13.409188665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r8p7d,Uid:953fbdfc-0798-45cc-9724-c7af48df9da3,Namespace:kube-system,Attempt:0,}" Feb 13 19:39:13.620702 containerd[1462]: time="2025-02-13T19:39:13.620094329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:39:13.620702 containerd[1462]: time="2025-02-13T19:39:13.620678163Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:39:13.620838 containerd[1462]: time="2025-02-13T19:39:13.620691088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:39:13.620838 containerd[1462]: time="2025-02-13T19:39:13.620772813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:39:13.640368 systemd[1]: Started cri-containerd-b6e2847a651c4d4ff8d38225ed08ceac3494bb6c02691d0cfaebb16f7c2a68be.scope - libcontainer container b6e2847a651c4d4ff8d38225ed08ceac3494bb6c02691d0cfaebb16f7c2a68be. Feb 13 19:39:13.659685 containerd[1462]: time="2025-02-13T19:39:13.659576978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r8p7d,Uid:953fbdfc-0798-45cc-9724-c7af48df9da3,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6e2847a651c4d4ff8d38225ed08ceac3494bb6c02691d0cfaebb16f7c2a68be\"" Feb 13 19:39:13.660490 kubelet[2645]: E0213 19:39:13.660465 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:39:13.662510 containerd[1462]: time="2025-02-13T19:39:13.662394904Z" level=info msg="CreateContainer within sandbox \"b6e2847a651c4d4ff8d38225ed08ceac3494bb6c02691d0cfaebb16f7c2a68be\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:39:13.674017 containerd[1462]: time="2025-02-13T19:39:13.673970529Z" level=info msg="CreateContainer within sandbox \"b6e2847a651c4d4ff8d38225ed08ceac3494bb6c02691d0cfaebb16f7c2a68be\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ac798fabb60f2e5b04440b54f830853775814d28a5c0ff594e3e4055851d35e7\"" Feb 13 19:39:13.674414 containerd[1462]: time="2025-02-13T19:39:13.674381042Z" level=info msg="StartContainer for \"ac798fabb60f2e5b04440b54f830853775814d28a5c0ff594e3e4055851d35e7\"" Feb 13 19:39:13.708421 systemd[1]: Started cri-containerd-ac798fabb60f2e5b04440b54f830853775814d28a5c0ff594e3e4055851d35e7.scope - libcontainer container ac798fabb60f2e5b04440b54f830853775814d28a5c0ff594e3e4055851d35e7. Feb 13 19:39:13.735606 containerd[1462]: time="2025-02-13T19:39:13.735562509Z" level=info msg="StartContainer for \"ac798fabb60f2e5b04440b54f830853775814d28a5c0ff594e3e4055851d35e7\" returns successfully" Feb 13 19:39:13.746122 systemd[1]: cri-containerd-ac798fabb60f2e5b04440b54f830853775814d28a5c0ff594e3e4055851d35e7.scope: Deactivated successfully. 
Feb 13 19:39:13.776672 containerd[1462]: time="2025-02-13T19:39:13.776582090Z" level=info msg="shim disconnected" id=ac798fabb60f2e5b04440b54f830853775814d28a5c0ff594e3e4055851d35e7 namespace=k8s.io
Feb 13 19:39:13.776672 containerd[1462]: time="2025-02-13T19:39:13.776651622Z" level=warning msg="cleaning up after shim disconnected" id=ac798fabb60f2e5b04440b54f830853775814d28a5c0ff594e3e4055851d35e7 namespace=k8s.io
Feb 13 19:39:13.776672 containerd[1462]: time="2025-02-13T19:39:13.776661200Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:39:13.984471 kubelet[2645]: E0213 19:39:13.984278 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:39:13.986915 containerd[1462]: time="2025-02-13T19:39:13.986866248Z" level=info msg="CreateContainer within sandbox \"b6e2847a651c4d4ff8d38225ed08ceac3494bb6c02691d0cfaebb16f7c2a68be\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:39:14.000492 containerd[1462]: time="2025-02-13T19:39:14.000446266Z" level=info msg="CreateContainer within sandbox \"b6e2847a651c4d4ff8d38225ed08ceac3494bb6c02691d0cfaebb16f7c2a68be\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fcae6f674462b3895bc91a06c233bce8d3480060d7dfe79ba5b23313a2d1539e\""
Feb 13 19:39:14.001023 containerd[1462]: time="2025-02-13T19:39:14.000954075Z" level=info msg="StartContainer for \"fcae6f674462b3895bc91a06c233bce8d3480060d7dfe79ba5b23313a2d1539e\""
Feb 13 19:39:14.028370 systemd[1]: Started cri-containerd-fcae6f674462b3895bc91a06c233bce8d3480060d7dfe79ba5b23313a2d1539e.scope - libcontainer container fcae6f674462b3895bc91a06c233bce8d3480060d7dfe79ba5b23313a2d1539e.
Feb 13 19:39:14.053696 containerd[1462]: time="2025-02-13T19:39:14.053664078Z" level=info msg="StartContainer for \"fcae6f674462b3895bc91a06c233bce8d3480060d7dfe79ba5b23313a2d1539e\" returns successfully"
Feb 13 19:39:14.060153 systemd[1]: cri-containerd-fcae6f674462b3895bc91a06c233bce8d3480060d7dfe79ba5b23313a2d1539e.scope: Deactivated successfully.
Feb 13 19:39:14.083740 containerd[1462]: time="2025-02-13T19:39:14.083665133Z" level=info msg="shim disconnected" id=fcae6f674462b3895bc91a06c233bce8d3480060d7dfe79ba5b23313a2d1539e namespace=k8s.io
Feb 13 19:39:14.083740 containerd[1462]: time="2025-02-13T19:39:14.083733172Z" level=warning msg="cleaning up after shim disconnected" id=fcae6f674462b3895bc91a06c233bce8d3480060d7dfe79ba5b23313a2d1539e namespace=k8s.io
Feb 13 19:39:14.083740 containerd[1462]: time="2025-02-13T19:39:14.083743892Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:39:14.990101 kubelet[2645]: E0213 19:39:14.990062 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:39:14.992488 containerd[1462]: time="2025-02-13T19:39:14.992456494Z" level=info msg="CreateContainer within sandbox \"b6e2847a651c4d4ff8d38225ed08ceac3494bb6c02691d0cfaebb16f7c2a68be\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:39:15.014868 containerd[1462]: time="2025-02-13T19:39:15.014815705Z" level=info msg="CreateContainer within sandbox \"b6e2847a651c4d4ff8d38225ed08ceac3494bb6c02691d0cfaebb16f7c2a68be\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f1b5117fab5467774f0514e8747b0ba5fec24c87f50fd60a31bf48690e6708a5\""
Feb 13 19:39:15.015480 containerd[1462]: time="2025-02-13T19:39:15.015443952Z" level=info msg="StartContainer for \"f1b5117fab5467774f0514e8747b0ba5fec24c87f50fd60a31bf48690e6708a5\""
Feb 13 19:39:15.044381 systemd[1]: Started cri-containerd-f1b5117fab5467774f0514e8747b0ba5fec24c87f50fd60a31bf48690e6708a5.scope - libcontainer container f1b5117fab5467774f0514e8747b0ba5fec24c87f50fd60a31bf48690e6708a5.
Feb 13 19:39:15.074853 systemd[1]: cri-containerd-f1b5117fab5467774f0514e8747b0ba5fec24c87f50fd60a31bf48690e6708a5.scope: Deactivated successfully.
Feb 13 19:39:15.076054 containerd[1462]: time="2025-02-13T19:39:15.076016281Z" level=info msg="StartContainer for \"f1b5117fab5467774f0514e8747b0ba5fec24c87f50fd60a31bf48690e6708a5\" returns successfully"
Feb 13 19:39:15.111805 containerd[1462]: time="2025-02-13T19:39:15.111742587Z" level=info msg="shim disconnected" id=f1b5117fab5467774f0514e8747b0ba5fec24c87f50fd60a31bf48690e6708a5 namespace=k8s.io
Feb 13 19:39:15.111805 containerd[1462]: time="2025-02-13T19:39:15.111801008Z" level=warning msg="cleaning up after shim disconnected" id=f1b5117fab5467774f0514e8747b0ba5fec24c87f50fd60a31bf48690e6708a5 namespace=k8s.io
Feb 13 19:39:15.112011 containerd[1462]: time="2025-02-13T19:39:15.111810176Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:39:15.285163 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1b5117fab5467774f0514e8747b0ba5fec24c87f50fd60a31bf48690e6708a5-rootfs.mount: Deactivated successfully.
Feb 13 19:39:15.993734 kubelet[2645]: E0213 19:39:15.993698 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:39:15.995354 containerd[1462]: time="2025-02-13T19:39:15.995311473Z" level=info msg="CreateContainer within sandbox \"b6e2847a651c4d4ff8d38225ed08ceac3494bb6c02691d0cfaebb16f7c2a68be\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:39:16.009448 containerd[1462]: time="2025-02-13T19:39:16.009391538Z" level=info msg="CreateContainer within sandbox \"b6e2847a651c4d4ff8d38225ed08ceac3494bb6c02691d0cfaebb16f7c2a68be\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e9e244aad24e5bcc4d61638449298e02014ce3cd71647b47c32f93beed5a43c6\""
Feb 13 19:39:16.009934 containerd[1462]: time="2025-02-13T19:39:16.009906930Z" level=info msg="StartContainer for \"e9e244aad24e5bcc4d61638449298e02014ce3cd71647b47c32f93beed5a43c6\""
Feb 13 19:39:16.044472 systemd[1]: Started cri-containerd-e9e244aad24e5bcc4d61638449298e02014ce3cd71647b47c32f93beed5a43c6.scope - libcontainer container e9e244aad24e5bcc4d61638449298e02014ce3cd71647b47c32f93beed5a43c6.
Feb 13 19:39:16.069435 systemd[1]: cri-containerd-e9e244aad24e5bcc4d61638449298e02014ce3cd71647b47c32f93beed5a43c6.scope: Deactivated successfully.
Feb 13 19:39:16.071881 containerd[1462]: time="2025-02-13T19:39:16.071829708Z" level=info msg="StartContainer for \"e9e244aad24e5bcc4d61638449298e02014ce3cd71647b47c32f93beed5a43c6\" returns successfully"
Feb 13 19:39:16.096001 containerd[1462]: time="2025-02-13T19:39:16.095932056Z" level=info msg="shim disconnected" id=e9e244aad24e5bcc4d61638449298e02014ce3cd71647b47c32f93beed5a43c6 namespace=k8s.io
Feb 13 19:39:16.096001 containerd[1462]: time="2025-02-13T19:39:16.095990116Z" level=warning msg="cleaning up after shim disconnected" id=e9e244aad24e5bcc4d61638449298e02014ce3cd71647b47c32f93beed5a43c6 namespace=k8s.io
Feb 13 19:39:16.096001 containerd[1462]: time="2025-02-13T19:39:16.095999775Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:39:16.285175 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9e244aad24e5bcc4d61638449298e02014ce3cd71647b47c32f93beed5a43c6-rootfs.mount: Deactivated successfully.
Feb 13 19:39:16.997040 kubelet[2645]: E0213 19:39:16.997010 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:39:16.999385 containerd[1462]: time="2025-02-13T19:39:16.999011627Z" level=info msg="CreateContainer within sandbox \"b6e2847a651c4d4ff8d38225ed08ceac3494bb6c02691d0cfaebb16f7c2a68be\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:39:17.146412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount89668574.mount: Deactivated successfully.
Feb 13 19:39:17.212877 containerd[1462]: time="2025-02-13T19:39:17.212816451Z" level=info msg="CreateContainer within sandbox \"b6e2847a651c4d4ff8d38225ed08ceac3494bb6c02691d0cfaebb16f7c2a68be\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"893cc121fafd1e5610b4eb4cd9ac59ea26f6daff97185eefe438f7df7afb92ea\""
Feb 13 19:39:17.213477 containerd[1462]: time="2025-02-13T19:39:17.213338144Z" level=info msg="StartContainer for \"893cc121fafd1e5610b4eb4cd9ac59ea26f6daff97185eefe438f7df7afb92ea\""
Feb 13 19:39:17.243400 systemd[1]: Started cri-containerd-893cc121fafd1e5610b4eb4cd9ac59ea26f6daff97185eefe438f7df7afb92ea.scope - libcontainer container 893cc121fafd1e5610b4eb4cd9ac59ea26f6daff97185eefe438f7df7afb92ea.
Feb 13 19:39:17.275486 containerd[1462]: time="2025-02-13T19:39:17.275356589Z" level=info msg="StartContainer for \"893cc121fafd1e5610b4eb4cd9ac59ea26f6daff97185eefe438f7df7afb92ea\" returns successfully"
Feb 13 19:39:17.295547 systemd[1]: run-containerd-runc-k8s.io-893cc121fafd1e5610b4eb4cd9ac59ea26f6daff97185eefe438f7df7afb92ea-runc.5K2jys.mount: Deactivated successfully.
Feb 13 19:39:17.768280 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 19:39:18.001187 kubelet[2645]: E0213 19:39:18.001129 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:39:18.014435 kubelet[2645]: I0213 19:39:18.014380 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r8p7d" podStartSLOduration=5.014361701 podStartE2EDuration="5.014361701s" podCreationTimestamp="2025-02-13 19:39:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:39:18.013852101 +0000 UTC m=+90.322423513" watchObservedRunningTime="2025-02-13 19:39:18.014361701 +0000 UTC m=+90.322933113"
Feb 13 19:39:18.776808 kubelet[2645]: E0213 19:39:18.776767 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:39:19.410731 kubelet[2645]: E0213 19:39:19.410683 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:39:20.855476 systemd-networkd[1391]: lxc_health: Link UP
Feb 13 19:39:20.863855 systemd-networkd[1391]: lxc_health: Gained carrier
Feb 13 19:39:21.411534 kubelet[2645]: E0213 19:39:21.411495 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:39:22.007436 kubelet[2645]: E0213 19:39:22.007393 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:39:22.357490 systemd-networkd[1391]: lxc_health: Gained IPv6LL
Feb 13 19:39:28.074556 sshd[4472]: Connection closed by 10.0.0.1 port 47558
Feb 13 19:39:28.074951 sshd-session[4470]: pam_unix(sshd:session): session closed for user core
Feb 13 19:39:28.078668 systemd[1]: sshd@27-10.0.0.63:22-10.0.0.1:47558.service: Deactivated successfully.
Feb 13 19:39:28.080727 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 19:39:28.081510 systemd-logind[1445]: Session 27 logged out. Waiting for processes to exit.
Feb 13 19:39:28.082510 systemd-logind[1445]: Removed session 27.
Feb 13 19:39:28.777339 kubelet[2645]: E0213 19:39:28.777284 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"