Jan 13 20:33:11.070014 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025 Jan 13 20:33:11.070043 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 13 20:33:11.070055 kernel: BIOS-provided physical RAM map: Jan 13 20:33:11.070061 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 13 20:33:11.070067 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 13 20:33:11.070073 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 13 20:33:11.070080 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 13 20:33:11.070087 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 13 20:33:11.070093 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 13 20:33:11.070101 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 13 20:33:11.070107 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 13 20:33:11.070113 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 13 20:33:11.070119 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 13 20:33:11.070126 kernel: NX (Execute Disable) protection: active Jan 13 20:33:11.070133 kernel: APIC: Static calls initialized Jan 13 20:33:11.070143 kernel: SMBIOS 2.8 present. 
Jan 13 20:33:11.070149 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 13 20:33:11.070156 kernel: Hypervisor detected: KVM Jan 13 20:33:11.070163 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 13 20:33:11.070169 kernel: kvm-clock: using sched offset of 2244631194 cycles Jan 13 20:33:11.070176 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 13 20:33:11.070183 kernel: tsc: Detected 2794.748 MHz processor Jan 13 20:33:11.070190 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 13 20:33:11.070197 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 13 20:33:11.070204 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 13 20:33:11.070213 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 13 20:33:11.070220 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 13 20:33:11.070233 kernel: Using GB pages for direct mapping Jan 13 20:33:11.070240 kernel: ACPI: Early table checksum verification disabled Jan 13 20:33:11.070247 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 13 20:33:11.070254 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:33:11.070261 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:33:11.070268 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:33:11.070277 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 13 20:33:11.070283 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:33:11.070290 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:33:11.070297 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:33:11.070311 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:33:11.070325 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Jan 13 20:33:11.070345 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Jan 13 20:33:11.070356 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 13 20:33:11.070365 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Jan 13 20:33:11.070372 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Jan 13 20:33:11.070379 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Jan 13 20:33:11.070386 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Jan 13 20:33:11.070393 kernel: No NUMA configuration found Jan 13 20:33:11.070400 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 13 20:33:11.070407 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jan 13 20:33:11.070416 kernel: Zone ranges: Jan 13 20:33:11.070427 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 13 20:33:11.070434 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 13 20:33:11.070441 kernel: Normal empty Jan 13 20:33:11.070448 kernel: Movable zone start for each node Jan 13 20:33:11.070455 kernel: Early memory node ranges Jan 13 20:33:11.070462 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 13 20:33:11.070469 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 13 20:33:11.070476 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Jan 13 20:33:11.070486 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 13 20:33:11.070493 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 13 20:33:11.070500 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 13 20:33:11.070507 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 13 20:33:11.070514 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 13 20:33:11.070521 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 13 20:33:11.070528 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 13 20:33:11.070535 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 13 20:33:11.070542 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 13 20:33:11.070551 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 13 20:33:11.070558 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 13 20:33:11.070565 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 13 20:33:11.070572 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 13 20:33:11.070579 kernel: TSC deadline timer available Jan 13 20:33:11.070586 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 13 20:33:11.070593 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 13 20:33:11.070600 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 13 20:33:11.070607 kernel: kvm-guest: setup PV sched yield Jan 13 20:33:11.070614 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 13 20:33:11.070623 kernel: Booting paravirtualized kernel on KVM Jan 13 20:33:11.070631 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 13 20:33:11.070638 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 13 20:33:11.070645 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Jan 13 20:33:11.070652 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Jan 13 20:33:11.070659 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 13 20:33:11.070666 kernel: kvm-guest: PV spinlocks enabled Jan 13 20:33:11.070673 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 13 20:33:11.070681 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 13 20:33:11.070691 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 20:33:11.070698 kernel: random: crng init done Jan 13 20:33:11.070705 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 20:33:11.070712 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 20:33:11.070719 kernel: Fallback order for Node 0: 0 Jan 13 20:33:11.070726 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Jan 13 20:33:11.070733 kernel: Policy zone: DMA32 Jan 13 20:33:11.070740 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 20:33:11.070749 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 136900K reserved, 0K cma-reserved) Jan 13 20:33:11.070757 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 13 20:33:11.070764 kernel: ftrace: allocating 37920 entries in 149 pages Jan 13 20:33:11.070771 kernel: ftrace: allocated 149 pages with 4 groups Jan 13 20:33:11.070778 kernel: Dynamic Preempt: voluntary Jan 13 20:33:11.070785 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 20:33:11.070792 kernel: rcu: RCU event tracing is enabled. Jan 13 20:33:11.070800 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 13 20:33:11.070807 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 20:33:11.070816 kernel: Rude variant of Tasks RCU enabled. Jan 13 20:33:11.070823 kernel: Tracing variant of Tasks RCU enabled. Jan 13 20:33:11.070830 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 13 20:33:11.070838 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 13 20:33:11.070845 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 13 20:33:11.070852 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 20:33:11.070859 kernel: Console: colour VGA+ 80x25 Jan 13 20:33:11.070866 kernel: printk: console [ttyS0] enabled Jan 13 20:33:11.070873 kernel: ACPI: Core revision 20230628 Jan 13 20:33:11.070882 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 13 20:33:11.070889 kernel: APIC: Switch to symmetric I/O mode setup Jan 13 20:33:11.070896 kernel: x2apic enabled Jan 13 20:33:11.070903 kernel: APIC: Switched APIC routing to: physical x2apic Jan 13 20:33:11.070910 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 13 20:33:11.070918 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 13 20:33:11.070925 kernel: kvm-guest: setup PV IPIs Jan 13 20:33:11.070953 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 13 20:33:11.070961 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 13 20:33:11.070968 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Jan 13 20:33:11.070975 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 13 20:33:11.070983 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 13 20:33:11.070992 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 13 20:33:11.071000 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 13 20:33:11.071007 kernel: Spectre V2 : Mitigation: Retpolines Jan 13 20:33:11.071015 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 13 20:33:11.071022 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 13 20:33:11.071032 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 13 20:33:11.071039 kernel: RETBleed: Mitigation: untrained return thunk Jan 13 20:33:11.071047 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 13 20:33:11.071054 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 13 20:33:11.071062 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 13 20:33:11.071070 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 13 20:33:11.071077 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 13 20:33:11.071085 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 13 20:33:11.071094 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 13 20:33:11.071102 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 13 20:33:11.071109 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 13 20:33:11.071117 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 13 20:33:11.071124 kernel: Freeing SMP alternatives memory: 32K Jan 13 20:33:11.071132 kernel: pid_max: default: 32768 minimum: 301 Jan 13 20:33:11.071139 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 20:33:11.071146 kernel: landlock: Up and running. Jan 13 20:33:11.071154 kernel: SELinux: Initializing. Jan 13 20:33:11.071163 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 20:33:11.071171 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 20:33:11.071178 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 13 20:33:11.071186 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 20:33:11.071207 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 20:33:11.071221 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 20:33:11.071236 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 13 20:33:11.071243 kernel: ... version: 0 Jan 13 20:33:11.071250 kernel: ... bit width: 48 Jan 13 20:33:11.071260 kernel: ... generic registers: 6 Jan 13 20:33:11.071268 kernel: ... value mask: 0000ffffffffffff Jan 13 20:33:11.071275 kernel: ... max period: 00007fffffffffff Jan 13 20:33:11.071282 kernel: ... fixed-purpose events: 0 Jan 13 20:33:11.071293 kernel: ... 
event mask: 000000000000003f Jan 13 20:33:11.071300 kernel: signal: max sigframe size: 1776 Jan 13 20:33:11.071308 kernel: rcu: Hierarchical SRCU implementation. Jan 13 20:33:11.071315 kernel: rcu: Max phase no-delay instances is 400. Jan 13 20:33:11.071323 kernel: smp: Bringing up secondary CPUs ... Jan 13 20:33:11.071332 kernel: smpboot: x86: Booting SMP configuration: Jan 13 20:33:11.071340 kernel: .... node #0, CPUs: #1 #2 #3 Jan 13 20:33:11.071347 kernel: smp: Brought up 1 node, 4 CPUs Jan 13 20:33:11.071354 kernel: smpboot: Max logical packages: 1 Jan 13 20:33:11.071362 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jan 13 20:33:11.071369 kernel: devtmpfs: initialized Jan 13 20:33:11.071376 kernel: x86/mm: Memory block size: 128MB Jan 13 20:33:11.071384 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 20:33:11.071391 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 13 20:33:11.071401 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 20:33:11.071409 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 20:33:11.071416 kernel: audit: initializing netlink subsys (disabled) Jan 13 20:33:11.071424 kernel: audit: type=2000 audit(1736800390.670:1): state=initialized audit_enabled=0 res=1 Jan 13 20:33:11.071431 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 20:33:11.071438 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 13 20:33:11.071446 kernel: cpuidle: using governor menu Jan 13 20:33:11.071453 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 20:33:11.071461 kernel: dca service started, version 1.12.1 Jan 13 20:33:11.071470 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 13 20:33:11.071478 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 13 20:33:11.071485 kernel: PCI: Using configuration type 1 for base access Jan 13 20:33:11.071492 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 13 20:33:11.071500 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 20:33:11.071508 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 20:33:11.071515 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 20:33:11.071522 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 20:33:11.071530 kernel: ACPI: Added _OSI(Module Device) Jan 13 20:33:11.071539 kernel: ACPI: Added _OSI(Processor Device) Jan 13 20:33:11.071547 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 20:33:11.071554 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 20:33:11.071561 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 20:33:11.071569 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 13 20:33:11.071576 kernel: ACPI: Interpreter enabled Jan 13 20:33:11.071583 kernel: ACPI: PM: (supports S0 S3 S5) Jan 13 20:33:11.071591 kernel: ACPI: Using IOAPIC for interrupt routing Jan 13 20:33:11.071598 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 13 20:33:11.071608 kernel: PCI: Using E820 reservations for host bridge windows Jan 13 20:33:11.071616 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 13 20:33:11.071623 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 13 20:33:11.071796 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 20:33:11.071927 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 13 20:33:11.072060 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 13 20:33:11.072071 kernel: PCI host bridge to bus 0000:00 Jan 13 20:33:11.072196 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 13 20:33:11.072315 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 13 20:33:11.072424 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 13 20:33:11.072531 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 13 20:33:11.072637 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 13 20:33:11.072744 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 13 20:33:11.072852 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 13 20:33:11.073049 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 13 20:33:11.073201 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 13 20:33:11.073351 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jan 13 20:33:11.073491 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jan 13 20:33:11.073630 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jan 13 20:33:11.073768 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 13 20:33:11.073918 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 13 20:33:11.074078 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 13 20:33:11.074218 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jan 13 20:33:11.074368 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jan 13 20:33:11.074516 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 13 20:33:11.074657 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jan 13 20:33:11.074788 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jan 13 
20:33:11.074916 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jan 13 20:33:11.075094 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 13 20:33:11.075215 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jan 13 20:33:11.075344 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jan 13 20:33:11.075463 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 13 20:33:11.075582 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jan 13 20:33:11.075714 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 13 20:33:11.075839 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 13 20:33:11.075982 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 13 20:33:11.076103 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jan 13 20:33:11.076220 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jan 13 20:33:11.076357 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 13 20:33:11.076476 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 13 20:33:11.076486 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 13 20:33:11.076498 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 13 20:33:11.076505 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 13 20:33:11.076513 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 13 20:33:11.076520 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 13 20:33:11.076528 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 13 20:33:11.076535 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 13 20:33:11.076543 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 13 20:33:11.076550 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 13 20:33:11.076558 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 13 20:33:11.076567 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 13 20:33:11.076575 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 13 20:33:11.076582 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 13 20:33:11.076590 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 13 20:33:11.076597 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 13 20:33:11.076604 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 13 20:33:11.076612 kernel: iommu: Default domain type: Translated Jan 13 20:33:11.076620 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 13 20:33:11.076627 kernel: PCI: Using ACPI for IRQ routing Jan 13 20:33:11.076637 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 13 20:33:11.076644 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 13 20:33:11.076652 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 13 20:33:11.076770 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 13 20:33:11.076888 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 13 20:33:11.077032 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 13 20:33:11.077042 kernel: vgaarb: loaded Jan 13 20:33:11.077050 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 13 20:33:11.077061 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 13 20:33:11.077069 kernel: clocksource: Switched to clocksource kvm-clock Jan 13 20:33:11.077077 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 
20:33:11.077084 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 20:33:11.077092 kernel: pnp: PnP ACPI init Jan 13 20:33:11.077218 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 13 20:33:11.077239 kernel: pnp: PnP ACPI: found 6 devices Jan 13 20:33:11.077247 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 13 20:33:11.077258 kernel: NET: Registered PF_INET protocol family Jan 13 20:33:11.077266 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 20:33:11.077273 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 13 20:33:11.077281 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 20:33:11.077288 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 20:33:11.077296 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 13 20:33:11.077304 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 13 20:33:11.077311 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 20:33:11.077319 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 20:33:11.077328 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 20:33:11.077336 kernel: NET: Registered PF_XDP protocol family Jan 13 20:33:11.077448 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 13 20:33:11.077557 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 13 20:33:11.077665 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 13 20:33:11.077774 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 13 20:33:11.077881 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 13 20:33:11.078029 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 13 20:33:11.078043 kernel: PCI: CLS 0 bytes, default 64 Jan 13 20:33:11.078051 kernel: Initialise system trusted keyrings Jan 13 20:33:11.078059 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 13 20:33:11.078066 kernel: Key type asymmetric registered Jan 13 20:33:11.078074 kernel: Asymmetric key parser 'x509' registered Jan 13 20:33:11.078081 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 20:33:11.078089 kernel: io scheduler mq-deadline registered Jan 13 20:33:11.078096 kernel: io scheduler kyber registered Jan 13 20:33:11.078104 kernel: io scheduler bfq registered Jan 13 20:33:11.078114 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 13 20:33:11.078122 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 13 20:33:11.078130 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 13 20:33:11.078137 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 13 20:33:11.078145 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 20:33:11.078152 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 20:33:11.078160 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 13 20:33:11.078167 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 13 20:33:11.078175 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 13 20:33:11.078307 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 13 20:33:11.078419 kernel: rtc_cmos 00:04: registered as rtc0 Jan 13 20:33:11.078429 kernel: input: AT Translated Set 2 keyboard as 
/devices/platform/i8042/serio0/input/input0 Jan 13 20:33:11.078538 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T20:33:10 UTC (1736800390) Jan 13 20:33:11.078655 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 13 20:33:11.078666 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 13 20:33:11.078673 kernel: NET: Registered PF_INET6 protocol family Jan 13 20:33:11.078680 kernel: Segment Routing with IPv6 Jan 13 20:33:11.078692 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 20:33:11.078700 kernel: NET: Registered PF_PACKET protocol family Jan 13 20:33:11.078707 kernel: Key type dns_resolver registered Jan 13 20:33:11.078714 kernel: IPI shorthand broadcast: enabled Jan 13 20:33:11.078722 kernel: sched_clock: Marking stable (542003391, 105428207)->(695945326, -48513728) Jan 13 20:33:11.078730 kernel: registered taskstats version 1 Jan 13 20:33:11.078737 kernel: Loading compiled-in X.509 certificates Jan 13 20:33:11.078745 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344' Jan 13 20:33:11.078752 kernel: Key type .fscrypt registered Jan 13 20:33:11.078762 kernel: Key type fscrypt-provisioning registered Jan 13 20:33:11.078769 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 13 20:33:11.078777 kernel: ima: Allocated hash algorithm: sha1 Jan 13 20:33:11.078784 kernel: ima: No architecture policies found Jan 13 20:33:11.078792 kernel: clk: Disabling unused clocks Jan 13 20:33:11.078799 kernel: Freeing unused kernel image (initmem) memory: 42976K Jan 13 20:33:11.078806 kernel: Write protecting the kernel read-only data: 36864k Jan 13 20:33:11.078814 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 13 20:33:11.078821 kernel: Run /init as init process Jan 13 20:33:11.078843 kernel: with arguments: Jan 13 20:33:11.078850 kernel: /init Jan 13 20:33:11.078858 kernel: with environment: Jan 13 20:33:11.078865 kernel: HOME=/ Jan 13 20:33:11.078872 kernel: TERM=linux Jan 13 20:33:11.078880 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 20:33:11.078892 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:33:11.078902 systemd[1]: Detected virtualization kvm. Jan 13 20:33:11.078913 systemd[1]: Detected architecture x86-64. Jan 13 20:33:11.078921 systemd[1]: Running in initrd. Jan 13 20:33:11.078928 systemd[1]: No hostname configured, using default hostname. Jan 13 20:33:11.078950 systemd[1]: Hostname set to . Jan 13 20:33:11.078959 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:33:11.078968 systemd[1]: Queued start job for default target initrd.target. Jan 13 20:33:11.078976 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:33:11.078984 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:33:11.078995 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 20:33:11.079015 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 13 20:33:11.079025 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 20:33:11.079034 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 20:33:11.079044 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 20:33:11.079055 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 20:33:11.079063 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:33:11.079071 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:33:11.079080 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:33:11.079088 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:33:11.079096 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:33:11.079104 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:33:11.079112 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:33:11.079123 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:33:11.079131 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 20:33:11.079140 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:33:11.079148 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:33:11.079156 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:33:11.079165 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:33:11.079173 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:33:11.079181 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 20:33:11.079189 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:33:11.079200 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 20:33:11.079209 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 20:33:11.079217 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:33:11.079232 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:33:11.079240 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:33:11.079251 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 20:33:11.079259 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:33:11.079268 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 20:33:11.079296 systemd-journald[194]: Collecting audit messages is disabled. Jan 13 20:33:11.079318 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:33:11.079328 systemd-journald[194]: Journal started Jan 13 20:33:11.079348 systemd-journald[194]: Runtime Journal (/run/log/journal/4ed249d461fb44959f1d6bc33c9f02bb) is 6.0M, max 48.4M, 42.3M free. Jan 13 20:33:11.068960 systemd-modules-load[195]: Inserted module 'overlay' Jan 13 20:33:11.110211 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:33:11.110235 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jan 13 20:33:11.110247 kernel: Bridge firewalling registered Jan 13 20:33:11.096603 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 13 20:33:11.109146 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:33:11.110786 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:33:11.120098 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:33:11.121270 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:33:11.125657 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:33:11.132440 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:33:11.136696 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:33:11.139442 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:33:11.141327 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:33:11.144085 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:33:11.153074 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 20:33:11.169318 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:33:11.169989 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:33:11.179015 dracut-cmdline[226]: dracut-dracut-053 Jan 13 20:33:11.182120 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 13 20:33:11.211179 systemd-resolved[228]: Positive Trust Anchors: Jan 13 20:33:11.211195 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:33:11.211234 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:33:11.213620 systemd-resolved[228]: Defaulting to hostname 'linux'. Jan 13 20:33:11.214662 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:33:11.262958 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:33:11.324972 kernel: SCSI subsystem initialized Jan 13 20:33:11.335982 kernel: Loading iSCSI transport class v2.0-870. Jan 13 20:33:11.354978 kernel: iscsi: registered transport (tcp) Jan 13 20:33:11.375969 kernel: iscsi: registered transport (qla4xxx) Jan 13 20:33:11.376016 kernel: QLogic iSCSI HBA Driver Jan 13 20:33:11.420956 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 13 20:33:11.432162 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 20:33:11.485238 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 20:33:11.485317 kernel: device-mapper: uevent: version 1.0.3 Jan 13 20:33:11.485334 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 20:33:11.532984 kernel: raid6: avx2x4 gen() 30285 MB/s Jan 13 20:33:11.591977 kernel: raid6: avx2x2 gen() 30517 MB/s Jan 13 20:33:11.609246 kernel: raid6: avx2x1 gen() 24745 MB/s Jan 13 20:33:11.609282 kernel: raid6: using algorithm avx2x2 gen() 30517 MB/s Jan 13 20:33:11.627052 kernel: raid6: .... xor() 19530 MB/s, rmw enabled Jan 13 20:33:11.627073 kernel: raid6: using avx2x2 recovery algorithm Jan 13 20:33:11.646962 kernel: xor: automatically using best checksumming function avx Jan 13 20:33:11.809976 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 20:33:11.822494 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:33:11.830118 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:33:11.841935 systemd-udevd[413]: Using default interface naming scheme 'v255'. Jan 13 20:33:11.846629 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:33:11.856088 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 20:33:11.871096 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation Jan 13 20:33:11.908297 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:33:11.923104 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:33:11.991648 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:33:12.005115 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 20:33:12.014014 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 13 20:33:12.040517 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 13 20:33:12.040704 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 20:33:12.040721 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 20:33:12.040736 kernel: GPT:9289727 != 19775487 Jan 13 20:33:12.040750 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 20:33:12.040764 kernel: GPT:9289727 != 19775487 Jan 13 20:33:12.040785 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 20:33:12.040799 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:33:12.019953 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 20:33:12.022125 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:33:12.025123 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:33:12.027123 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:33:12.039139 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 20:33:12.058097 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:33:12.060920 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 20:33:12.060951 kernel: AES CTR mode by8 optimization enabled Jan 13 20:33:12.060961 kernel: libata version 3.00 loaded. 
Jan 13 20:33:12.065960 kernel: ahci 0000:00:1f.2: version 3.0 Jan 13 20:33:12.089894 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 13 20:33:12.089911 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 13 20:33:12.090085 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 13 20:33:12.090238 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (459) Jan 13 20:33:12.090251 kernel: scsi host0: ahci Jan 13 20:33:12.090403 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (469) Jan 13 20:33:12.090414 kernel: scsi host1: ahci Jan 13 20:33:12.090558 kernel: scsi host2: ahci Jan 13 20:33:12.090719 kernel: scsi host3: ahci Jan 13 20:33:12.090880 kernel: scsi host4: ahci Jan 13 20:33:12.091686 kernel: scsi host5: ahci Jan 13 20:33:12.091831 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 13 20:33:12.091842 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 13 20:33:12.091852 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 13 20:33:12.091862 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 13 20:33:12.091872 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 13 20:33:12.091882 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 13 20:33:12.077852 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 13 20:33:12.089039 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 13 20:33:12.107894 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 13 20:33:12.108255 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 13 20:33:12.122813 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:33:12.135090 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 20:33:12.135559 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:33:12.135623 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:33:12.136288 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:33:12.136588 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:33:12.136644 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:33:12.136963 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:33:12.146645 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:33:12.162304 disk-uuid[550]: Primary Header is updated. Jan 13 20:33:12.162304 disk-uuid[550]: Secondary Entries is updated. Jan 13 20:33:12.162304 disk-uuid[550]: Secondary Header is updated. Jan 13 20:33:12.205580 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:33:12.205613 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:33:12.209218 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:33:12.225187 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 13 20:33:12.241756 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:33:12.403955 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 13 20:33:12.404041 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 13 20:33:12.404052 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 13 20:33:12.404062 kernel: ata3.00: applying bridge limits Jan 13 20:33:12.404072 kernel: ata3.00: configured for UDMA/100 Jan 13 20:33:12.404082 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 13 20:33:12.404974 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 13 20:33:12.405980 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 13 20:33:12.406969 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 13 20:33:12.407973 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 13 20:33:12.452972 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 13 20:33:12.466660 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 13 20:33:12.466680 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 13 20:33:13.196977 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:33:13.197457 disk-uuid[552]: The operation has completed successfully. Jan 13 20:33:13.228686 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 20:33:13.228803 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 20:33:13.255140 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 20:33:13.260725 sh[590]: Success Jan 13 20:33:13.274971 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 13 20:33:13.305820 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 20:33:13.320820 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 20:33:13.323952 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 20:33:13.335460 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb Jan 13 20:33:13.335518 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:33:13.335534 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 20:33:13.336520 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 20:33:13.337280 kernel: BTRFS info (device dm-0): using free space tree Jan 13 20:33:13.341690 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 20:33:13.344095 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 20:33:13.358227 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 20:33:13.361183 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 20:33:13.370131 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:33:13.370192 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:33:13.370203 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:33:13.373971 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:33:13.383720 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jan 13 20:33:13.385562 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:33:13.413274 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 20:33:13.421111 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 20:33:13.472823 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:33:13.474330 ignition[714]: Ignition 2.20.0 Jan 13 20:33:13.474336 ignition[714]: Stage: fetch-offline Jan 13 20:33:13.474368 ignition[714]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:33:13.474377 ignition[714]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:33:13.474457 ignition[714]: parsed url from cmdline: "" Jan 13 20:33:13.474461 ignition[714]: no config URL provided Jan 13 20:33:13.474466 ignition[714]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:33:13.474474 ignition[714]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:33:13.480194 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:33:13.474502 ignition[714]: op(1): [started] loading QEMU firmware config module Jan 13 20:33:13.474508 ignition[714]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 13 20:33:13.489752 ignition[714]: op(1): [finished] loading QEMU firmware config module Jan 13 20:33:13.504288 systemd-networkd[779]: lo: Link UP Jan 13 20:33:13.504300 systemd-networkd[779]: lo: Gained carrier Jan 13 20:33:13.505862 systemd-networkd[779]: Enumeration completed Jan 13 20:33:13.506278 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:33:13.506281 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:33:13.506454 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:33:13.507092 systemd-networkd[779]: eth0: Link UP Jan 13 20:33:13.507096 systemd-networkd[779]: eth0: Gained carrier Jan 13 20:33:13.507102 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:33:13.518972 systemd[1]: Reached target network.target - Network. Jan 13 20:33:13.531003 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:33:13.543044 ignition[714]: parsing config with SHA512: fc4bdcf3f3eb9b21f6b6342fb804eff645dc7b7d5e242728b15460f3c7d3b0861cfc270d255a8cf12121f5e5b3cb399f568167c10c81080552b0a8baa9813ff3 Jan 13 20:33:13.546986 unknown[714]: fetched base config from "system" Jan 13 20:33:13.547611 unknown[714]: fetched user config from "qemu" Jan 13 20:33:13.548493 ignition[714]: fetch-offline: fetch-offline passed Jan 13 20:33:13.548598 ignition[714]: Ignition finished successfully Jan 13 20:33:13.551090 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:33:13.554635 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 13 20:33:13.567100 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 13 20:33:13.582355 ignition[783]: Ignition 2.20.0 Jan 13 20:33:13.582366 ignition[783]: Stage: kargs Jan 13 20:33:13.582541 ignition[783]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:33:13.582552 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:33:13.583384 ignition[783]: kargs: kargs passed Jan 13 20:33:13.583432 ignition[783]: Ignition finished successfully Jan 13 20:33:13.590395 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 20:33:13.605101 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 20:33:13.616581 ignition[792]: Ignition 2.20.0 Jan 13 20:33:13.616592 ignition[792]: Stage: disks Jan 13 20:33:13.616753 ignition[792]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:33:13.616764 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:33:13.617585 ignition[792]: disks: disks passed Jan 13 20:33:13.619873 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 20:33:13.617626 ignition[792]: Ignition finished successfully Jan 13 20:33:13.621597 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 20:33:13.623128 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:33:13.625258 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:33:13.626264 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:33:13.626642 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:33:13.634063 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 20:33:13.655205 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 20:33:13.682622 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 20:33:13.696043 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 20:33:13.781007 kernel: EXT4-fs (vda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none. Jan 13 20:33:13.781080 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 20:33:13.783257 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 20:33:13.797010 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:33:13.799481 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 20:33:13.801921 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 20:33:13.801976 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 20:33:13.801998 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:33:13.808980 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (810) Jan 13 20:33:13.809043 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:33:13.810819 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:33:13.810842 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:33:13.811876 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 20:33:13.814973 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:33:13.815144 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 13 20:33:13.816763 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 20:33:13.853040 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 20:33:13.857649 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory Jan 13 20:33:13.861952 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 20:33:13.866308 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 20:33:13.957528 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 20:33:13.969054 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 20:33:13.971031 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 20:33:13.976966 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:33:13.998483 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 20:33:14.011305 ignition[926]: INFO : Ignition 2.20.0 Jan 13 20:33:14.011305 ignition[926]: INFO : Stage: mount Jan 13 20:33:14.013230 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:33:14.013230 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:33:14.013230 ignition[926]: INFO : mount: mount passed Jan 13 20:33:14.013230 ignition[926]: INFO : Ignition finished successfully Jan 13 20:33:14.019313 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 20:33:14.033083 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 20:33:14.334607 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 20:33:14.346186 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:33:14.353884 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (939) Jan 13 20:33:14.353919 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:33:14.353930 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:33:14.354756 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:33:14.357966 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:33:14.359800 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 20:33:14.389969 ignition[956]: INFO : Ignition 2.20.0 Jan 13 20:33:14.389969 ignition[956]: INFO : Stage: files Jan 13 20:33:14.391803 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:33:14.391803 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:33:14.391803 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Jan 13 20:33:14.396023 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 20:33:14.396023 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 20:33:14.396023 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 20:33:14.396023 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 20:33:14.396023 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 20:33:14.395223 unknown[956]: wrote ssh authorized keys file for user: core Jan 13 20:33:14.404176 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 13 20:33:14.404176 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 13 20:33:14.404176 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 20:33:14.404176 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 20:33:14.433399 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 13 20:33:14.585810 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 20:33:14.585810 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 20:33:14.591186 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 13 20:33:15.075676 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 13 20:33:15.177818 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 20:33:15.177818 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 13 20:33:15.181899 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 20:33:15.181899 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:33:15.181899 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:33:15.181899 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:33:15.181899 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:33:15.181899 ignition[956]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:33:15.181899 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:33:15.181899 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:33:15.181899 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:33:15.181899 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:33:15.181899 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:33:15.181899 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:33:15.181899 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 13 20:33:15.355157 systemd-networkd[779]: eth0: Gained IPv6LL Jan 13 20:33:15.495732 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 13 20:33:15.762216 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:33:15.762216 ignition[956]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 13 20:33:15.765845 ignition[956]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 13 20:33:15.765845 ignition[956]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 13 20:33:15.765845 ignition[956]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 13 20:33:15.765845 ignition[956]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 13 20:33:15.765845 ignition[956]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:33:15.765845 ignition[956]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:33:15.765845 ignition[956]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 13 20:33:15.765845 ignition[956]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Jan 13 20:33:15.765845 ignition[956]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 20:33:15.765845 ignition[956]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 20:33:15.765845 ignition[956]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Jan 13 20:33:15.765845 ignition[956]: 
INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Jan 13 20:33:15.803321 ignition[956]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 20:33:15.807700 ignition[956]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 20:33:15.809250 ignition[956]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Jan 13 20:33:15.809250 ignition[956]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Jan 13 20:33:15.809250 ignition[956]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 20:33:15.809250 ignition[956]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:33:15.809250 ignition[956]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:33:15.809250 ignition[956]: INFO : files: files passed Jan 13 20:33:15.809250 ignition[956]: INFO : Ignition finished successfully Jan 13 20:33:15.819974 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 20:33:15.831060 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 20:33:15.832169 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 20:33:15.839193 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 20:33:15.839306 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 20:33:15.842863 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory Jan 13 20:33:15.844423 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:33:15.844423 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:33:15.849072 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:33:15.846483 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:33:15.849250 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:33:15.859061 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:33:15.882758 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:33:15.882896 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:33:15.886137 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:33:15.886547 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:33:15.886935 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:33:15.887640 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:33:15.906056 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:33:15.907782 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:33:15.921718 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
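The files-stage operations logged above (creating the "core" user and its SSH key, fetching the helm, cilium and kubernetes artifacts, linking /etc/extensions/kubernetes.raw, writing the containerd drop-in, and presetting prepare-helm.service on and coreos-metadata.service off) are the typical result of a Butane/Ignition provisioning config. The actual config is not included in this log; the following is only a minimal sketch in Flatcar's Butane YAML, with URLs and paths taken from the log entries, all file and unit bodies left as hypothetical placeholders, and several of the smaller written files (install.sh, nginx.yaml, nfs-pod.yaml, nfs-pvc.yaml, /etc/flatcar-cgroupv1) omitted.

variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...  # placeholder; the actual key is not shown in the log
storage:
  files:
    - path: /opt/helm-v3.13.2-linux-amd64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
    - path: /opt/bin/cilium.tar.gz
      contents:
        source: https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz
    - path: /opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw
      contents:
        source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw
    - path: /etc/flatcar/update.conf
      contents:
        inline: |
          # placeholder; update group/server settings go here
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw
systemd:
  units:
    - name: containerd.service
      dropins:
        - name: 10-use-cgroupfs.conf
          contents: |
            # placeholder; the drop-in body is not shown in the log
            # (typically it points containerd at a cgroupfs-driver configuration)
    - name: prepare-helm.service
      enabled: true
      contents: |
        # hypothetical body; the real unit is not shown in the log
        [Unit]
        Description=Unpack helm to /opt/bin
        [Service]
        Type=oneshot
        RemainAfterExit=true
        ExecStart=/usr/bin/tar -C /opt/bin -xzf /opt/helm-v3.13.2-linux-amd64.tar.gz --strip-components=1 linux-amd64/helm
        [Install]
        WantedBy=multi-user.target
    - name: coreos-metadata.service
      enabled: false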
Jan 13 20:33:15.922306 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:33:15.924690 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:33:15.927282 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:33:15.927414 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:33:15.930780 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:33:15.933176 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:33:15.933748 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:33:15.936373 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:33:15.938617 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:33:15.941012 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:33:15.943405 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:33:15.945561 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:33:15.948302 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:33:15.950408 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:33:15.952242 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:33:15.952372 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:33:15.955619 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:33:15.956334 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:33:15.958840 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:33:15.961540 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:33:15.964022 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:33:15.964160 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:33:15.966929 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:33:15.967068 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:33:15.967650 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:33:15.970430 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:33:15.974990 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:33:15.975481 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:33:15.978043 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:33:15.979674 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:33:15.979763 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:33:15.981379 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:33:15.981464 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:33:15.983228 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:33:15.983332 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:33:15.984841 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:33:15.984949 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:33:15.999072 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 13 20:33:15.999490 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:33:15.999597 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:33:16.000498 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:33:16.003228 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:33:16.003369 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:33:16.005345 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:33:16.005442 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:33:16.015052 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:33:16.016120 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:33:16.022199 ignition[1011]: INFO : Ignition 2.20.0 Jan 13 20:33:16.022199 ignition[1011]: INFO : Stage: umount Jan 13 20:33:16.023919 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:33:16.023919 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:33:16.023919 ignition[1011]: INFO : umount: umount passed Jan 13 20:33:16.023919 ignition[1011]: INFO : Ignition finished successfully Jan 13 20:33:16.028890 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:33:16.029434 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:33:16.029540 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:33:16.031079 systemd[1]: Stopped target network.target - Network. Jan 13 20:33:16.032586 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:33:16.032654 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:33:16.034487 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:33:16.034539 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:33:16.036264 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:33:16.036313 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:33:16.038209 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:33:16.038259 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:33:16.040312 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:33:16.041959 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:33:16.046507 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:33:16.046667 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:33:16.049309 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:33:16.049376 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:33:16.051990 systemd-networkd[779]: eth0: DHCPv6 lease lost Jan 13 20:33:16.058986 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:33:16.060979 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:33:16.063632 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:33:16.064934 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:33:16.077036 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:33:16.079065 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jan 13 20:33:16.079145 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:33:16.082955 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:33:16.083972 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:33:16.086434 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:33:16.087559 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:33:16.090049 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:33:16.099884 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:33:16.100060 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:33:16.117640 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:33:16.117818 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:33:16.130183 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:33:16.130241 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:33:16.130544 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:33:16.130591 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:33:16.133553 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:33:16.133613 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:33:16.137190 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:33:16.137252 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:33:16.138176 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:33:16.138231 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:33:16.157119 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:33:16.157543 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:33:16.157602 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:33:16.159695 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:33:16.159748 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:33:16.164207 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:33:16.164354 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:33:16.241152 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:33:16.241290 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:33:16.242142 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:33:16.242423 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:33:16.242473 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:33:16.262074 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:33:16.268548 systemd[1]: Switching root. Jan 13 20:33:16.299060 systemd-journald[194]: Journal stopped Jan 13 20:33:17.602615 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Jan 13 20:33:17.602673 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:33:17.602690 kernel: SELinux: policy capability open_perms=1 Jan 13 20:33:17.602702 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:33:17.602713 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:33:17.602727 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:33:17.602738 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:33:17.602749 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:33:17.602760 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:33:17.602775 kernel: audit: type=1403 audit(1736800396.861:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:33:17.602791 systemd[1]: Successfully loaded SELinux policy in 40.504ms. Jan 13 20:33:17.602809 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.060ms. Jan 13 20:33:17.602822 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:33:17.602838 systemd[1]: Detected virtualization kvm. Jan 13 20:33:17.602853 systemd[1]: Detected architecture x86-64. Jan 13 20:33:17.602865 systemd[1]: Detected first boot. Jan 13 20:33:17.602877 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:33:17.602889 zram_generator::config[1078]: No configuration found. Jan 13 20:33:17.602906 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:33:17.602918 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:33:17.602932 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 20:33:17.602955 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:33:17.602971 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:33:17.602982 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:33:17.602994 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:33:17.603006 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:33:17.603018 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:33:17.603030 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:33:17.603042 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:33:17.603061 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:33:17.603075 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:33:17.603091 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:33:17.603103 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:33:17.603115 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 20:33:17.603127 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:33:17.603140 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Jan 13 20:33:17.603152 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:33:17.603164 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:33:17.603175 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:33:17.603190 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:33:17.603202 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:33:17.603215 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:33:17.603226 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:33:17.603238 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:33:17.603250 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 20:33:17.603262 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:33:17.603273 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:33:17.603285 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:33:17.603299 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:33:17.603311 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:33:17.603323 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:33:17.603335 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:33:17.603346 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:33:17.603358 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:33:17.603370 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:33:17.603383 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:33:17.603397 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:33:17.603409 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:33:17.603421 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:33:17.603433 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:33:17.603445 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:33:17.603457 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:33:17.603470 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:33:17.603482 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:33:17.603493 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:33:17.603507 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:33:17.603519 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:33:17.603532 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 13 20:33:17.603544 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Jan 13 20:33:17.603556 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:33:17.603567 kernel: fuse: init (API version 7.39) Jan 13 20:33:17.603578 kernel: loop: module loaded Jan 13 20:33:17.603589 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:33:17.603604 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:33:17.603616 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:33:17.603639 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:33:17.603652 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:33:17.603680 systemd-journald[1166]: Collecting audit messages is disabled. Jan 13 20:33:17.603704 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:33:17.603716 kernel: ACPI: bus type drm_connector registered Jan 13 20:33:17.603731 systemd-journald[1166]: Journal started Jan 13 20:33:17.603752 systemd-journald[1166]: Runtime Journal (/run/log/journal/4ed249d461fb44959f1d6bc33c9f02bb) is 6.0M, max 48.4M, 42.3M free. Jan 13 20:33:17.608331 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:33:17.610306 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:33:17.611772 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:33:17.612882 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:33:17.614107 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:33:17.615341 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:33:17.616684 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:33:17.618252 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:33:17.619807 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:33:17.620040 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:33:17.621542 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:33:17.621744 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:33:17.623204 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:33:17.623406 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:33:17.624788 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:33:17.625006 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:33:17.626543 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:33:17.626742 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:33:17.628200 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:33:17.628455 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:33:17.630266 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:33:17.632611 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:33:17.634686 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jan 13 20:33:17.647796 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:33:17.655009 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:33:17.657641 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:33:17.659059 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:33:17.661533 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:33:17.665136 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:33:17.666718 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:33:17.672115 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:33:17.672672 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:33:17.674912 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:33:17.682618 systemd-journald[1166]: Time spent on flushing to /var/log/journal/4ed249d461fb44959f1d6bc33c9f02bb is 24.152ms for 942 entries. Jan 13 20:33:17.682618 systemd-journald[1166]: System Journal (/var/log/journal/4ed249d461fb44959f1d6bc33c9f02bb) is 8.0M, max 195.6M, 187.6M free. Jan 13 20:33:17.720337 systemd-journald[1166]: Received client request to flush runtime journal. Jan 13 20:33:17.684204 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:33:17.687404 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:33:17.689037 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:33:17.693511 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:33:17.703475 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:33:17.712932 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:33:17.716921 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:33:17.719337 udevadm[1218]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 20:33:17.722532 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:33:17.724822 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:33:17.726501 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Jan 13 20:33:17.726519 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Jan 13 20:33:17.733187 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:33:17.743087 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:33:17.766189 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:33:17.776209 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:33:17.790746 systemd-tmpfiles[1234]: ACLs are not supported, ignoring. Jan 13 20:33:17.790765 systemd-tmpfiles[1234]: ACLs are not supported, ignoring. 
Jan 13 20:33:17.796143 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:33:18.200125 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:33:18.215114 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:33:18.238711 systemd-udevd[1240]: Using default interface naming scheme 'v255'. Jan 13 20:33:18.254385 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:33:18.269486 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:33:18.275120 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:33:18.292016 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1245) Jan 13 20:33:18.333466 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:33:18.335653 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 13 20:33:18.343828 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:33:18.355109 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 13 20:33:18.361836 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 13 20:33:18.362787 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 13 20:33:18.364356 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 13 20:33:18.364681 kernel: ACPI: button: Power Button [PWRF] Jan 13 20:33:18.381981 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 13 20:33:18.449998 systemd-networkd[1246]: lo: Link UP Jan 13 20:33:18.450016 systemd-networkd[1246]: lo: Gained carrier Jan 13 20:33:18.452271 systemd-networkd[1246]: Enumeration completed Jan 13 20:33:18.452746 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:33:18.452751 systemd-networkd[1246]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:33:18.453638 systemd-networkd[1246]: eth0: Link UP Jan 13 20:33:18.453650 systemd-networkd[1246]: eth0: Gained carrier Jan 13 20:33:18.453664 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:33:18.528774 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:33:18.532466 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:33:18.557154 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:33:18.621451 systemd-networkd[1246]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:33:18.675956 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 20:33:18.702272 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 13 20:33:18.709751 kernel: kvm_amd: TSC scaling supported Jan 13 20:33:18.709801 kernel: kvm_amd: Nested Virtualization enabled Jan 13 20:33:18.709818 kernel: kvm_amd: Nested Paging enabled Jan 13 20:33:18.709833 kernel: kvm_amd: LBR virtualization supported Jan 13 20:33:18.711662 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 13 20:33:18.714378 kernel: kvm_amd: Virtual GIF supported Jan 13 20:33:18.811097 kernel: EDAC MC: Ver: 3.0.0 Jan 13 20:33:18.868559 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:33:18.902276 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:33:18.946266 lvm[1286]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:33:18.982722 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:33:18.985544 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:33:19.007268 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:33:19.039044 lvm[1289]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:33:19.084106 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:33:19.088515 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:33:19.092327 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:33:19.092362 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:33:19.102910 systemd[1]: Reached target machines.target - Containers. Jan 13 20:33:19.113453 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:33:19.132265 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:33:19.141368 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:33:19.151434 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:33:19.158265 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:33:19.167300 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:33:19.174490 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:33:19.177517 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:33:19.202354 kernel: loop0: detected capacity change from 0 to 140992 Jan 13 20:33:19.208501 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:33:19.252525 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:33:19.260269 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Jan 13 20:33:19.288075 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:33:19.323993 kernel: loop1: detected capacity change from 0 to 138184 Jan 13 20:33:19.394978 kernel: loop2: detected capacity change from 0 to 211296 Jan 13 20:33:19.470130 kernel: loop3: detected capacity change from 0 to 140992 Jan 13 20:33:19.545609 kernel: loop4: detected capacity change from 0 to 138184 Jan 13 20:33:19.626969 kernel: loop5: detected capacity change from 0 to 211296 Jan 13 20:33:19.652526 (sd-merge)[1310]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 20:33:19.653298 (sd-merge)[1310]: Merged extensions into '/usr'. Jan 13 20:33:19.663375 systemd[1]: Reloading requested from client PID 1297 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:33:19.663391 systemd[1]: Reloading... Jan 13 20:33:19.760983 zram_generator::config[1341]: No configuration found. Jan 13 20:33:19.842120 ldconfig[1294]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:33:19.899153 systemd-networkd[1246]: eth0: Gained IPv6LL Jan 13 20:33:19.914709 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:33:19.977851 systemd[1]: Reloading finished in 313 ms. Jan 13 20:33:19.999064 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:33:20.001076 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:33:20.002734 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:33:20.020106 systemd[1]: Starting ensure-sysext.service... Jan 13 20:33:20.022398 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:33:20.026167 systemd[1]: Reloading requested from client PID 1384 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:33:20.026183 systemd[1]: Reloading... Jan 13 20:33:20.044621 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:33:20.045188 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:33:20.046220 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:33:20.046564 systemd-tmpfiles[1385]: ACLs are not supported, ignoring. Jan 13 20:33:20.046662 systemd-tmpfiles[1385]: ACLs are not supported, ignoring. Jan 13 20:33:20.050661 systemd-tmpfiles[1385]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:33:20.050675 systemd-tmpfiles[1385]: Skipping /boot Jan 13 20:33:20.067419 systemd-tmpfiles[1385]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:33:20.067584 systemd-tmpfiles[1385]: Skipping /boot Jan 13 20:33:20.074970 zram_generator::config[1412]: No configuration found. Jan 13 20:33:20.193374 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:33:20.257533 systemd[1]: Reloading finished in 230 ms. Jan 13 20:33:20.282040 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 13 20:33:20.295547 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:33:20.298398 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:33:20.301078 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:33:20.307069 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:33:20.312137 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:33:20.321341 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:33:20.321709 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:33:20.324011 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:33:20.330308 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:33:20.337272 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:33:20.340882 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:33:20.341334 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:33:20.342562 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:33:20.344668 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:33:20.352830 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:33:20.353345 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:33:20.357677 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:33:20.360459 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:33:20.360866 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:33:20.375637 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:33:20.375997 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:33:20.392449 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:33:20.400048 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:33:20.400402 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:33:20.407172 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:33:20.412733 augenrules[1495]: No rules Jan 13 20:33:20.441533 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:33:20.443633 systemd-resolved[1460]: Positive Trust Anchors: Jan 13 20:33:20.443648 systemd-resolved[1460]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:33:20.443690 systemd-resolved[1460]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:33:20.454331 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:33:20.455719 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:33:20.455928 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:33:20.458048 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:33:20.458472 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:33:20.458902 systemd-resolved[1460]: Defaulting to hostname 'linux'. Jan 13 20:33:20.468735 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:33:20.470816 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:33:20.475296 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:33:20.479888 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:33:20.483603 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:33:20.483873 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:33:20.485786 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:33:20.486096 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:33:20.491169 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:33:20.491474 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:33:20.507294 systemd[1]: Reached target network.target - Network. Jan 13 20:33:20.510545 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:33:20.515287 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:33:20.517083 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:33:20.531271 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:33:20.532734 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:33:20.538289 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:33:20.547162 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:33:20.555348 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:33:20.572399 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:33:20.573887 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 13 20:33:20.574156 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:33:20.574322 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:33:20.574445 augenrules[1518]: /sbin/augenrules: No change Jan 13 20:33:20.581527 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:33:20.581838 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:33:20.584233 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:33:20.584492 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:33:20.587420 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:33:20.587680 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:33:20.589809 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:33:20.590115 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:33:20.594002 augenrules[1543]: No rules Jan 13 20:33:20.595932 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:33:20.596559 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:33:20.598734 systemd[1]: Finished ensure-sysext.service. Jan 13 20:33:20.610359 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:33:20.610460 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:33:20.626297 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 20:33:20.723921 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 20:33:21.467482 systemd-resolved[1460]: Clock change detected. Flushing caches. Jan 13 20:33:21.467529 systemd-timesyncd[1557]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 13 20:33:21.467575 systemd-timesyncd[1557]: Initial clock synchronization to Mon 2025-01-13 20:33:21.467425 UTC. Jan 13 20:33:21.469021 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:33:21.470502 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:33:21.472087 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:33:21.473609 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:33:21.475330 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:33:21.475369 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:33:21.476534 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:33:21.478057 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:33:21.479528 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:33:21.481023 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:33:21.482923 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Jan 13 20:33:21.487036 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:33:21.490188 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:33:21.493521 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:33:21.494874 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:33:21.496026 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:33:21.497352 systemd[1]: System is tainted: cgroupsv1 Jan 13 20:33:21.497403 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:33:21.497428 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:33:21.499594 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:33:21.502577 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 20:33:21.505131 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:33:21.509897 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:33:21.521978 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:33:21.523332 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:33:21.528935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:33:21.534003 jq[1565]: false Jan 13 20:33:21.534699 dbus-daemon[1563]: [system] SELinux support is enabled Jan 13 20:33:21.539015 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:33:21.541731 extend-filesystems[1567]: Found loop3 Jan 13 20:33:21.541731 extend-filesystems[1567]: Found loop4 Jan 13 20:33:21.541731 extend-filesystems[1567]: Found loop5 Jan 13 20:33:21.541731 extend-filesystems[1567]: Found sr0 Jan 13 20:33:21.541731 extend-filesystems[1567]: Found vda Jan 13 20:33:21.541731 extend-filesystems[1567]: Found vda1 Jan 13 20:33:21.541731 extend-filesystems[1567]: Found vda2 Jan 13 20:33:21.541731 extend-filesystems[1567]: Found vda3 Jan 13 20:33:21.541731 extend-filesystems[1567]: Found usr Jan 13 20:33:21.541731 extend-filesystems[1567]: Found vda4 Jan 13 20:33:21.541731 extend-filesystems[1567]: Found vda6 Jan 13 20:33:21.541731 extend-filesystems[1567]: Found vda7 Jan 13 20:33:21.541731 extend-filesystems[1567]: Found vda9 Jan 13 20:33:21.541731 extend-filesystems[1567]: Checking size of /dev/vda9 Jan 13 20:33:21.553210 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:33:21.557939 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 20:33:21.562938 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:33:21.565152 extend-filesystems[1567]: Resized partition /dev/vda9 Jan 13 20:33:21.565944 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:33:21.571325 extend-filesystems[1590]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:33:21.579144 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 13 20:33:21.577724 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 13 20:33:21.586381 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1247) Jan 13 20:33:21.586087 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:33:21.590522 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:33:21.596679 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:33:21.601865 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:33:21.606254 jq[1600]: true Jan 13 20:33:21.612844 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 13 20:33:21.613486 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:33:21.614072 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:33:21.643701 update_engine[1599]: I20250113 20:33:21.614722 1599 main.cc:92] Flatcar Update Engine starting Jan 13 20:33:21.643701 update_engine[1599]: I20250113 20:33:21.622403 1599 update_check_scheduler.cc:74] Next update check in 4m58s Jan 13 20:33:21.617282 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:33:21.617700 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:33:21.620788 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:33:21.624773 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:33:21.644616 jq[1609]: true Jan 13 20:33:21.625489 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 20:33:21.656512 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 20:33:21.656980 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 20:33:21.661577 extend-filesystems[1590]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 20:33:21.661577 extend-filesystems[1590]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 20:33:21.661577 extend-filesystems[1590]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 13 20:33:21.668441 extend-filesystems[1567]: Resized filesystem in /dev/vda9 Jan 13 20:33:21.664381 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:33:21.665871 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:33:21.667250 (ntainerd)[1611]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:33:21.678089 systemd-logind[1591]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 20:33:21.678112 systemd-logind[1591]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 20:33:21.679475 systemd-logind[1591]: New seat seat0. Jan 13 20:33:21.692726 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:33:21.697757 tar[1608]: linux-amd64/helm Jan 13 20:33:21.706371 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:33:21.711180 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:33:21.711616 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jan 13 20:33:21.711747 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:33:21.713184 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:33:21.713291 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:33:21.716675 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:33:21.723092 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:33:21.735001 bash[1646]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:33:21.737911 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:33:21.740983 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 13 20:33:21.801252 locksmithd[1647]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:33:21.915782 containerd[1611]: time="2025-01-13T20:33:21.915686592Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:33:21.946346 containerd[1611]: time="2025-01-13T20:33:21.946208361Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:33:21.949593 containerd[1611]: time="2025-01-13T20:33:21.948662374Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:33:21.949593 containerd[1611]: time="2025-01-13T20:33:21.948690787Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:33:21.949593 containerd[1611]: time="2025-01-13T20:33:21.948705374Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:33:21.949593 containerd[1611]: time="2025-01-13T20:33:21.948914236Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:33:21.949593 containerd[1611]: time="2025-01-13T20:33:21.948929605Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:33:21.949593 containerd[1611]: time="2025-01-13T20:33:21.948991791Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:33:21.949593 containerd[1611]: time="2025-01-13T20:33:21.949002992Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:33:21.949593 containerd[1611]: time="2025-01-13T20:33:21.949241810Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:33:21.949593 containerd[1611]: time="2025-01-13T20:33:21.949255406Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 13 20:33:21.949593 containerd[1611]: time="2025-01-13T20:33:21.949268260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:33:21.949593 containerd[1611]: time="2025-01-13T20:33:21.949278078Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:33:21.949956 containerd[1611]: time="2025-01-13T20:33:21.949368057Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:33:21.949956 containerd[1611]: time="2025-01-13T20:33:21.949591706Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:33:21.949956 containerd[1611]: time="2025-01-13T20:33:21.949750194Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:33:21.949956 containerd[1611]: time="2025-01-13T20:33:21.949762046Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:33:21.949956 containerd[1611]: time="2025-01-13T20:33:21.949891288Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:33:21.949956 containerd[1611]: time="2025-01-13T20:33:21.949948826Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:33:21.956249 containerd[1611]: time="2025-01-13T20:33:21.956223824Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:33:21.956300 containerd[1611]: time="2025-01-13T20:33:21.956261845Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:33:21.956300 containerd[1611]: time="2025-01-13T20:33:21.956277024Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:33:21.956300 containerd[1611]: time="2025-01-13T20:33:21.956291341Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:33:21.956394 containerd[1611]: time="2025-01-13T20:33:21.956303714Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:33:21.956445 containerd[1611]: time="2025-01-13T20:33:21.956423989Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:33:21.958744 containerd[1611]: time="2025-01-13T20:33:21.958713022Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:33:21.958971 containerd[1611]: time="2025-01-13T20:33:21.958947702Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:33:21.959010 containerd[1611]: time="2025-01-13T20:33:21.958970204Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:33:21.959010 containerd[1611]: time="2025-01-13T20:33:21.958999700Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 13 20:33:21.959079 containerd[1611]: time="2025-01-13T20:33:21.959013686Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:33:21.959079 containerd[1611]: time="2025-01-13T20:33:21.959026109Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:33:21.959079 containerd[1611]: time="2025-01-13T20:33:21.959037160Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:33:21.959079 containerd[1611]: time="2025-01-13T20:33:21.959050074Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:33:21.959079 containerd[1611]: time="2025-01-13T20:33:21.959078708Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:33:21.959211 containerd[1611]: time="2025-01-13T20:33:21.959092724Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:33:21.959211 containerd[1611]: time="2025-01-13T20:33:21.959105358Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:33:21.959211 containerd[1611]: time="2025-01-13T20:33:21.959116689Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:33:21.959211 containerd[1611]: time="2025-01-13T20:33:21.959149511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:33:21.959211 containerd[1611]: time="2025-01-13T20:33:21.959162856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:33:21.959211 containerd[1611]: time="2025-01-13T20:33:21.959184336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:33:21.959211 containerd[1611]: time="2025-01-13T20:33:21.959198002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:33:21.959423 containerd[1611]: time="2025-01-13T20:33:21.959210705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:33:21.959423 containerd[1611]: time="2025-01-13T20:33:21.959238528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:33:21.959423 containerd[1611]: time="2025-01-13T20:33:21.959249558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:33:21.959423 containerd[1611]: time="2025-01-13T20:33:21.959261651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:33:21.959423 containerd[1611]: time="2025-01-13T20:33:21.959273423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:33:21.959423 containerd[1611]: time="2025-01-13T20:33:21.959310282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:33:21.959423 containerd[1611]: time="2025-01-13T20:33:21.959323036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 13 20:33:21.959423 containerd[1611]: time="2025-01-13T20:33:21.959334407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:33:21.959423 containerd[1611]: time="2025-01-13T20:33:21.959346660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:33:21.959423 containerd[1611]: time="2025-01-13T20:33:21.959360326Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:33:21.959423 containerd[1611]: time="2025-01-13T20:33:21.959394290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:33:21.959423 containerd[1611]: time="2025-01-13T20:33:21.959407344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:33:21.959423 containerd[1611]: time="2025-01-13T20:33:21.959418195Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:33:21.959781 containerd[1611]: time="2025-01-13T20:33:21.959476684Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:33:21.959781 containerd[1611]: time="2025-01-13T20:33:21.959494518Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:33:21.959781 containerd[1611]: time="2025-01-13T20:33:21.959504907Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:33:21.959781 containerd[1611]: time="2025-01-13T20:33:21.959516449Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:33:21.959781 containerd[1611]: time="2025-01-13T20:33:21.959540003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:33:21.959781 containerd[1611]: time="2025-01-13T20:33:21.959551945Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:33:21.959781 containerd[1611]: time="2025-01-13T20:33:21.959562335Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:33:21.959781 containerd[1611]: time="2025-01-13T20:33:21.959590468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 20:33:21.961188 containerd[1611]: time="2025-01-13T20:33:21.960005446Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:33:21.961188 containerd[1611]: time="2025-01-13T20:33:21.960074545Z" level=info msg="Connect containerd service" Jan 13 20:33:21.961188 containerd[1611]: time="2025-01-13T20:33:21.960119570Z" level=info msg="using legacy CRI server" Jan 13 20:33:21.961188 containerd[1611]: time="2025-01-13T20:33:21.960126813Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:33:21.961188 containerd[1611]: time="2025-01-13T20:33:21.960233644Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:33:21.961188 containerd[1611]: time="2025-01-13T20:33:21.961021391Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 
20:33:21.961188 containerd[1611]: time="2025-01-13T20:33:21.961148369Z" level=info msg="Start subscribing containerd event" Jan 13 20:33:21.961188 containerd[1611]: time="2025-01-13T20:33:21.961184918Z" level=info msg="Start recovering state" Jan 13 20:33:21.961853 containerd[1611]: time="2025-01-13T20:33:21.961256963Z" level=info msg="Start event monitor" Jan 13 20:33:21.961853 containerd[1611]: time="2025-01-13T20:33:21.961292229Z" level=info msg="Start snapshots syncer" Jan 13 20:33:21.961853 containerd[1611]: time="2025-01-13T20:33:21.961300795Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:33:21.961853 containerd[1611]: time="2025-01-13T20:33:21.961308520Z" level=info msg="Start streaming server" Jan 13 20:33:21.961853 containerd[1611]: time="2025-01-13T20:33:21.961835217Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:33:21.961998 containerd[1611]: time="2025-01-13T20:33:21.961910639Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:33:21.963391 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:33:21.965139 containerd[1611]: time="2025-01-13T20:33:21.964751386Z" level=info msg="containerd successfully booted in 0.050377s" Jan 13 20:33:22.157919 tar[1608]: linux-amd64/LICENSE Jan 13 20:33:22.158010 tar[1608]: linux-amd64/README.md Jan 13 20:33:22.172396 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:33:22.329970 sshd_keygen[1602]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:33:22.331710 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:33:22.336376 (kubelet)[1678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:33:22.353393 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:33:22.366058 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:33:22.375633 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:33:22.376031 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:33:22.389208 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:33:22.405485 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:33:22.414279 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:33:22.416991 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 20:33:22.418399 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:33:22.419551 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:33:22.420756 systemd[1]: Startup finished in 6.837s (kernel) + 4.854s (userspace) = 11.691s. Jan 13 20:33:22.824848 kubelet[1678]: E0113 20:33:22.824651 1678 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:33:22.830300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:33:22.830587 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:33:30.348125 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 13 20:33:30.360068 systemd[1]: Started sshd@0-10.0.0.43:22-10.0.0.1:46836.service - OpenSSH per-connection server daemon (10.0.0.1:46836). Jan 13 20:33:30.418032 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 46836 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:33:30.420645 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:33:30.430640 systemd-logind[1591]: New session 1 of user core. Jan 13 20:33:30.432172 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:33:30.445032 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:33:30.458742 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:33:30.476220 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:33:30.479267 (systemd)[1717]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:33:30.587891 systemd[1717]: Queued start job for default target default.target. Jan 13 20:33:30.588310 systemd[1717]: Created slice app.slice - User Application Slice. Jan 13 20:33:30.588327 systemd[1717]: Reached target paths.target - Paths. Jan 13 20:33:30.588340 systemd[1717]: Reached target timers.target - Timers. Jan 13 20:33:30.599877 systemd[1717]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:33:30.606880 systemd[1717]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:33:30.606947 systemd[1717]: Reached target sockets.target - Sockets. Jan 13 20:33:30.606961 systemd[1717]: Reached target basic.target - Basic System. Jan 13 20:33:30.606995 systemd[1717]: Reached target default.target - Main User Target. Jan 13 20:33:30.607025 systemd[1717]: Startup finished in 121ms. Jan 13 20:33:30.607710 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:33:30.609274 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:33:30.666028 systemd[1]: Started sshd@1-10.0.0.43:22-10.0.0.1:46846.service - OpenSSH per-connection server daemon (10.0.0.1:46846). Jan 13 20:33:30.708222 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 46846 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:33:30.709666 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:33:30.713459 systemd-logind[1591]: New session 2 of user core. Jan 13 20:33:30.725046 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:33:30.778401 sshd[1732]: Connection closed by 10.0.0.1 port 46846 Jan 13 20:33:30.778840 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Jan 13 20:33:30.791043 systemd[1]: Started sshd@2-10.0.0.43:22-10.0.0.1:46854.service - OpenSSH per-connection server daemon (10.0.0.1:46854). Jan 13 20:33:30.791848 systemd[1]: sshd@1-10.0.0.43:22-10.0.0.1:46846.service: Deactivated successfully. Jan 13 20:33:30.793735 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:33:30.795612 systemd-logind[1591]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:33:30.796597 systemd-logind[1591]: Removed session 2. 
Jan 13 20:33:30.825977 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 46854 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:33:30.827411 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:33:30.831481 systemd-logind[1591]: New session 3 of user core. Jan 13 20:33:30.841097 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:33:30.891569 sshd[1740]: Connection closed by 10.0.0.1 port 46854 Jan 13 20:33:30.892013 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Jan 13 20:33:30.901008 systemd[1]: Started sshd@3-10.0.0.43:22-10.0.0.1:46858.service - OpenSSH per-connection server daemon (10.0.0.1:46858). Jan 13 20:33:30.901451 systemd[1]: sshd@2-10.0.0.43:22-10.0.0.1:46854.service: Deactivated successfully. Jan 13 20:33:30.904072 systemd-logind[1591]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:33:30.905031 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:33:30.906059 systemd-logind[1591]: Removed session 3. Jan 13 20:33:30.932229 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 46858 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:33:30.933834 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:33:30.938240 systemd-logind[1591]: New session 4 of user core. Jan 13 20:33:30.949202 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:33:31.004386 sshd[1748]: Connection closed by 10.0.0.1 port 46858 Jan 13 20:33:31.004840 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Jan 13 20:33:31.022173 systemd[1]: Started sshd@4-10.0.0.43:22-10.0.0.1:46874.service - OpenSSH per-connection server daemon (10.0.0.1:46874). Jan 13 20:33:31.022745 systemd[1]: sshd@3-10.0.0.43:22-10.0.0.1:46858.service: Deactivated successfully. Jan 13 20:33:31.024714 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:33:31.025526 systemd-logind[1591]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:33:31.026969 systemd-logind[1591]: Removed session 4. Jan 13 20:33:31.053727 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 46874 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:33:31.055204 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:33:31.059486 systemd-logind[1591]: New session 5 of user core. Jan 13 20:33:31.069172 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:33:31.130387 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:33:31.130871 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:33:31.160708 sudo[1757]: pam_unix(sudo:session): session closed for user root Jan 13 20:33:31.162461 sshd[1756]: Connection closed by 10.0.0.1 port 46874 Jan 13 20:33:31.162902 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Jan 13 20:33:31.177225 systemd[1]: Started sshd@5-10.0.0.43:22-10.0.0.1:46890.service - OpenSSH per-connection server daemon (10.0.0.1:46890). Jan 13 20:33:31.178419 systemd[1]: sshd@4-10.0.0.43:22-10.0.0.1:46874.service: Deactivated successfully. Jan 13 20:33:31.181468 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:33:31.183889 systemd-logind[1591]: Session 5 logged out. Waiting for processes to exit. 
Jan 13 20:33:31.185588 systemd-logind[1591]: Removed session 5. Jan 13 20:33:31.212466 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 46890 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:33:31.213925 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:33:31.218287 systemd-logind[1591]: New session 6 of user core. Jan 13 20:33:31.228042 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:33:31.281827 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:33:31.282228 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:33:31.286304 sudo[1767]: pam_unix(sudo:session): session closed for user root Jan 13 20:33:31.293335 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:33:31.293710 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:33:31.313244 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:33:31.346589 augenrules[1789]: No rules Jan 13 20:33:31.348540 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:33:31.348902 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:33:31.350338 sudo[1766]: pam_unix(sudo:session): session closed for user root Jan 13 20:33:31.353464 sshd[1765]: Connection closed by 10.0.0.1 port 46890 Jan 13 20:33:31.352321 sshd-session[1759]: pam_unix(sshd:session): session closed for user core Jan 13 20:33:31.361185 systemd[1]: Started sshd@6-10.0.0.43:22-10.0.0.1:46904.service - OpenSSH per-connection server daemon (10.0.0.1:46904). Jan 13 20:33:31.361728 systemd[1]: sshd@5-10.0.0.43:22-10.0.0.1:46890.service: Deactivated successfully. Jan 13 20:33:31.364413 systemd-logind[1591]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:33:31.365395 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:33:31.366479 systemd-logind[1591]: Removed session 6. Jan 13 20:33:31.393110 sshd[1795]: Accepted publickey for core from 10.0.0.1 port 46904 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:33:31.394756 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:33:31.399418 systemd-logind[1591]: New session 7 of user core. Jan 13 20:33:31.417161 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:33:31.472666 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:33:31.473083 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:33:32.176182 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:33:32.176498 (dockerd)[1822]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:33:32.693101 dockerd[1822]: time="2025-01-13T20:33:32.693037302Z" level=info msg="Starting up" Jan 13 20:33:32.976608 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:33:32.988970 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:33:33.462429 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:33:33.467240 (kubelet)[1858]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:33:33.793416 kubelet[1858]: E0113 20:33:33.793276 1858 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:33:33.801394 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:33:33.801723 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:33:33.947658 dockerd[1822]: time="2025-01-13T20:33:33.947607729Z" level=info msg="Loading containers: start." Jan 13 20:33:34.129840 kernel: Initializing XFRM netlink socket Jan 13 20:33:34.222914 systemd-networkd[1246]: docker0: Link UP Jan 13 20:33:34.261329 dockerd[1822]: time="2025-01-13T20:33:34.261284074Z" level=info msg="Loading containers: done." Jan 13 20:33:34.281953 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1215552142-merged.mount: Deactivated successfully. Jan 13 20:33:34.283778 dockerd[1822]: time="2025-01-13T20:33:34.283726021Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:33:34.283912 dockerd[1822]: time="2025-01-13T20:33:34.283861695Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 13 20:33:34.284016 dockerd[1822]: time="2025-01-13T20:33:34.283991268Z" level=info msg="Daemon has completed initialization" Jan 13 20:33:34.335123 dockerd[1822]: time="2025-01-13T20:33:34.335056233Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:33:34.335298 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:33:35.163540 containerd[1611]: time="2025-01-13T20:33:35.163486446Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 20:33:35.776179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount799532124.mount: Deactivated successfully. 
Jan 13 20:33:37.112781 containerd[1611]: time="2025-01-13T20:33:37.112710166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:33:37.113515 containerd[1611]: time="2025-01-13T20:33:37.113453650Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254" Jan 13 20:33:37.114619 containerd[1611]: time="2025-01-13T20:33:37.114539296Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:33:37.117738 containerd[1611]: time="2025-01-13T20:33:37.117693291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:33:37.118853 containerd[1611]: time="2025-01-13T20:33:37.118816538Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 1.955265571s" Jan 13 20:33:37.118905 containerd[1611]: time="2025-01-13T20:33:37.118857675Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Jan 13 20:33:37.147492 containerd[1611]: time="2025-01-13T20:33:37.147445908Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 20:33:39.463546 containerd[1611]: time="2025-01-13T20:33:39.463477674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:33:39.464246 containerd[1611]: time="2025-01-13T20:33:39.464199628Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732" Jan 13 20:33:39.465434 containerd[1611]: time="2025-01-13T20:33:39.465397525Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:33:39.468709 containerd[1611]: time="2025-01-13T20:33:39.468648010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:33:39.469730 containerd[1611]: time="2025-01-13T20:33:39.469692048Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.32220285s" Jan 13 20:33:39.469813 containerd[1611]: time="2025-01-13T20:33:39.469729689Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Jan 13 
20:33:39.493824 containerd[1611]: time="2025-01-13T20:33:39.493739135Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 20:33:41.334173 containerd[1611]: time="2025-01-13T20:33:41.334114296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:33:41.349852 containerd[1611]: time="2025-01-13T20:33:41.349782940Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822" Jan 13 20:33:41.352025 containerd[1611]: time="2025-01-13T20:33:41.351989979Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:33:41.354404 containerd[1611]: time="2025-01-13T20:33:41.354375432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:33:41.355450 containerd[1611]: time="2025-01-13T20:33:41.355415343Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.861632826s" Jan 13 20:33:41.355502 containerd[1611]: time="2025-01-13T20:33:41.355453484Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Jan 13 20:33:41.376820 containerd[1611]: time="2025-01-13T20:33:41.376768087Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 20:33:42.555309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount312815110.mount: Deactivated successfully. 
Jan 13 20:33:43.193928 containerd[1611]: time="2025-01-13T20:33:43.193867697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:33:43.194685 containerd[1611]: time="2025-01-13T20:33:43.194647720Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Jan 13 20:33:43.197141 containerd[1611]: time="2025-01-13T20:33:43.196579002Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:33:43.198973 containerd[1611]: time="2025-01-13T20:33:43.198935431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:33:43.199514 containerd[1611]: time="2025-01-13T20:33:43.199482497Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.822664767s" Jan 13 20:33:43.199514 containerd[1611]: time="2025-01-13T20:33:43.199509027Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Jan 13 20:33:43.220353 containerd[1611]: time="2025-01-13T20:33:43.220315556Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:33:43.976648 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:33:43.994941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:33:44.276207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:33:44.347199 (kubelet)[2156]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:33:45.055438 kubelet[2156]: E0113 20:33:45.055381 2156 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:33:45.059989 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:33:45.060316 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:33:45.929447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4135799510.mount: Deactivated successfully. 
Jan 13 20:33:50.927312 containerd[1611]: time="2025-01-13T20:33:50.927243036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:33:50.933819 containerd[1611]: time="2025-01-13T20:33:50.933731514Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 13 20:33:50.940339 containerd[1611]: time="2025-01-13T20:33:50.940299822Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:33:50.953607 containerd[1611]: time="2025-01-13T20:33:50.953547827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:33:50.954721 containerd[1611]: time="2025-01-13T20:33:50.954676694Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 7.73432517s" Jan 13 20:33:50.954757 containerd[1611]: time="2025-01-13T20:33:50.954719224Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 20:33:50.975461 containerd[1611]: time="2025-01-13T20:33:50.975420897Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 20:33:51.588973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3299857440.mount: Deactivated successfully. 
Jan 13 20:33:51.596692 containerd[1611]: time="2025-01-13T20:33:51.596644693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:33:51.597365 containerd[1611]: time="2025-01-13T20:33:51.597314359Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 13 20:33:51.598447 containerd[1611]: time="2025-01-13T20:33:51.598420824Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:33:51.600460 containerd[1611]: time="2025-01-13T20:33:51.600436604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:33:51.601265 containerd[1611]: time="2025-01-13T20:33:51.601224242Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 625.76366ms" Jan 13 20:33:51.601304 containerd[1611]: time="2025-01-13T20:33:51.601264507Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 20:33:51.621816 containerd[1611]: time="2025-01-13T20:33:51.621757889Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 20:33:52.137564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3253034186.mount: Deactivated successfully. Jan 13 20:33:54.155746 containerd[1611]: time="2025-01-13T20:33:54.155631128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:33:54.157757 containerd[1611]: time="2025-01-13T20:33:54.157687687Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jan 13 20:33:54.159452 containerd[1611]: time="2025-01-13T20:33:54.159411804Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:33:54.162544 containerd[1611]: time="2025-01-13T20:33:54.162494033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:33:54.163687 containerd[1611]: time="2025-01-13T20:33:54.163653701Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.541859744s" Jan 13 20:33:54.163725 containerd[1611]: time="2025-01-13T20:33:54.163688258Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 13 20:33:55.226835 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jan 13 20:33:55.245102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:33:55.379992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:33:55.383719 (kubelet)[2359]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:33:55.420990 kubelet[2359]: E0113 20:33:55.420921 2359 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:33:55.426224 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:33:55.426499 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:33:56.384321 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:33:56.398133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:33:56.416256 systemd[1]: Reloading requested from client PID 2377 ('systemctl') (unit session-7.scope)... Jan 13 20:33:56.416271 systemd[1]: Reloading... Jan 13 20:33:56.506851 zram_generator::config[2428]: No configuration found. Jan 13 20:33:57.892724 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:33:57.967314 systemd[1]: Reloading finished in 1550 ms. Jan 13 20:33:58.024285 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:33:58.024445 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:33:58.024896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:33:58.027220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:33:58.168395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:33:58.173355 (kubelet)[2477]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:33:58.218226 kubelet[2477]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:33:58.218226 kubelet[2477]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:33:58.218226 kubelet[2477]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 20:33:58.218697 kubelet[2477]: I0113 20:33:58.218271 2477 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:33:58.519818 kubelet[2477]: I0113 20:33:58.519761 2477 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:33:58.519818 kubelet[2477]: I0113 20:33:58.519818 2477 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:33:58.520089 kubelet[2477]: I0113 20:33:58.520060 2477 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:33:58.536314 kubelet[2477]: E0113 20:33:58.536271 2477 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.43:6443: connect: connection refused Jan 13 20:33:58.536951 kubelet[2477]: I0113 20:33:58.536926 2477 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:33:58.546962 kubelet[2477]: I0113 20:33:58.546926 2477 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:33:58.547440 kubelet[2477]: I0113 20:33:58.547402 2477 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:33:58.547648 kubelet[2477]: I0113 20:33:58.547611 2477 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:33:58.547768 kubelet[2477]: I0113 20:33:58.547648 2477 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:33:58.547768 kubelet[2477]: I0113 20:33:58.547665 2477 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:33:58.547865 kubelet[2477]: I0113 20:33:58.547818 2477 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:33:58.547957 kubelet[2477]: I0113 20:33:58.547930 2477 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:33:58.547957 kubelet[2477]: 
I0113 20:33:58.547951 2477 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:33:58.548012 kubelet[2477]: I0113 20:33:58.547984 2477 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:33:58.548012 kubelet[2477]: I0113 20:33:58.548005 2477 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:33:58.550921 kubelet[2477]: W0113 20:33:58.549020 2477 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 13 20:33:58.550921 kubelet[2477]: E0113 20:33:58.549086 2477 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 13 20:33:58.550921 kubelet[2477]: W0113 20:33:58.549160 2477 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 13 20:33:58.550921 kubelet[2477]: E0113 20:33:58.549198 2477 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 13 20:33:58.551838 kubelet[2477]: I0113 20:33:58.551724 2477 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:33:58.555182 kubelet[2477]: I0113 20:33:58.555120 2477 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:33:58.556226 kubelet[2477]: W0113 20:33:58.556190 2477 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 13 20:33:58.557106 kubelet[2477]: I0113 20:33:58.557064 2477 server.go:1256] "Started kubelet" Jan 13 20:33:58.557341 kubelet[2477]: I0113 20:33:58.557303 2477 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:33:58.557669 kubelet[2477]: I0113 20:33:58.557642 2477 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:33:58.557716 kubelet[2477]: I0113 20:33:58.557695 2477 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:33:58.559384 kubelet[2477]: I0113 20:33:58.558475 2477 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:33:58.559384 kubelet[2477]: I0113 20:33:58.558513 2477 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:33:58.560161 kubelet[2477]: E0113 20:33:58.559904 2477 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:33:58.560161 kubelet[2477]: I0113 20:33:58.559945 2477 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:33:58.560161 kubelet[2477]: I0113 20:33:58.560017 2477 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:33:58.560248 kubelet[2477]: I0113 20:33:58.560060 2477 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:33:58.560594 kubelet[2477]: W0113 20:33:58.560534 2477 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 13 20:33:58.560594 kubelet[2477]: E0113 20:33:58.560580 2477 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 13 20:33:58.562847 kubelet[2477]: E0113 20:33:58.562434 2477 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.43:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.43:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5acc01ef27c5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 20:33:58.557026245 +0000 UTC m=+0.379555556,LastTimestamp:2025-01-13 20:33:58.557026245 +0000 UTC m=+0.379555556,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 20:33:58.563185 kubelet[2477]: E0113 20:33:58.563165 2477 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="200ms" Jan 13 20:33:58.563516 kubelet[2477]: I0113 20:33:58.563462 2477 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:33:58.563719 kubelet[2477]: E0113 20:33:58.563701 2477 
kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:33:58.565868 kubelet[2477]: I0113 20:33:58.565782 2477 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:33:58.565868 kubelet[2477]: I0113 20:33:58.565809 2477 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:33:58.580345 kubelet[2477]: I0113 20:33:58.580142 2477 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:33:58.581888 kubelet[2477]: I0113 20:33:58.581839 2477 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:33:58.581888 kubelet[2477]: I0113 20:33:58.581877 2477 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:33:58.581888 kubelet[2477]: I0113 20:33:58.581898 2477 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:33:58.582109 kubelet[2477]: E0113 20:33:58.581951 2477 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:33:58.584750 kubelet[2477]: W0113 20:33:58.584696 2477 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 13 20:33:58.584750 kubelet[2477]: E0113 20:33:58.584748 2477 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 13 20:33:58.589185 kubelet[2477]: I0113 20:33:58.589147 2477 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:33:58.589185 kubelet[2477]: I0113 20:33:58.589180 2477 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:33:58.589326 kubelet[2477]: I0113 20:33:58.589205 2477 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:33:58.661743 kubelet[2477]: I0113 20:33:58.661697 2477 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:33:58.662212 kubelet[2477]: E0113 20:33:58.662173 2477 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Jan 13 20:33:58.682481 kubelet[2477]: E0113 20:33:58.682426 2477 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:33:58.764448 kubelet[2477]: E0113 20:33:58.764389 2477 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="400ms" Jan 13 20:33:58.864419 kubelet[2477]: I0113 20:33:58.864300 2477 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:33:58.864736 kubelet[2477]: E0113 20:33:58.864611 2477 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Jan 13 20:33:58.882996 kubelet[2477]: E0113 
20:33:58.882888 2477 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:33:59.165362 kubelet[2477]: E0113 20:33:59.165215 2477 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="800ms" Jan 13 20:33:59.266867 kubelet[2477]: I0113 20:33:59.266822 2477 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:33:59.267364 kubelet[2477]: E0113 20:33:59.267249 2477 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Jan 13 20:33:59.283537 kubelet[2477]: E0113 20:33:59.283462 2477 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:33:59.434430 kubelet[2477]: I0113 20:33:59.434224 2477 policy_none.go:49] "None policy: Start" Jan 13 20:33:59.435208 kubelet[2477]: I0113 20:33:59.435168 2477 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:33:59.435208 kubelet[2477]: I0113 20:33:59.435207 2477 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:33:59.703812 kubelet[2477]: W0113 20:33:59.703634 2477 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 13 20:33:59.703812 kubelet[2477]: E0113 20:33:59.703698 2477 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 13 20:33:59.840955 kubelet[2477]: W0113 20:33:59.840899 2477 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 13 20:33:59.840955 kubelet[2477]: E0113 20:33:59.840955 2477 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 13 20:33:59.936722 kubelet[2477]: W0113 20:33:59.936644 2477 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 13 20:33:59.936863 kubelet[2477]: E0113 20:33:59.936731 2477 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 13 20:33:59.966263 kubelet[2477]: E0113 20:33:59.966188 2477 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": 
dial tcp 10.0.0.43:6443: connect: connection refused" interval="1.6s" Jan 13 20:33:59.980554 kubelet[2477]: W0113 20:33:59.980508 2477 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 13 20:33:59.980605 kubelet[2477]: E0113 20:33:59.980559 2477 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 13 20:34:00.069149 kubelet[2477]: I0113 20:34:00.069119 2477 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:34:00.069432 kubelet[2477]: E0113 20:34:00.069405 2477 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Jan 13 20:34:00.084659 kubelet[2477]: E0113 20:34:00.084601 2477 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:34:00.343912 kubelet[2477]: I0113 20:34:00.343871 2477 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:34:00.344352 kubelet[2477]: I0113 20:34:00.344160 2477 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:34:00.345819 kubelet[2477]: E0113 20:34:00.345778 2477 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 20:34:00.711373 kubelet[2477]: E0113 20:34:00.711247 2477 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.43:6443: connect: connection refused Jan 13 20:34:01.566889 kubelet[2477]: E0113 20:34:01.566844 2477 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="3.2s" Jan 13 20:34:01.671770 kubelet[2477]: I0113 20:34:01.671714 2477 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:34:01.672151 kubelet[2477]: E0113 20:34:01.672096 2477 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Jan 13 20:34:01.685561 kubelet[2477]: I0113 20:34:01.685504 2477 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 20:34:01.687216 kubelet[2477]: I0113 20:34:01.686685 2477 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 20:34:01.687754 kubelet[2477]: I0113 20:34:01.687716 2477 topology_manager.go:215] "Topology Admit Handler" podUID="d45589b8c3d4af34f1f4c850035d7735" podNamespace="kube-system" 
podName="kube-apiserver-localhost" Jan 13 20:34:01.777396 kubelet[2477]: I0113 20:34:01.777343 2477 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:34:01.777396 kubelet[2477]: I0113 20:34:01.777394 2477 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:34:01.777396 kubelet[2477]: I0113 20:34:01.777416 2477 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:34:01.777588 kubelet[2477]: I0113 20:34:01.777438 2477 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:34:01.777588 kubelet[2477]: I0113 20:34:01.777506 2477 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d45589b8c3d4af34f1f4c850035d7735-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d45589b8c3d4af34f1f4c850035d7735\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:34:01.777588 kubelet[2477]: I0113 20:34:01.777563 2477 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d45589b8c3d4af34f1f4c850035d7735-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d45589b8c3d4af34f1f4c850035d7735\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:34:01.777654 kubelet[2477]: I0113 20:34:01.777598 2477 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d45589b8c3d4af34f1f4c850035d7735-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d45589b8c3d4af34f1f4c850035d7735\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:34:01.777654 kubelet[2477]: I0113 20:34:01.777616 2477 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:34:01.777654 kubelet[2477]: I0113 20:34:01.777635 2477 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:34:01.811913 kubelet[2477]: W0113 20:34:01.811869 2477 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 13 20:34:01.811913 kubelet[2477]: E0113 20:34:01.811915 2477 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 13 20:34:01.993145 kubelet[2477]: E0113 20:34:01.993070 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:01.993891 containerd[1611]: time="2025-01-13T20:34:01.993837388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Jan 13 20:34:01.994376 containerd[1611]: time="2025-01-13T20:34:01.994275836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Jan 13 20:34:01.994410 kubelet[2477]: E0113 20:34:01.993925 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:01.996724 kubelet[2477]: E0113 20:34:01.996692 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:01.997013 containerd[1611]: time="2025-01-13T20:34:01.996977005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d45589b8c3d4af34f1f4c850035d7735,Namespace:kube-system,Attempt:0,}" Jan 13 20:34:02.340678 kubelet[2477]: W0113 20:34:02.340526 2477 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 13 20:34:02.340678 kubelet[2477]: E0113 20:34:02.340567 2477 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 13 20:34:02.485464 kubelet[2477]: W0113 20:34:02.485379 2477 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 13 20:34:02.485464 kubelet[2477]: E0113 20:34:02.485467 2477 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 13 20:34:02.654580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2975484061.mount: 
Deactivated successfully. Jan 13 20:34:02.663351 containerd[1611]: time="2025-01-13T20:34:02.663280321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:34:02.667717 containerd[1611]: time="2025-01-13T20:34:02.667642283Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 20:34:02.668671 containerd[1611]: time="2025-01-13T20:34:02.668635577Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:34:02.669770 containerd[1611]: time="2025-01-13T20:34:02.669744012Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:34:02.670792 containerd[1611]: time="2025-01-13T20:34:02.670752747Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:34:02.671259 containerd[1611]: time="2025-01-13T20:34:02.671208005Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:34:02.672691 containerd[1611]: time="2025-01-13T20:34:02.672659536Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:34:02.674850 containerd[1611]: time="2025-01-13T20:34:02.674790561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:34:02.676701 containerd[1611]: time="2025-01-13T20:34:02.676666730Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 682.711086ms" Jan 13 20:34:02.677414 containerd[1611]: time="2025-01-13T20:34:02.677381413Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 680.344063ms" Jan 13 20:34:02.678166 containerd[1611]: time="2025-01-13T20:34:02.678108110Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 683.772862ms" Jan 13 20:34:02.819047 containerd[1611]: time="2025-01-13T20:34:02.818670846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:34:02.819047 containerd[1611]: time="2025-01-13T20:34:02.818740198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:34:02.819047 containerd[1611]: time="2025-01-13T20:34:02.818760025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:02.819047 containerd[1611]: time="2025-01-13T20:34:02.818927676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:02.819047 containerd[1611]: time="2025-01-13T20:34:02.818972701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:34:02.819337 containerd[1611]: time="2025-01-13T20:34:02.819190397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:34:02.823847 containerd[1611]: time="2025-01-13T20:34:02.819366823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:02.823847 containerd[1611]: time="2025-01-13T20:34:02.821028363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:02.823847 containerd[1611]: time="2025-01-13T20:34:02.817630120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:34:02.823847 containerd[1611]: time="2025-01-13T20:34:02.820033846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:34:02.823847 containerd[1611]: time="2025-01-13T20:34:02.820047813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:02.823847 containerd[1611]: time="2025-01-13T20:34:02.820135760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:02.888024 containerd[1611]: time="2025-01-13T20:34:02.887981551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"215c5506d2f57867abf4f5da7c1465083953509d58fceb6549b972023f015fd7\"" Jan 13 20:34:02.891694 kubelet[2477]: E0113 20:34:02.891663 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:02.895094 containerd[1611]: time="2025-01-13T20:34:02.895066218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d45589b8c3d4af34f1f4c850035d7735,Namespace:kube-system,Attempt:0,} returns sandbox id \"77c6f229680841983fa09f7b24a28c04ea2a66f2bb77d8ff71d2e811addcd519\"" Jan 13 20:34:02.895424 containerd[1611]: time="2025-01-13T20:34:02.895378564Z" level=info msg="CreateContainer within sandbox \"215c5506d2f57867abf4f5da7c1465083953509d58fceb6549b972023f015fd7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:34:02.896741 containerd[1611]: time="2025-01-13T20:34:02.896725013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6c0e83a557bb81ea95408f533249ed769a5c7a9384ba2865e75ef34e576e85f\"" Jan 13 20:34:02.897728 kubelet[2477]: E0113 20:34:02.897708 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:02.898232 kubelet[2477]: E0113 20:34:02.898217 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:02.899580 containerd[1611]: time="2025-01-13T20:34:02.899554280Z" level=info msg="CreateContainer within sandbox \"77c6f229680841983fa09f7b24a28c04ea2a66f2bb77d8ff71d2e811addcd519\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:34:02.900540 containerd[1611]: time="2025-01-13T20:34:02.900520805Z" level=info msg="CreateContainer within sandbox \"a6c0e83a557bb81ea95408f533249ed769a5c7a9384ba2865e75ef34e576e85f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:34:02.925238 containerd[1611]: time="2025-01-13T20:34:02.925064671Z" level=info msg="CreateContainer within sandbox \"215c5506d2f57867abf4f5da7c1465083953509d58fceb6549b972023f015fd7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"288544f48d9faf6bf74d208f5e7cd0c45f651504822a9d947b027f15453c6487\"" Jan 13 20:34:02.926769 containerd[1611]: time="2025-01-13T20:34:02.926741190Z" level=info msg="StartContainer for \"288544f48d9faf6bf74d208f5e7cd0c45f651504822a9d947b027f15453c6487\"" Jan 13 20:34:02.934099 containerd[1611]: time="2025-01-13T20:34:02.934062327Z" level=info msg="CreateContainer within sandbox \"a6c0e83a557bb81ea95408f533249ed769a5c7a9384ba2865e75ef34e576e85f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"74182f85d78ca88511083d1276bfb695c8507d0716e46f561e9ddfa407504e47\"" Jan 13 20:34:02.934939 containerd[1611]: time="2025-01-13T20:34:02.934915054Z" level=info msg="StartContainer for 
\"74182f85d78ca88511083d1276bfb695c8507d0716e46f561e9ddfa407504e47\"" Jan 13 20:34:02.938056 containerd[1611]: time="2025-01-13T20:34:02.938022913Z" level=info msg="CreateContainer within sandbox \"77c6f229680841983fa09f7b24a28c04ea2a66f2bb77d8ff71d2e811addcd519\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7cd848ae10b17afc9e2ff8ba661cbf25c6a62f3b628d0f92adacca2fe90a0383\"" Jan 13 20:34:02.938529 containerd[1611]: time="2025-01-13T20:34:02.938489674Z" level=info msg="StartContainer for \"7cd848ae10b17afc9e2ff8ba661cbf25c6a62f3b628d0f92adacca2fe90a0383\"" Jan 13 20:34:02.946475 kubelet[2477]: W0113 20:34:02.946436 2477 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 13 20:34:02.946475 kubelet[2477]: E0113 20:34:02.946482 2477 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Jan 13 20:34:03.036243 containerd[1611]: time="2025-01-13T20:34:03.036182679Z" level=info msg="StartContainer for \"288544f48d9faf6bf74d208f5e7cd0c45f651504822a9d947b027f15453c6487\" returns successfully" Jan 13 20:34:03.053255 containerd[1611]: time="2025-01-13T20:34:03.051231088Z" level=info msg="StartContainer for \"74182f85d78ca88511083d1276bfb695c8507d0716e46f561e9ddfa407504e47\" returns successfully" Jan 13 20:34:03.053255 containerd[1611]: time="2025-01-13T20:34:03.051231068Z" level=info msg="StartContainer for \"7cd848ae10b17afc9e2ff8ba661cbf25c6a62f3b628d0f92adacca2fe90a0383\" returns successfully" Jan 13 20:34:03.592468 kubelet[2477]: E0113 20:34:03.592439 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:03.595377 kubelet[2477]: E0113 20:34:03.595359 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:03.596939 kubelet[2477]: E0113 20:34:03.596922 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:04.410906 kubelet[2477]: E0113 20:34:04.410856 2477 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 13 20:34:04.598782 kubelet[2477]: E0113 20:34:04.598749 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:04.598965 kubelet[2477]: E0113 20:34:04.598846 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:04.599376 kubelet[2477]: E0113 20:34:04.599356 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:04.763740 kubelet[2477]: E0113 
20:34:04.763690 2477 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 13 20:34:04.771321 kubelet[2477]: E0113 20:34:04.771260 2477 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 13 20:34:04.873870 kubelet[2477]: I0113 20:34:04.873840 2477 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:34:04.880719 kubelet[2477]: I0113 20:34:04.880675 2477 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 20:34:04.887243 kubelet[2477]: E0113 20:34:04.887207 2477 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:34:04.987978 kubelet[2477]: E0113 20:34:04.987892 2477 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:34:05.089094 kubelet[2477]: E0113 20:34:05.088792 2477 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:34:05.190108 kubelet[2477]: E0113 20:34:05.189675 2477 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:34:05.290438 kubelet[2477]: E0113 20:34:05.290396 2477 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:34:05.391298 kubelet[2477]: E0113 20:34:05.391082 2477 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:34:05.492307 kubelet[2477]: E0113 20:34:05.491893 2477 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:34:05.592398 kubelet[2477]: E0113 20:34:05.592309 2477 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:34:05.600753 kubelet[2477]: E0113 20:34:05.600724 2477 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:05.692966 kubelet[2477]: E0113 20:34:05.692771 2477 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:34:05.793526 kubelet[2477]: E0113 20:34:05.793467 2477 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:34:05.894367 kubelet[2477]: E0113 20:34:05.894332 2477 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:34:05.995109 kubelet[2477]: E0113 20:34:05.995043 2477 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:34:06.096206 kubelet[2477]: E0113 20:34:06.096136 2477 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:34:06.196894 kubelet[2477]: E0113 20:34:06.196826 2477 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:34:06.297615 kubelet[2477]: E0113 20:34:06.297470 2477 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:34:06.398355 kubelet[2477]: E0113 20:34:06.398317 
2477 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:34:06.555766 kubelet[2477]: I0113 20:34:06.555591 2477 apiserver.go:52] "Watching apiserver" Jan 13 20:34:06.560614 kubelet[2477]: I0113 20:34:06.560558 2477 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:34:06.755924 update_engine[1599]: I20250113 20:34:06.755841 1599 update_attempter.cc:509] Updating boot flags... Jan 13 20:34:06.788826 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2766) Jan 13 20:34:06.825917 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2770) Jan 13 20:34:07.776283 systemd[1]: Reloading requested from client PID 2774 ('systemctl') (unit session-7.scope)... Jan 13 20:34:07.776303 systemd[1]: Reloading... Jan 13 20:34:07.854822 zram_generator::config[2816]: No configuration found. Jan 13 20:34:07.966756 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:34:08.044079 systemd[1]: Reloading finished in 267 ms. Jan 13 20:34:08.078753 kubelet[2477]: I0113 20:34:08.078676 2477 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:34:08.078739 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:34:08.095080 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:34:08.095573 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:34:08.106001 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:34:08.236148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:34:08.242567 (kubelet)[2868]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:34:08.300843 kubelet[2868]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:34:08.300843 kubelet[2868]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:34:08.300843 kubelet[2868]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 20:34:08.301310 kubelet[2868]: I0113 20:34:08.300855 2868 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:34:08.308512 kubelet[2868]: I0113 20:34:08.308330 2868 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:34:08.308512 kubelet[2868]: I0113 20:34:08.308374 2868 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:34:08.308869 kubelet[2868]: I0113 20:34:08.308708 2868 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:34:08.313677 kubelet[2868]: I0113 20:34:08.313631 2868 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 20:34:08.316513 kubelet[2868]: I0113 20:34:08.316493 2868 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:34:08.320608 sudo[2883]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 20:34:08.321049 sudo[2883]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 20:34:08.326634 kubelet[2868]: I0113 20:34:08.325296 2868 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:34:08.326634 kubelet[2868]: I0113 20:34:08.325909 2868 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:34:08.326634 kubelet[2868]: I0113 20:34:08.326074 2868 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:34:08.326634 kubelet[2868]: I0113 20:34:08.326101 2868 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:34:08.326634 kubelet[2868]: I0113 20:34:08.326110 2868 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:34:08.326634 kubelet[2868]: I0113 20:34:08.326152 2868 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:34:08.327021 kubelet[2868]: I0113 20:34:08.326247 2868 kubelet.go:396] "Attempting to sync node with API server" Jan 13 
20:34:08.327021 kubelet[2868]: I0113 20:34:08.326286 2868 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:34:08.327021 kubelet[2868]: I0113 20:34:08.326317 2868 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:34:08.331011 kubelet[2868]: I0113 20:34:08.330848 2868 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:34:08.333959 kubelet[2868]: I0113 20:34:08.333937 2868 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:34:08.334812 kubelet[2868]: I0113 20:34:08.334394 2868 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:34:08.334865 kubelet[2868]: I0113 20:34:08.334840 2868 server.go:1256] "Started kubelet" Jan 13 20:34:08.336560 kubelet[2868]: I0113 20:34:08.336539 2868 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:34:08.338527 kubelet[2868]: I0113 20:34:08.338500 2868 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:34:08.339993 kubelet[2868]: I0113 20:34:08.339969 2868 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:34:08.340176 kubelet[2868]: I0113 20:34:08.340157 2868 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:34:08.340437 kubelet[2868]: I0113 20:34:08.340419 2868 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:34:08.340724 kubelet[2868]: I0113 20:34:08.340707 2868 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:34:08.342476 kubelet[2868]: I0113 20:34:08.342235 2868 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:34:08.342476 kubelet[2868]: I0113 20:34:08.342450 2868 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:34:08.346163 kubelet[2868]: E0113 20:34:08.346132 2868 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:34:08.347160 kubelet[2868]: I0113 20:34:08.346311 2868 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:34:08.347160 kubelet[2868]: I0113 20:34:08.346396 2868 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:34:08.347590 kubelet[2868]: I0113 20:34:08.347571 2868 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:34:08.356420 kubelet[2868]: I0113 20:34:08.356388 2868 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:34:08.358849 kubelet[2868]: I0113 20:34:08.358492 2868 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:34:08.358849 kubelet[2868]: I0113 20:34:08.358524 2868 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:34:08.358849 kubelet[2868]: I0113 20:34:08.358543 2868 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:34:08.358849 kubelet[2868]: E0113 20:34:08.358616 2868 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:34:08.407549 kubelet[2868]: I0113 20:34:08.407225 2868 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:34:08.407549 kubelet[2868]: I0113 20:34:08.407248 2868 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:34:08.407549 kubelet[2868]: I0113 20:34:08.407266 2868 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:34:08.407549 kubelet[2868]: I0113 20:34:08.407415 2868 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:34:08.407549 kubelet[2868]: I0113 20:34:08.407437 2868 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:34:08.407549 kubelet[2868]: I0113 20:34:08.407445 2868 policy_none.go:49] "None policy: Start" Jan 13 20:34:08.408825 kubelet[2868]: I0113 20:34:08.408321 2868 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:34:08.408825 kubelet[2868]: I0113 20:34:08.408363 2868 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:34:08.408825 kubelet[2868]: I0113 20:34:08.408546 2868 state_mem.go:75] "Updated machine memory state" Jan 13 20:34:08.411520 kubelet[2868]: I0113 20:34:08.410355 2868 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:34:08.411775 kubelet[2868]: I0113 20:34:08.411750 2868 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:34:08.444994 kubelet[2868]: I0113 20:34:08.444967 2868 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:34:08.451749 kubelet[2868]: I0113 20:34:08.451708 2868 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 13 20:34:08.451877 kubelet[2868]: I0113 20:34:08.451811 2868 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 20:34:08.459677 kubelet[2868]: I0113 20:34:08.459649 2868 topology_manager.go:215] "Topology Admit Handler" podUID="d45589b8c3d4af34f1f4c850035d7735" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 20:34:08.459759 kubelet[2868]: I0113 20:34:08.459732 2868 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 20:34:08.459787 kubelet[2868]: I0113 20:34:08.459765 2868 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 20:34:08.642369 kubelet[2868]: I0113 20:34:08.642253 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:34:08.642369 kubelet[2868]: I0113 20:34:08.642304 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:34:08.642369 kubelet[2868]: I0113 20:34:08.642336 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:34:08.642553 kubelet[2868]: I0113 20:34:08.642384 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d45589b8c3d4af34f1f4c850035d7735-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d45589b8c3d4af34f1f4c850035d7735\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:34:08.642553 kubelet[2868]: I0113 20:34:08.642435 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d45589b8c3d4af34f1f4c850035d7735-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d45589b8c3d4af34f1f4c850035d7735\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:34:08.642553 kubelet[2868]: I0113 20:34:08.642464 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d45589b8c3d4af34f1f4c850035d7735-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d45589b8c3d4af34f1f4c850035d7735\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:34:08.642553 kubelet[2868]: I0113 20:34:08.642504 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:34:08.642553 kubelet[2868]: I0113 20:34:08.642530 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:34:08.642711 kubelet[2868]: I0113 20:34:08.642566 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:34:08.768367 kubelet[2868]: E0113 20:34:08.768064 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:08.768367 kubelet[2868]: E0113 20:34:08.768102 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:08.768367 kubelet[2868]: E0113 
20:34:08.768305 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:08.830965 sudo[2883]: pam_unix(sudo:session): session closed for user root Jan 13 20:34:09.331428 kubelet[2868]: I0113 20:34:09.331377 2868 apiserver.go:52] "Watching apiserver" Jan 13 20:34:09.341162 kubelet[2868]: I0113 20:34:09.341124 2868 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:34:09.372166 kubelet[2868]: E0113 20:34:09.372126 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:09.372734 kubelet[2868]: E0113 20:34:09.372241 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:09.378248 kubelet[2868]: E0113 20:34:09.377831 2868 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 13 20:34:09.395237 kubelet[2868]: E0113 20:34:09.389898 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:09.429144 kubelet[2868]: I0113 20:34:09.429099 2868 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.429036279 podStartE2EDuration="1.429036279s" podCreationTimestamp="2025-01-13 20:34:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:34:09.428946989 +0000 UTC m=+1.182082178" watchObservedRunningTime="2025-01-13 20:34:09.429036279 +0000 UTC m=+1.182171458" Jan 13 20:34:09.441309 kubelet[2868]: I0113 20:34:09.441224 2868 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.441194942 podStartE2EDuration="1.441194942s" podCreationTimestamp="2025-01-13 20:34:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:34:09.436215862 +0000 UTC m=+1.189351041" watchObservedRunningTime="2025-01-13 20:34:09.441194942 +0000 UTC m=+1.194330121" Jan 13 20:34:09.441309 kubelet[2868]: I0113 20:34:09.441286 2868 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.4412721689999999 podStartE2EDuration="1.441272169s" podCreationTimestamp="2025-01-13 20:34:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:34:09.441171568 +0000 UTC m=+1.194306757" watchObservedRunningTime="2025-01-13 20:34:09.441272169 +0000 UTC m=+1.194407338" Jan 13 20:34:10.373351 kubelet[2868]: E0113 20:34:10.373327 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:10.470942 sudo[1802]: pam_unix(sudo:session): session closed for user root Jan 13 20:34:10.472419 sshd[1801]: Connection closed by 10.0.0.1 
port 46904 Jan 13 20:34:10.472824 sshd-session[1795]: pam_unix(sshd:session): session closed for user core Jan 13 20:34:10.476765 systemd[1]: sshd@6-10.0.0.43:22-10.0.0.1:46904.service: Deactivated successfully. Jan 13 20:34:10.479169 systemd-logind[1591]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:34:10.479297 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:34:10.480381 systemd-logind[1591]: Removed session 7. Jan 13 20:34:11.375184 kubelet[2868]: E0113 20:34:11.375150 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:14.507459 kubelet[2868]: E0113 20:34:14.507044 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:15.381519 kubelet[2868]: E0113 20:34:15.381489 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:17.605984 kubelet[2868]: E0113 20:34:17.605932 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:18.384399 kubelet[2868]: E0113 20:34:18.384330 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:19.678669 kubelet[2868]: E0113 20:34:19.678626 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:21.955196 kubelet[2868]: I0113 20:34:21.955154 2868 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:34:21.955761 kubelet[2868]: I0113 20:34:21.955663 2868 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:34:21.955821 containerd[1611]: time="2025-01-13T20:34:21.955460624Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 13 20:34:22.456587 kubelet[2868]: I0113 20:34:22.456163 2868 topology_manager.go:215] "Topology Admit Handler" podUID="953437e0-2e4e-4cb2-bfd9-d5c4a3fcb901" podNamespace="kube-system" podName="kube-proxy-5xmzj" Jan 13 20:34:22.464870 kubelet[2868]: I0113 20:34:22.462872 2868 topology_manager.go:215] "Topology Admit Handler" podUID="94863509-97b6-4f9e-97b2-7df3a8beef84" podNamespace="kube-system" podName="cilium-d4xlp" Jan 13 20:34:22.536095 kubelet[2868]: I0113 20:34:22.536026 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlk46\" (UniqueName: \"kubernetes.io/projected/953437e0-2e4e-4cb2-bfd9-d5c4a3fcb901-kube-api-access-tlk46\") pod \"kube-proxy-5xmzj\" (UID: \"953437e0-2e4e-4cb2-bfd9-d5c4a3fcb901\") " pod="kube-system/kube-proxy-5xmzj" Jan 13 20:34:22.536095 kubelet[2868]: I0113 20:34:22.536081 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94863509-97b6-4f9e-97b2-7df3a8beef84-cilium-config-path\") pod \"cilium-d4xlp\" (UID: \"94863509-97b6-4f9e-97b2-7df3a8beef84\") " pod="kube-system/cilium-d4xlp" Jan 13 20:34:22.536095 kubelet[2868]: I0113 20:34:22.536104 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-hostproc\") pod \"cilium-d4xlp\" (UID: \"94863509-97b6-4f9e-97b2-7df3a8beef84\") " pod="kube-system/cilium-d4xlp" Jan 13 20:34:22.536451 kubelet[2868]: I0113 20:34:22.536289 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-bpf-maps\") pod \"cilium-d4xlp\" (UID: \"94863509-97b6-4f9e-97b2-7df3a8beef84\") " pod="kube-system/cilium-d4xlp" Jan 13 20:34:22.536451 kubelet[2868]: I0113 20:34:22.536347 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-cilium-cgroup\") pod \"cilium-d4xlp\" (UID: \"94863509-97b6-4f9e-97b2-7df3a8beef84\") " pod="kube-system/cilium-d4xlp" Jan 13 20:34:22.536451 kubelet[2868]: I0113 20:34:22.536365 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-cni-path\") pod \"cilium-d4xlp\" (UID: \"94863509-97b6-4f9e-97b2-7df3a8beef84\") " pod="kube-system/cilium-d4xlp" Jan 13 20:34:22.536571 kubelet[2868]: I0113 20:34:22.536464 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-host-proc-sys-kernel\") pod \"cilium-d4xlp\" (UID: \"94863509-97b6-4f9e-97b2-7df3a8beef84\") " pod="kube-system/cilium-d4xlp" Jan 13 20:34:22.536571 kubelet[2868]: I0113 20:34:22.536512 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/94863509-97b6-4f9e-97b2-7df3a8beef84-hubble-tls\") pod \"cilium-d4xlp\" (UID: \"94863509-97b6-4f9e-97b2-7df3a8beef84\") " pod="kube-system/cilium-d4xlp" Jan 13 20:34:22.536571 kubelet[2868]: I0113 20:34:22.536540 2868 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-cilium-run\") pod \"cilium-d4xlp\" (UID: \"94863509-97b6-4f9e-97b2-7df3a8beef84\") " pod="kube-system/cilium-d4xlp" Jan 13 20:34:22.536571 kubelet[2868]: I0113 20:34:22.536563 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwxrs\" (UniqueName: \"kubernetes.io/projected/94863509-97b6-4f9e-97b2-7df3a8beef84-kube-api-access-fwxrs\") pod \"cilium-d4xlp\" (UID: \"94863509-97b6-4f9e-97b2-7df3a8beef84\") " pod="kube-system/cilium-d4xlp" Jan 13 20:34:22.536711 kubelet[2868]: I0113 20:34:22.536586 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-xtables-lock\") pod \"cilium-d4xlp\" (UID: \"94863509-97b6-4f9e-97b2-7df3a8beef84\") " pod="kube-system/cilium-d4xlp" Jan 13 20:34:22.536711 kubelet[2868]: I0113 20:34:22.536634 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/953437e0-2e4e-4cb2-bfd9-d5c4a3fcb901-xtables-lock\") pod \"kube-proxy-5xmzj\" (UID: \"953437e0-2e4e-4cb2-bfd9-d5c4a3fcb901\") " pod="kube-system/kube-proxy-5xmzj" Jan 13 20:34:22.536711 kubelet[2868]: I0113 20:34:22.536664 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/953437e0-2e4e-4cb2-bfd9-d5c4a3fcb901-lib-modules\") pod \"kube-proxy-5xmzj\" (UID: \"953437e0-2e4e-4cb2-bfd9-d5c4a3fcb901\") " pod="kube-system/kube-proxy-5xmzj" Jan 13 20:34:22.536711 kubelet[2868]: I0113 20:34:22.536689 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/953437e0-2e4e-4cb2-bfd9-d5c4a3fcb901-kube-proxy\") pod \"kube-proxy-5xmzj\" (UID: \"953437e0-2e4e-4cb2-bfd9-d5c4a3fcb901\") " pod="kube-system/kube-proxy-5xmzj" Jan 13 20:34:22.536711 kubelet[2868]: I0113 20:34:22.536715 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-etc-cni-netd\") pod \"cilium-d4xlp\" (UID: \"94863509-97b6-4f9e-97b2-7df3a8beef84\") " pod="kube-system/cilium-d4xlp" Jan 13 20:34:22.536931 kubelet[2868]: I0113 20:34:22.536749 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-lib-modules\") pod \"cilium-d4xlp\" (UID: \"94863509-97b6-4f9e-97b2-7df3a8beef84\") " pod="kube-system/cilium-d4xlp" Jan 13 20:34:22.536931 kubelet[2868]: I0113 20:34:22.536785 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-host-proc-sys-net\") pod \"cilium-d4xlp\" (UID: \"94863509-97b6-4f9e-97b2-7df3a8beef84\") " pod="kube-system/cilium-d4xlp" Jan 13 20:34:22.536931 kubelet[2868]: I0113 20:34:22.536849 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/94863509-97b6-4f9e-97b2-7df3a8beef84-clustermesh-secrets\") pod \"cilium-d4xlp\" (UID: \"94863509-97b6-4f9e-97b2-7df3a8beef84\") " pod="kube-system/cilium-d4xlp" Jan 13 20:34:22.563033 kubelet[2868]: I0113 20:34:22.562984 2868 topology_manager.go:215] "Topology Admit Handler" podUID="234c5c45-b959-4288-9964-20f45bb4aa01" podNamespace="kube-system" podName="cilium-operator-5cc964979-sj6q9" Jan 13 20:34:22.638404 kubelet[2868]: I0113 20:34:22.638371 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/234c5c45-b959-4288-9964-20f45bb4aa01-cilium-config-path\") pod \"cilium-operator-5cc964979-sj6q9\" (UID: \"234c5c45-b959-4288-9964-20f45bb4aa01\") " pod="kube-system/cilium-operator-5cc964979-sj6q9" Jan 13 20:34:22.638404 kubelet[2868]: I0113 20:34:22.638417 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlgrg\" (UniqueName: \"kubernetes.io/projected/234c5c45-b959-4288-9964-20f45bb4aa01-kube-api-access-jlgrg\") pod \"cilium-operator-5cc964979-sj6q9\" (UID: \"234c5c45-b959-4288-9964-20f45bb4aa01\") " pod="kube-system/cilium-operator-5cc964979-sj6q9" Jan 13 20:34:22.765401 kubelet[2868]: E0113 20:34:22.765358 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:22.765998 containerd[1611]: time="2025-01-13T20:34:22.765964893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5xmzj,Uid:953437e0-2e4e-4cb2-bfd9-d5c4a3fcb901,Namespace:kube-system,Attempt:0,}" Jan 13 20:34:22.774351 kubelet[2868]: E0113 20:34:22.774315 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:22.774821 containerd[1611]: time="2025-01-13T20:34:22.774756519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d4xlp,Uid:94863509-97b6-4f9e-97b2-7df3a8beef84,Namespace:kube-system,Attempt:0,}" Jan 13 20:34:22.797251 containerd[1611]: time="2025-01-13T20:34:22.797161618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:34:22.797251 containerd[1611]: time="2025-01-13T20:34:22.797215589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:34:22.797251 containerd[1611]: time="2025-01-13T20:34:22.797231098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:22.799816 containerd[1611]: time="2025-01-13T20:34:22.797421928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:22.802969 containerd[1611]: time="2025-01-13T20:34:22.802890672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:34:22.803176 containerd[1611]: time="2025-01-13T20:34:22.803151744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:34:22.803590 containerd[1611]: time="2025-01-13T20:34:22.803562238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:22.803782 containerd[1611]: time="2025-01-13T20:34:22.803760582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:22.845839 containerd[1611]: time="2025-01-13T20:34:22.845500011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5xmzj,Uid:953437e0-2e4e-4cb2-bfd9-d5c4a3fcb901,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcd3410df2861c3e6dc4bea52a3a44073d994c11ce97862b84ff98696e0bb1cd\"" Jan 13 20:34:22.846886 kubelet[2868]: E0113 20:34:22.846866 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:22.851762 containerd[1611]: time="2025-01-13T20:34:22.850859661Z" level=info msg="CreateContainer within sandbox \"dcd3410df2861c3e6dc4bea52a3a44073d994c11ce97862b84ff98696e0bb1cd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:34:22.851964 containerd[1611]: time="2025-01-13T20:34:22.851910180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d4xlp,Uid:94863509-97b6-4f9e-97b2-7df3a8beef84,Namespace:kube-system,Attempt:0,} returns sandbox id \"69c509f8204e9478312c00bd99d6a7542579f43be72e77aaff499416f1e8ff39\"" Jan 13 20:34:22.852634 kubelet[2868]: E0113 20:34:22.852603 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:22.853581 containerd[1611]: time="2025-01-13T20:34:22.853548227Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 20:34:22.871649 kubelet[2868]: E0113 20:34:22.871622 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:22.872088 containerd[1611]: time="2025-01-13T20:34:22.872052885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-sj6q9,Uid:234c5c45-b959-4288-9964-20f45bb4aa01,Namespace:kube-system,Attempt:0,}" Jan 13 20:34:22.873379 containerd[1611]: time="2025-01-13T20:34:22.873333518Z" level=info msg="CreateContainer within sandbox \"dcd3410df2861c3e6dc4bea52a3a44073d994c11ce97862b84ff98696e0bb1cd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"183b9c049a808f5cffb73a7f7a5c0e31a7d717e954f08cd500b40f9173f92674\"" Jan 13 20:34:22.873924 containerd[1611]: time="2025-01-13T20:34:22.873807481Z" level=info msg="StartContainer for \"183b9c049a808f5cffb73a7f7a5c0e31a7d717e954f08cd500b40f9173f92674\"" Jan 13 20:34:22.899440 containerd[1611]: time="2025-01-13T20:34:22.899347055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:34:22.899440 containerd[1611]: time="2025-01-13T20:34:22.899409984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:34:22.899440 containerd[1611]: time="2025-01-13T20:34:22.899424221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:22.900312 containerd[1611]: time="2025-01-13T20:34:22.899691263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:22.947131 containerd[1611]: time="2025-01-13T20:34:22.942992997Z" level=info msg="StartContainer for \"183b9c049a808f5cffb73a7f7a5c0e31a7d717e954f08cd500b40f9173f92674\" returns successfully" Jan 13 20:34:22.955280 containerd[1611]: time="2025-01-13T20:34:22.955235887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-sj6q9,Uid:234c5c45-b959-4288-9964-20f45bb4aa01,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3b7bd4c78a600edc0da5aafad75e4b08efe9b32c3495fdbf43119964d914bb1\"" Jan 13 20:34:22.956021 kubelet[2868]: E0113 20:34:22.955999 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:23.394462 kubelet[2868]: E0113 20:34:23.394416 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:27.209021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1821309249.mount: Deactivated successfully. Jan 13 20:34:28.476150 kubelet[2868]: I0113 20:34:28.476103 2868 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5xmzj" podStartSLOduration=6.476062362 podStartE2EDuration="6.476062362s" podCreationTimestamp="2025-01-13 20:34:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:34:23.403438439 +0000 UTC m=+15.156573628" watchObservedRunningTime="2025-01-13 20:34:28.476062362 +0000 UTC m=+20.229197541" Jan 13 20:34:33.727460 containerd[1611]: time="2025-01-13T20:34:33.727384044Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:33.728368 containerd[1611]: time="2025-01-13T20:34:33.728295587Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166734675" Jan 13 20:34:33.729334 containerd[1611]: time="2025-01-13T20:34:33.729296590Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:33.730980 containerd[1611]: time="2025-01-13T20:34:33.730956820Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.877371675s" Jan 13 20:34:33.731023 containerd[1611]: time="2025-01-13T20:34:33.730984312Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 13 20:34:33.732448 containerd[1611]: time="2025-01-13T20:34:33.732420873Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 20:34:33.733760 containerd[1611]: time="2025-01-13T20:34:33.733675120Z" level=info msg="CreateContainer within sandbox \"69c509f8204e9478312c00bd99d6a7542579f43be72e77aaff499416f1e8ff39\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:34:33.746626 containerd[1611]: time="2025-01-13T20:34:33.746572664Z" level=info msg="CreateContainer within sandbox \"69c509f8204e9478312c00bd99d6a7542579f43be72e77aaff499416f1e8ff39\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8c3973eea75c10d25bdd19f504ca5c03b5dc67d6516bcb4769dcfc6660888c8e\"" Jan 13 20:34:33.746886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3453393687.mount: Deactivated successfully. Jan 13 20:34:33.748066 containerd[1611]: time="2025-01-13T20:34:33.748031857Z" level=info msg="StartContainer for \"8c3973eea75c10d25bdd19f504ca5c03b5dc67d6516bcb4769dcfc6660888c8e\"" Jan 13 20:34:33.802075 containerd[1611]: time="2025-01-13T20:34:33.802039328Z" level=info msg="StartContainer for \"8c3973eea75c10d25bdd19f504ca5c03b5dc67d6516bcb4769dcfc6660888c8e\" returns successfully" Jan 13 20:34:34.414117 kubelet[2868]: E0113 20:34:34.413725 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:34.744450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c3973eea75c10d25bdd19f504ca5c03b5dc67d6516bcb4769dcfc6660888c8e-rootfs.mount: Deactivated successfully. Jan 13 20:34:34.854932 containerd[1611]: time="2025-01-13T20:34:34.854875036Z" level=info msg="shim disconnected" id=8c3973eea75c10d25bdd19f504ca5c03b5dc67d6516bcb4769dcfc6660888c8e namespace=k8s.io Jan 13 20:34:34.854932 containerd[1611]: time="2025-01-13T20:34:34.854923777Z" level=warning msg="cleaning up after shim disconnected" id=8c3973eea75c10d25bdd19f504ca5c03b5dc67d6516bcb4769dcfc6660888c8e namespace=k8s.io Jan 13 20:34:34.854932 containerd[1611]: time="2025-01-13T20:34:34.854933726Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:34:35.416285 kubelet[2868]: E0113 20:34:35.416118 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:35.418416 containerd[1611]: time="2025-01-13T20:34:35.418252685Z" level=info msg="CreateContainer within sandbox \"69c509f8204e9478312c00bd99d6a7542579f43be72e77aaff499416f1e8ff39\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:34:35.436826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount790212184.mount: Deactivated successfully. 
Jan 13 20:34:35.438203 containerd[1611]: time="2025-01-13T20:34:35.438166487Z" level=info msg="CreateContainer within sandbox \"69c509f8204e9478312c00bd99d6a7542579f43be72e77aaff499416f1e8ff39\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d81e918478b0c58c523a98ab003c434a2fcc895f8f30e034118678af1d4a76b3\"" Jan 13 20:34:35.438754 containerd[1611]: time="2025-01-13T20:34:35.438719216Z" level=info msg="StartContainer for \"d81e918478b0c58c523a98ab003c434a2fcc895f8f30e034118678af1d4a76b3\"" Jan 13 20:34:35.493158 containerd[1611]: time="2025-01-13T20:34:35.493118063Z" level=info msg="StartContainer for \"d81e918478b0c58c523a98ab003c434a2fcc895f8f30e034118678af1d4a76b3\" returns successfully" Jan 13 20:34:35.504789 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:34:35.505265 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:34:35.505478 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:34:35.512158 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:34:35.531719 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:34:35.532562 containerd[1611]: time="2025-01-13T20:34:35.532100019Z" level=info msg="shim disconnected" id=d81e918478b0c58c523a98ab003c434a2fcc895f8f30e034118678af1d4a76b3 namespace=k8s.io Jan 13 20:34:35.532562 containerd[1611]: time="2025-01-13T20:34:35.532146878Z" level=warning msg="cleaning up after shim disconnected" id=d81e918478b0c58c523a98ab003c434a2fcc895f8f30e034118678af1d4a76b3 namespace=k8s.io Jan 13 20:34:35.532562 containerd[1611]: time="2025-01-13T20:34:35.532157277Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:34:35.590165 systemd[1]: Started sshd@7-10.0.0.43:22-10.0.0.1:40124.service - OpenSSH per-connection server daemon (10.0.0.1:40124). Jan 13 20:34:35.627566 sshd[3386]: Accepted publickey for core from 10.0.0.1 port 40124 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:34:35.629373 sshd-session[3386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:34:35.633366 systemd-logind[1591]: New session 8 of user core. Jan 13 20:34:35.642081 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:34:35.744224 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d81e918478b0c58c523a98ab003c434a2fcc895f8f30e034118678af1d4a76b3-rootfs.mount: Deactivated successfully. Jan 13 20:34:35.766319 sshd[3389]: Connection closed by 10.0.0.1 port 40124 Jan 13 20:34:35.766678 sshd-session[3386]: pam_unix(sshd:session): session closed for user core Jan 13 20:34:35.770637 systemd[1]: sshd@7-10.0.0.43:22-10.0.0.1:40124.service: Deactivated successfully. Jan 13 20:34:35.772845 systemd-logind[1591]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:34:35.772865 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:34:35.774048 systemd-logind[1591]: Removed session 8. Jan 13 20:34:36.018667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount911424742.mount: Deactivated successfully. 
Jan 13 20:34:36.327488 containerd[1611]: time="2025-01-13T20:34:36.327366702Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:36.328139 containerd[1611]: time="2025-01-13T20:34:36.328094730Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907177" Jan 13 20:34:36.329232 containerd[1611]: time="2025-01-13T20:34:36.329198234Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:36.332824 containerd[1611]: time="2025-01-13T20:34:36.331901663Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.599442198s" Jan 13 20:34:36.332824 containerd[1611]: time="2025-01-13T20:34:36.331947229Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 13 20:34:36.335031 containerd[1611]: time="2025-01-13T20:34:36.334981611Z" level=info msg="CreateContainer within sandbox \"b3b7bd4c78a600edc0da5aafad75e4b08efe9b32c3495fdbf43119964d914bb1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 20:34:36.346016 containerd[1611]: time="2025-01-13T20:34:36.345975126Z" level=info msg="CreateContainer within sandbox \"b3b7bd4c78a600edc0da5aafad75e4b08efe9b32c3495fdbf43119964d914bb1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4c4e3f53f67174c6d6d63f79524ad49496bd4d5fc01fcf0ce6840c5514baf268\"" Jan 13 20:34:36.346355 containerd[1611]: time="2025-01-13T20:34:36.346331587Z" level=info msg="StartContainer for \"4c4e3f53f67174c6d6d63f79524ad49496bd4d5fc01fcf0ce6840c5514baf268\"" Jan 13 20:34:36.400276 containerd[1611]: time="2025-01-13T20:34:36.400238586Z" level=info msg="StartContainer for \"4c4e3f53f67174c6d6d63f79524ad49496bd4d5fc01fcf0ce6840c5514baf268\" returns successfully" Jan 13 20:34:36.422882 kubelet[2868]: E0113 20:34:36.422389 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:36.425464 kubelet[2868]: E0113 20:34:36.425429 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:36.426170 containerd[1611]: time="2025-01-13T20:34:36.425993951Z" level=info msg="CreateContainer within sandbox \"69c509f8204e9478312c00bd99d6a7542579f43be72e77aaff499416f1e8ff39\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:34:36.449577 containerd[1611]: time="2025-01-13T20:34:36.449432401Z" level=info msg="CreateContainer within sandbox \"69c509f8204e9478312c00bd99d6a7542579f43be72e77aaff499416f1e8ff39\" for 
&ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9e8acaf496266def8b1b9441c8b6da611c705aa23ed06c3b1f769d20aa1380a8\"" Jan 13 20:34:36.452097 containerd[1611]: time="2025-01-13T20:34:36.450361075Z" level=info msg="StartContainer for \"9e8acaf496266def8b1b9441c8b6da611c705aa23ed06c3b1f769d20aa1380a8\"" Jan 13 20:34:36.455172 kubelet[2868]: I0113 20:34:36.455130 2868 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-sj6q9" podStartSLOduration=1.0789813 podStartE2EDuration="14.45508943s" podCreationTimestamp="2025-01-13 20:34:22 +0000 UTC" firstStartedPulling="2025-01-13 20:34:22.956848605 +0000 UTC m=+14.709983784" lastFinishedPulling="2025-01-13 20:34:36.332956735 +0000 UTC m=+28.086091914" observedRunningTime="2025-01-13 20:34:36.455076155 +0000 UTC m=+28.208211344" watchObservedRunningTime="2025-01-13 20:34:36.45508943 +0000 UTC m=+28.208224609" Jan 13 20:34:36.595021 containerd[1611]: time="2025-01-13T20:34:36.592868956Z" level=info msg="StartContainer for \"9e8acaf496266def8b1b9441c8b6da611c705aa23ed06c3b1f769d20aa1380a8\" returns successfully" Jan 13 20:34:36.749182 containerd[1611]: time="2025-01-13T20:34:36.749119597Z" level=info msg="shim disconnected" id=9e8acaf496266def8b1b9441c8b6da611c705aa23ed06c3b1f769d20aa1380a8 namespace=k8s.io Jan 13 20:34:36.749182 containerd[1611]: time="2025-01-13T20:34:36.749171455Z" level=warning msg="cleaning up after shim disconnected" id=9e8acaf496266def8b1b9441c8b6da611c705aa23ed06c3b1f769d20aa1380a8 namespace=k8s.io Jan 13 20:34:36.749182 containerd[1611]: time="2025-01-13T20:34:36.749179500Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:34:37.429709 kubelet[2868]: E0113 20:34:37.429657 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:37.430229 kubelet[2868]: E0113 20:34:37.429720 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:37.432065 containerd[1611]: time="2025-01-13T20:34:37.431998095Z" level=info msg="CreateContainer within sandbox \"69c509f8204e9478312c00bd99d6a7542579f43be72e77aaff499416f1e8ff39\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:34:37.453332 containerd[1611]: time="2025-01-13T20:34:37.453281390Z" level=info msg="CreateContainer within sandbox \"69c509f8204e9478312c00bd99d6a7542579f43be72e77aaff499416f1e8ff39\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"90212acccf20e24af88e27881baa4322b566394dcf5d1987e46ae8aff6a46317\"" Jan 13 20:34:37.454211 containerd[1611]: time="2025-01-13T20:34:37.454178586Z" level=info msg="StartContainer for \"90212acccf20e24af88e27881baa4322b566394dcf5d1987e46ae8aff6a46317\"" Jan 13 20:34:37.531737 containerd[1611]: time="2025-01-13T20:34:37.531579053Z" level=info msg="StartContainer for \"90212acccf20e24af88e27881baa4322b566394dcf5d1987e46ae8aff6a46317\" returns successfully" Jan 13 20:34:37.550785 containerd[1611]: time="2025-01-13T20:34:37.550703142Z" level=info msg="shim disconnected" id=90212acccf20e24af88e27881baa4322b566394dcf5d1987e46ae8aff6a46317 namespace=k8s.io Jan 13 20:34:37.550785 containerd[1611]: time="2025-01-13T20:34:37.550773474Z" level=warning msg="cleaning up after shim disconnected" 
id=90212acccf20e24af88e27881baa4322b566394dcf5d1987e46ae8aff6a46317 namespace=k8s.io Jan 13 20:34:37.550785 containerd[1611]: time="2025-01-13T20:34:37.550781880Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:34:37.744383 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90212acccf20e24af88e27881baa4322b566394dcf5d1987e46ae8aff6a46317-rootfs.mount: Deactivated successfully. Jan 13 20:34:38.432672 kubelet[2868]: E0113 20:34:38.432644 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:38.435039 containerd[1611]: time="2025-01-13T20:34:38.435006507Z" level=info msg="CreateContainer within sandbox \"69c509f8204e9478312c00bd99d6a7542579f43be72e77aaff499416f1e8ff39\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:34:38.489736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3760034843.mount: Deactivated successfully. Jan 13 20:34:38.490231 containerd[1611]: time="2025-01-13T20:34:38.490181145Z" level=info msg="CreateContainer within sandbox \"69c509f8204e9478312c00bd99d6a7542579f43be72e77aaff499416f1e8ff39\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4903aad350bec686ba8d1791bdd8985178c3f65bac4d1fb0368ed625c7403767\"" Jan 13 20:34:38.490837 containerd[1611]: time="2025-01-13T20:34:38.490783076Z" level=info msg="StartContainer for \"4903aad350bec686ba8d1791bdd8985178c3f65bac4d1fb0368ed625c7403767\"" Jan 13 20:34:38.564604 containerd[1611]: time="2025-01-13T20:34:38.564552685Z" level=info msg="StartContainer for \"4903aad350bec686ba8d1791bdd8985178c3f65bac4d1fb0368ed625c7403767\" returns successfully" Jan 13 20:34:38.749653 kubelet[2868]: I0113 20:34:38.749603 2868 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:34:38.780093 kubelet[2868]: I0113 20:34:38.779767 2868 topology_manager.go:215] "Topology Admit Handler" podUID="26845f0b-41a7-481c-892e-0cd6e192d879" podNamespace="kube-system" podName="coredns-76f75df574-csg5d" Jan 13 20:34:38.780628 kubelet[2868]: I0113 20:34:38.780489 2868 topology_manager.go:215] "Topology Admit Handler" podUID="949ff9f9-47af-4ed4-a54d-747969f1e184" podNamespace="kube-system" podName="coredns-76f75df574-c6pjd" Jan 13 20:34:38.967963 kubelet[2868]: I0113 20:34:38.967899 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/949ff9f9-47af-4ed4-a54d-747969f1e184-config-volume\") pod \"coredns-76f75df574-c6pjd\" (UID: \"949ff9f9-47af-4ed4-a54d-747969f1e184\") " pod="kube-system/coredns-76f75df574-c6pjd" Jan 13 20:34:38.968084 kubelet[2868]: I0113 20:34:38.967977 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkbxg\" (UniqueName: \"kubernetes.io/projected/949ff9f9-47af-4ed4-a54d-747969f1e184-kube-api-access-rkbxg\") pod \"coredns-76f75df574-c6pjd\" (UID: \"949ff9f9-47af-4ed4-a54d-747969f1e184\") " pod="kube-system/coredns-76f75df574-c6pjd" Jan 13 20:34:38.968084 kubelet[2868]: I0113 20:34:38.967998 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzdtp\" (UniqueName: \"kubernetes.io/projected/26845f0b-41a7-481c-892e-0cd6e192d879-kube-api-access-wzdtp\") pod \"coredns-76f75df574-csg5d\" (UID: \"26845f0b-41a7-481c-892e-0cd6e192d879\") " 
pod="kube-system/coredns-76f75df574-csg5d" Jan 13 20:34:38.968084 kubelet[2868]: I0113 20:34:38.968017 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26845f0b-41a7-481c-892e-0cd6e192d879-config-volume\") pod \"coredns-76f75df574-csg5d\" (UID: \"26845f0b-41a7-481c-892e-0cd6e192d879\") " pod="kube-system/coredns-76f75df574-csg5d" Jan 13 20:34:39.087135 kubelet[2868]: E0113 20:34:39.086661 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:39.087135 kubelet[2868]: E0113 20:34:39.086769 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:39.087618 containerd[1611]: time="2025-01-13T20:34:39.087559497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-csg5d,Uid:26845f0b-41a7-481c-892e-0cd6e192d879,Namespace:kube-system,Attempt:0,}" Jan 13 20:34:39.087704 containerd[1611]: time="2025-01-13T20:34:39.087578724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-c6pjd,Uid:949ff9f9-47af-4ed4-a54d-747969f1e184,Namespace:kube-system,Attempt:0,}" Jan 13 20:34:39.436997 kubelet[2868]: E0113 20:34:39.436885 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:39.551286 kubelet[2868]: I0113 20:34:39.551012 2868 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-d4xlp" podStartSLOduration=6.672734061 podStartE2EDuration="17.550968168s" podCreationTimestamp="2025-01-13 20:34:22 +0000 UTC" firstStartedPulling="2025-01-13 20:34:22.853163812 +0000 UTC m=+14.606298991" lastFinishedPulling="2025-01-13 20:34:33.731397919 +0000 UTC m=+25.484533098" observedRunningTime="2025-01-13 20:34:39.550752393 +0000 UTC m=+31.303887592" watchObservedRunningTime="2025-01-13 20:34:39.550968168 +0000 UTC m=+31.304103337" Jan 13 20:34:40.438561 kubelet[2868]: E0113 20:34:40.438532 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:40.700953 systemd-networkd[1246]: cilium_host: Link UP Jan 13 20:34:40.701169 systemd-networkd[1246]: cilium_net: Link UP Jan 13 20:34:40.701366 systemd-networkd[1246]: cilium_net: Gained carrier Jan 13 20:34:40.701537 systemd-networkd[1246]: cilium_host: Gained carrier Jan 13 20:34:40.776163 systemd[1]: Started sshd@8-10.0.0.43:22-10.0.0.1:40128.service - OpenSSH per-connection server daemon (10.0.0.1:40128). Jan 13 20:34:40.815666 systemd-networkd[1246]: cilium_vxlan: Link UP Jan 13 20:34:40.815678 systemd-networkd[1246]: cilium_vxlan: Gained carrier Jan 13 20:34:40.826352 sshd[3762]: Accepted publickey for core from 10.0.0.1 port 40128 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:34:40.826787 sshd-session[3762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:34:40.832357 systemd-logind[1591]: New session 9 of user core. Jan 13 20:34:40.839349 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 13 20:34:40.963782 sshd[3796]: Connection closed by 10.0.0.1 port 40128 Jan 13 20:34:40.965014 sshd-session[3762]: pam_unix(sshd:session): session closed for user core Jan 13 20:34:40.970417 systemd[1]: sshd@8-10.0.0.43:22-10.0.0.1:40128.service: Deactivated successfully. Jan 13 20:34:40.971599 systemd-logind[1591]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:34:40.975417 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:34:40.976740 systemd-logind[1591]: Removed session 9. Jan 13 20:34:41.048831 kernel: NET: Registered PF_ALG protocol family Jan 13 20:34:41.282892 systemd-networkd[1246]: cilium_host: Gained IPv6LL Jan 13 20:34:41.439864 kubelet[2868]: E0113 20:34:41.439837 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:41.686473 systemd-networkd[1246]: lxc_health: Link UP Jan 13 20:34:41.690975 systemd-networkd[1246]: lxc_health: Gained carrier Jan 13 20:34:41.731962 systemd-networkd[1246]: cilium_net: Gained IPv6LL Jan 13 20:34:41.985982 systemd-networkd[1246]: cilium_vxlan: Gained IPv6LL Jan 13 20:34:42.172146 systemd-networkd[1246]: lxc60726f7b4029: Link UP Jan 13 20:34:42.182825 kernel: eth0: renamed from tmp3937f Jan 13 20:34:42.190093 systemd-networkd[1246]: lxc60726f7b4029: Gained carrier Jan 13 20:34:42.193667 systemd-networkd[1246]: lxc0b27c8e97b4f: Link UP Jan 13 20:34:42.203941 kernel: eth0: renamed from tmpb3182 Jan 13 20:34:42.210403 systemd-networkd[1246]: lxc0b27c8e97b4f: Gained carrier Jan 13 20:34:42.776457 kubelet[2868]: E0113 20:34:42.776360 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:43.443038 kubelet[2868]: E0113 20:34:43.442945 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:43.521956 systemd-networkd[1246]: lxc_health: Gained IPv6LL Jan 13 20:34:44.034020 systemd-networkd[1246]: lxc0b27c8e97b4f: Gained IPv6LL Jan 13 20:34:44.225977 systemd-networkd[1246]: lxc60726f7b4029: Gained IPv6LL Jan 13 20:34:44.444468 kubelet[2868]: E0113 20:34:44.444351 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:45.738439 containerd[1611]: time="2025-01-13T20:34:45.738295824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:34:45.738439 containerd[1611]: time="2025-01-13T20:34:45.738383999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:34:45.738439 containerd[1611]: time="2025-01-13T20:34:45.738400209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:45.739051 containerd[1611]: time="2025-01-13T20:34:45.738998242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:45.770272 containerd[1611]: time="2025-01-13T20:34:45.768380103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:34:45.770272 containerd[1611]: time="2025-01-13T20:34:45.769184733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:34:45.770272 containerd[1611]: time="2025-01-13T20:34:45.769202837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:45.770272 containerd[1611]: time="2025-01-13T20:34:45.769295602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:45.772095 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:34:45.797646 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:34:45.805275 containerd[1611]: time="2025-01-13T20:34:45.805144874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-csg5d,Uid:26845f0b-41a7-481c-892e-0cd6e192d879,Namespace:kube-system,Attempt:0,} returns sandbox id \"3937fc32db76a4871224d32efd7534ce19749fb0b73ee95cc056e6ef548ff9cf\"" Jan 13 20:34:45.805917 kubelet[2868]: E0113 20:34:45.805888 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:45.808183 containerd[1611]: time="2025-01-13T20:34:45.807933039Z" level=info msg="CreateContainer within sandbox \"3937fc32db76a4871224d32efd7534ce19749fb0b73ee95cc056e6ef548ff9cf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:34:45.828914 containerd[1611]: time="2025-01-13T20:34:45.828862322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-c6pjd,Uid:949ff9f9-47af-4ed4-a54d-747969f1e184,Namespace:kube-system,Attempt:0,} returns sandbox id \"b31829eed6b4cf7a096426baaad66f5438ae56bdb44332c4328e20e62d48bdde\"" Jan 13 20:34:45.829812 kubelet[2868]: E0113 20:34:45.829659 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:45.831443 containerd[1611]: time="2025-01-13T20:34:45.831405797Z" level=info msg="CreateContainer within sandbox \"b31829eed6b4cf7a096426baaad66f5438ae56bdb44332c4328e20e62d48bdde\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:34:45.975144 systemd[1]: Started sshd@9-10.0.0.43:22-10.0.0.1:39358.service - OpenSSH per-connection server daemon (10.0.0.1:39358). Jan 13 20:34:46.010472 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 39358 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:34:46.012098 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:34:46.016320 systemd-logind[1591]: New session 10 of user core. Jan 13 20:34:46.026075 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:34:46.244458 sshd[4191]: Connection closed by 10.0.0.1 port 39358 Jan 13 20:34:46.244816 sshd-session[4188]: pam_unix(sshd:session): session closed for user core Jan 13 20:34:46.249359 systemd[1]: sshd@9-10.0.0.43:22-10.0.0.1:39358.service: Deactivated successfully. Jan 13 20:34:46.251768 systemd[1]: session-10.scope: Deactivated successfully. 
Jan 13 20:34:46.252483 systemd-logind[1591]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:34:46.253365 systemd-logind[1591]: Removed session 10. Jan 13 20:34:46.318007 containerd[1611]: time="2025-01-13T20:34:46.317872360Z" level=info msg="CreateContainer within sandbox \"3937fc32db76a4871224d32efd7534ce19749fb0b73ee95cc056e6ef548ff9cf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"177531b130d887c6446df0d991fce718a94a60e3bdfca806c8cc97142a593487\"" Jan 13 20:34:46.318528 containerd[1611]: time="2025-01-13T20:34:46.318487355Z" level=info msg="StartContainer for \"177531b130d887c6446df0d991fce718a94a60e3bdfca806c8cc97142a593487\"" Jan 13 20:34:46.352818 containerd[1611]: time="2025-01-13T20:34:46.352758371Z" level=info msg="CreateContainer within sandbox \"b31829eed6b4cf7a096426baaad66f5438ae56bdb44332c4328e20e62d48bdde\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"136f9aa8f3aad3654a2d122ef6fd06025a7ad1efa941c7df113f0ace8675b8ff\"" Jan 13 20:34:46.353460 containerd[1611]: time="2025-01-13T20:34:46.353411317Z" level=info msg="StartContainer for \"136f9aa8f3aad3654a2d122ef6fd06025a7ad1efa941c7df113f0ace8675b8ff\"" Jan 13 20:34:46.412317 containerd[1611]: time="2025-01-13T20:34:46.412264890Z" level=info msg="StartContainer for \"177531b130d887c6446df0d991fce718a94a60e3bdfca806c8cc97142a593487\" returns successfully" Jan 13 20:34:46.464782 containerd[1611]: time="2025-01-13T20:34:46.464737645Z" level=info msg="StartContainer for \"136f9aa8f3aad3654a2d122ef6fd06025a7ad1efa941c7df113f0ace8675b8ff\" returns successfully" Jan 13 20:34:46.469739 kubelet[2868]: E0113 20:34:46.469706 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:46.473366 kubelet[2868]: E0113 20:34:46.473336 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:46.495706 kubelet[2868]: I0113 20:34:46.495650 2868 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-csg5d" podStartSLOduration=24.495611764 podStartE2EDuration="24.495611764s" podCreationTimestamp="2025-01-13 20:34:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:34:46.495219637 +0000 UTC m=+38.248354816" watchObservedRunningTime="2025-01-13 20:34:46.495611764 +0000 UTC m=+38.248746933" Jan 13 20:34:46.560019 kubelet[2868]: I0113 20:34:46.559971 2868 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-c6pjd" podStartSLOduration=24.559925016 podStartE2EDuration="24.559925016s" podCreationTimestamp="2025-01-13 20:34:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:34:46.559717136 +0000 UTC m=+38.312852315" watchObservedRunningTime="2025-01-13 20:34:46.559925016 +0000 UTC m=+38.313060195" Jan 13 20:34:47.476152 kubelet[2868]: E0113 20:34:47.476100 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:47.476152 kubelet[2868]: E0113 20:34:47.476103 2868 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:48.478074 kubelet[2868]: E0113 20:34:48.477958 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:48.478623 kubelet[2868]: E0113 20:34:48.478225 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:34:51.260116 systemd[1]: Started sshd@10-10.0.0.43:22-10.0.0.1:39364.service - OpenSSH per-connection server daemon (10.0.0.1:39364). Jan 13 20:34:51.297385 sshd[4291]: Accepted publickey for core from 10.0.0.1 port 39364 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:34:51.299227 sshd-session[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:34:51.303283 systemd-logind[1591]: New session 11 of user core. Jan 13 20:34:51.315160 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 20:34:51.434145 sshd[4294]: Connection closed by 10.0.0.1 port 39364 Jan 13 20:34:51.434515 sshd-session[4291]: pam_unix(sshd:session): session closed for user core Jan 13 20:34:51.442226 systemd[1]: Started sshd@11-10.0.0.43:22-10.0.0.1:39378.service - OpenSSH per-connection server daemon (10.0.0.1:39378). Jan 13 20:34:51.442767 systemd[1]: sshd@10-10.0.0.43:22-10.0.0.1:39364.service: Deactivated successfully. Jan 13 20:34:51.445054 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:34:51.446596 systemd-logind[1591]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:34:51.447583 systemd-logind[1591]: Removed session 11. Jan 13 20:34:51.474628 sshd[4305]: Accepted publickey for core from 10.0.0.1 port 39378 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:34:51.476140 sshd-session[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:34:51.480264 systemd-logind[1591]: New session 12 of user core. Jan 13 20:34:51.489066 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 20:34:51.770614 sshd[4310]: Connection closed by 10.0.0.1 port 39378 Jan 13 20:34:51.769964 sshd-session[4305]: pam_unix(sshd:session): session closed for user core Jan 13 20:34:51.783263 systemd[1]: Started sshd@12-10.0.0.43:22-10.0.0.1:39382.service - OpenSSH per-connection server daemon (10.0.0.1:39382). Jan 13 20:34:51.786122 systemd[1]: sshd@11-10.0.0.43:22-10.0.0.1:39378.service: Deactivated successfully. Jan 13 20:34:51.790407 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:34:51.794542 systemd-logind[1591]: Session 12 logged out. Waiting for processes to exit. Jan 13 20:34:51.797275 systemd-logind[1591]: Removed session 12. Jan 13 20:34:51.861640 sshd[4318]: Accepted publickey for core from 10.0.0.1 port 39382 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:34:51.863503 sshd-session[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:34:51.880206 systemd-logind[1591]: New session 13 of user core. Jan 13 20:34:51.890046 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 13 20:34:52.123316 sshd[4323]: Connection closed by 10.0.0.1 port 39382 Jan 13 20:34:52.121639 sshd-session[4318]: pam_unix(sshd:session): session closed for user core Jan 13 20:34:52.127077 systemd[1]: sshd@12-10.0.0.43:22-10.0.0.1:39382.service: Deactivated successfully. Jan 13 20:34:52.132912 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:34:52.134640 systemd-logind[1591]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:34:52.138137 systemd-logind[1591]: Removed session 13. Jan 13 20:34:57.138634 systemd[1]: Started sshd@13-10.0.0.43:22-10.0.0.1:51658.service - OpenSSH per-connection server daemon (10.0.0.1:51658). Jan 13 20:34:57.207371 sshd[4337]: Accepted publickey for core from 10.0.0.1 port 51658 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:34:57.208917 sshd-session[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:34:57.221151 systemd-logind[1591]: New session 14 of user core. Jan 13 20:34:57.236409 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:34:57.464618 sshd[4340]: Connection closed by 10.0.0.1 port 51658 Jan 13 20:34:57.465006 sshd-session[4337]: pam_unix(sshd:session): session closed for user core Jan 13 20:34:57.491966 systemd[1]: sshd@13-10.0.0.43:22-10.0.0.1:51658.service: Deactivated successfully. Jan 13 20:34:57.499993 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:34:57.501356 systemd-logind[1591]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:34:57.505126 systemd-logind[1591]: Removed session 14. Jan 13 20:35:02.475992 systemd[1]: Started sshd@14-10.0.0.43:22-10.0.0.1:51670.service - OpenSSH per-connection server daemon (10.0.0.1:51670). Jan 13 20:35:02.507361 sshd[4353]: Accepted publickey for core from 10.0.0.1 port 51670 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:35:02.509002 sshd-session[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:35:02.513141 systemd-logind[1591]: New session 15 of user core. Jan 13 20:35:02.522161 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:35:02.656201 sshd[4356]: Connection closed by 10.0.0.1 port 51670 Jan 13 20:35:02.656659 sshd-session[4353]: pam_unix(sshd:session): session closed for user core Jan 13 20:35:02.661820 systemd[1]: sshd@14-10.0.0.43:22-10.0.0.1:51670.service: Deactivated successfully. Jan 13 20:35:02.665021 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:35:02.665944 systemd-logind[1591]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:35:02.666913 systemd-logind[1591]: Removed session 15. Jan 13 20:35:07.671077 systemd[1]: Started sshd@15-10.0.0.43:22-10.0.0.1:60020.service - OpenSSH per-connection server daemon (10.0.0.1:60020). Jan 13 20:35:07.706917 sshd[4368]: Accepted publickey for core from 10.0.0.1 port 60020 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:35:07.709037 sshd-session[4368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:35:07.714011 systemd-logind[1591]: New session 16 of user core. Jan 13 20:35:07.736123 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 13 20:35:07.856500 sshd[4371]: Connection closed by 10.0.0.1 port 60020 Jan 13 20:35:07.856869 sshd-session[4368]: pam_unix(sshd:session): session closed for user core Jan 13 20:35:07.864097 systemd[1]: Started sshd@16-10.0.0.43:22-10.0.0.1:60026.service - OpenSSH per-connection server daemon (10.0.0.1:60026). Jan 13 20:35:07.864729 systemd[1]: sshd@15-10.0.0.43:22-10.0.0.1:60020.service: Deactivated successfully. Jan 13 20:35:07.867154 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:35:07.868901 systemd-logind[1591]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:35:07.870044 systemd-logind[1591]: Removed session 16. Jan 13 20:35:07.902401 sshd[4381]: Accepted publickey for core from 10.0.0.1 port 60026 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:35:07.904249 sshd-session[4381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:35:07.908877 systemd-logind[1591]: New session 17 of user core. Jan 13 20:35:07.919188 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 20:35:08.157099 sshd[4386]: Connection closed by 10.0.0.1 port 60026 Jan 13 20:35:08.157531 sshd-session[4381]: pam_unix(sshd:session): session closed for user core Jan 13 20:35:08.164007 systemd[1]: Started sshd@17-10.0.0.43:22-10.0.0.1:60042.service - OpenSSH per-connection server daemon (10.0.0.1:60042). Jan 13 20:35:08.164477 systemd[1]: sshd@16-10.0.0.43:22-10.0.0.1:60026.service: Deactivated successfully. Jan 13 20:35:08.168234 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:35:08.169041 systemd-logind[1591]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:35:08.169865 systemd-logind[1591]: Removed session 17. Jan 13 20:35:08.202628 sshd[4393]: Accepted publickey for core from 10.0.0.1 port 60042 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:35:08.204379 sshd-session[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:35:08.208540 systemd-logind[1591]: New session 18 of user core. Jan 13 20:35:08.216051 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 20:35:10.044899 sshd[4399]: Connection closed by 10.0.0.1 port 60042 Jan 13 20:35:10.045361 sshd-session[4393]: pam_unix(sshd:session): session closed for user core Jan 13 20:35:10.055567 systemd[1]: Started sshd@18-10.0.0.43:22-10.0.0.1:60058.service - OpenSSH per-connection server daemon (10.0.0.1:60058). Jan 13 20:35:10.056379 systemd[1]: sshd@17-10.0.0.43:22-10.0.0.1:60042.service: Deactivated successfully. Jan 13 20:35:10.060143 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:35:10.062260 systemd-logind[1591]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:35:10.063911 systemd-logind[1591]: Removed session 18. Jan 13 20:35:10.094877 sshd[4415]: Accepted publickey for core from 10.0.0.1 port 60058 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:35:10.098264 sshd-session[4415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:35:10.103186 systemd-logind[1591]: New session 19 of user core. Jan 13 20:35:10.110060 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 13 20:35:10.360982 sshd[4421]: Connection closed by 10.0.0.1 port 60058 Jan 13 20:35:10.362946 sshd-session[4415]: pam_unix(sshd:session): session closed for user core Jan 13 20:35:10.373135 systemd[1]: Started sshd@19-10.0.0.43:22-10.0.0.1:60066.service - OpenSSH per-connection server daemon (10.0.0.1:60066). Jan 13 20:35:10.373922 systemd[1]: sshd@18-10.0.0.43:22-10.0.0.1:60058.service: Deactivated successfully. Jan 13 20:35:10.376233 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 20:35:10.379271 systemd-logind[1591]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:35:10.380825 systemd-logind[1591]: Removed session 19. Jan 13 20:35:10.406688 sshd[4430]: Accepted publickey for core from 10.0.0.1 port 60066 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:35:10.408084 sshd-session[4430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:35:10.412135 systemd-logind[1591]: New session 20 of user core. Jan 13 20:35:10.422064 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 20:35:10.537734 sshd[4435]: Connection closed by 10.0.0.1 port 60066 Jan 13 20:35:10.538080 sshd-session[4430]: pam_unix(sshd:session): session closed for user core Jan 13 20:35:10.541823 systemd[1]: sshd@19-10.0.0.43:22-10.0.0.1:60066.service: Deactivated successfully. Jan 13 20:35:10.544238 systemd-logind[1591]: Session 20 logged out. Waiting for processes to exit. Jan 13 20:35:10.544351 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 20:35:10.545556 systemd-logind[1591]: Removed session 20. Jan 13 20:35:15.557336 systemd[1]: Started sshd@20-10.0.0.43:22-10.0.0.1:60276.service - OpenSSH per-connection server daemon (10.0.0.1:60276). Jan 13 20:35:15.616217 sshd[4447]: Accepted publickey for core from 10.0.0.1 port 60276 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:35:15.618123 sshd-session[4447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:35:15.624398 systemd-logind[1591]: New session 21 of user core. Jan 13 20:35:15.636545 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 20:35:15.793552 sshd[4450]: Connection closed by 10.0.0.1 port 60276 Jan 13 20:35:15.793961 sshd-session[4447]: pam_unix(sshd:session): session closed for user core Jan 13 20:35:15.800772 systemd[1]: sshd@20-10.0.0.43:22-10.0.0.1:60276.service: Deactivated successfully. Jan 13 20:35:15.804115 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 20:35:15.807573 systemd-logind[1591]: Session 21 logged out. Waiting for processes to exit. Jan 13 20:35:15.814487 systemd-logind[1591]: Removed session 21. Jan 13 20:35:20.810256 systemd[1]: Started sshd@21-10.0.0.43:22-10.0.0.1:60286.service - OpenSSH per-connection server daemon (10.0.0.1:60286). Jan 13 20:35:20.845414 sshd[4465]: Accepted publickey for core from 10.0.0.1 port 60286 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:35:20.847227 sshd-session[4465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:35:20.851346 systemd-logind[1591]: New session 22 of user core. Jan 13 20:35:20.861082 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 13 20:35:21.049445 sshd[4468]: Connection closed by 10.0.0.1 port 60286 Jan 13 20:35:21.046517 sshd-session[4465]: pam_unix(sshd:session): session closed for user core Jan 13 20:35:21.051925 systemd[1]: sshd@21-10.0.0.43:22-10.0.0.1:60286.service: Deactivated successfully. Jan 13 20:35:21.057383 systemd-logind[1591]: Session 22 logged out. Waiting for processes to exit. Jan 13 20:35:21.058686 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 20:35:21.063230 systemd-logind[1591]: Removed session 22. Jan 13 20:35:26.060124 systemd[1]: Started sshd@22-10.0.0.43:22-10.0.0.1:38292.service - OpenSSH per-connection server daemon (10.0.0.1:38292). Jan 13 20:35:26.093679 sshd[4482]: Accepted publickey for core from 10.0.0.1 port 38292 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:35:26.095244 sshd-session[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:35:26.099273 systemd-logind[1591]: New session 23 of user core. Jan 13 20:35:26.109048 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 20:35:26.222366 sshd[4485]: Connection closed by 10.0.0.1 port 38292 Jan 13 20:35:26.222700 sshd-session[4482]: pam_unix(sshd:session): session closed for user core Jan 13 20:35:26.227466 systemd[1]: sshd@22-10.0.0.43:22-10.0.0.1:38292.service: Deactivated successfully. Jan 13 20:35:26.230095 systemd-logind[1591]: Session 23 logged out. Waiting for processes to exit. Jan 13 20:35:26.230143 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 20:35:26.231267 systemd-logind[1591]: Removed session 23. Jan 13 20:35:28.360402 kubelet[2868]: E0113 20:35:28.360326 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:35:31.245087 systemd[1]: Started sshd@23-10.0.0.43:22-10.0.0.1:38302.service - OpenSSH per-connection server daemon (10.0.0.1:38302). Jan 13 20:35:31.306987 sshd[4497]: Accepted publickey for core from 10.0.0.1 port 38302 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:35:31.309298 sshd-session[4497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:35:31.319684 systemd-logind[1591]: New session 24 of user core. Jan 13 20:35:31.338378 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 20:35:31.510266 sshd[4500]: Connection closed by 10.0.0.1 port 38302 Jan 13 20:35:31.508622 sshd-session[4497]: pam_unix(sshd:session): session closed for user core Jan 13 20:35:31.530303 systemd[1]: Started sshd@24-10.0.0.43:22-10.0.0.1:38316.service - OpenSSH per-connection server daemon (10.0.0.1:38316). Jan 13 20:35:31.531060 systemd[1]: sshd@23-10.0.0.43:22-10.0.0.1:38302.service: Deactivated successfully. Jan 13 20:35:31.534431 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 20:35:31.543491 systemd-logind[1591]: Session 24 logged out. Waiting for processes to exit. Jan 13 20:35:31.545235 systemd-logind[1591]: Removed session 24. Jan 13 20:35:31.596264 sshd[4509]: Accepted publickey for core from 10.0.0.1 port 38316 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:35:31.598374 sshd-session[4509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:35:31.609863 systemd-logind[1591]: New session 25 of user core. Jan 13 20:35:31.616259 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 13 20:35:33.447706 containerd[1611]: time="2025-01-13T20:35:33.447644270Z" level=info msg="StopContainer for \"4c4e3f53f67174c6d6d63f79524ad49496bd4d5fc01fcf0ce6840c5514baf268\" with timeout 30 (s)" Jan 13 20:35:33.467993 containerd[1611]: time="2025-01-13T20:35:33.467758870Z" level=info msg="Stop container \"4c4e3f53f67174c6d6d63f79524ad49496bd4d5fc01fcf0ce6840c5514baf268\" with signal terminated" Jan 13 20:35:33.490665 containerd[1611]: time="2025-01-13T20:35:33.490616456Z" level=info msg="StopContainer for \"4903aad350bec686ba8d1791bdd8985178c3f65bac4d1fb0368ed625c7403767\" with timeout 2 (s)" Jan 13 20:35:33.490873 containerd[1611]: time="2025-01-13T20:35:33.490851031Z" level=info msg="Stop container \"4903aad350bec686ba8d1791bdd8985178c3f65bac4d1fb0368ed625c7403767\" with signal terminated" Jan 13 20:35:33.496692 containerd[1611]: time="2025-01-13T20:35:33.496569906Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:35:33.501330 systemd-networkd[1246]: lxc_health: Link DOWN Jan 13 20:35:33.501340 systemd-networkd[1246]: lxc_health: Lost carrier Jan 13 20:35:33.527971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c4e3f53f67174c6d6d63f79524ad49496bd4d5fc01fcf0ce6840c5514baf268-rootfs.mount: Deactivated successfully. Jan 13 20:35:33.542831 containerd[1611]: time="2025-01-13T20:35:33.542726094Z" level=info msg="shim disconnected" id=4c4e3f53f67174c6d6d63f79524ad49496bd4d5fc01fcf0ce6840c5514baf268 namespace=k8s.io Jan 13 20:35:33.542831 containerd[1611]: time="2025-01-13T20:35:33.542812869Z" level=warning msg="cleaning up after shim disconnected" id=4c4e3f53f67174c6d6d63f79524ad49496bd4d5fc01fcf0ce6840c5514baf268 namespace=k8s.io Jan 13 20:35:33.543059 containerd[1611]: time="2025-01-13T20:35:33.542850059Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:35:33.560620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4903aad350bec686ba8d1791bdd8985178c3f65bac4d1fb0368ed625c7403767-rootfs.mount: Deactivated successfully. 
Jan 13 20:35:33.574761 containerd[1611]: time="2025-01-13T20:35:33.574703830Z" level=info msg="StopContainer for \"4c4e3f53f67174c6d6d63f79524ad49496bd4d5fc01fcf0ce6840c5514baf268\" returns successfully" Jan 13 20:35:33.576304 containerd[1611]: time="2025-01-13T20:35:33.576215134Z" level=info msg="shim disconnected" id=4903aad350bec686ba8d1791bdd8985178c3f65bac4d1fb0368ed625c7403767 namespace=k8s.io Jan 13 20:35:33.576304 containerd[1611]: time="2025-01-13T20:35:33.576271711Z" level=warning msg="cleaning up after shim disconnected" id=4903aad350bec686ba8d1791bdd8985178c3f65bac4d1fb0368ed625c7403767 namespace=k8s.io Jan 13 20:35:33.576304 containerd[1611]: time="2025-01-13T20:35:33.576284615Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:35:33.580030 containerd[1611]: time="2025-01-13T20:35:33.579951212Z" level=info msg="StopPodSandbox for \"b3b7bd4c78a600edc0da5aafad75e4b08efe9b32c3495fdbf43119964d914bb1\"" Jan 13 20:35:33.584382 containerd[1611]: time="2025-01-13T20:35:33.580029470Z" level=info msg="Container to stop \"4c4e3f53f67174c6d6d63f79524ad49496bd4d5fc01fcf0ce6840c5514baf268\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:35:33.588990 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b3b7bd4c78a600edc0da5aafad75e4b08efe9b32c3495fdbf43119964d914bb1-shm.mount: Deactivated successfully. Jan 13 20:35:33.602911 containerd[1611]: time="2025-01-13T20:35:33.602862130Z" level=info msg="StopContainer for \"4903aad350bec686ba8d1791bdd8985178c3f65bac4d1fb0368ed625c7403767\" returns successfully" Jan 13 20:35:33.602911 containerd[1611]: time="2025-01-13T20:35:33.603359773Z" level=info msg="StopPodSandbox for \"69c509f8204e9478312c00bd99d6a7542579f43be72e77aaff499416f1e8ff39\"" Jan 13 20:35:33.602911 containerd[1611]: time="2025-01-13T20:35:33.603387395Z" level=info msg="Container to stop \"d81e918478b0c58c523a98ab003c434a2fcc895f8f30e034118678af1d4a76b3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:35:33.603589 containerd[1611]: time="2025-01-13T20:35:33.603421981Z" level=info msg="Container to stop \"90212acccf20e24af88e27881baa4322b566394dcf5d1987e46ae8aff6a46317\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:35:33.603589 containerd[1611]: time="2025-01-13T20:35:33.603430587Z" level=info msg="Container to stop \"8c3973eea75c10d25bdd19f504ca5c03b5dc67d6516bcb4769dcfc6660888c8e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:35:33.603589 containerd[1611]: time="2025-01-13T20:35:33.603439413Z" level=info msg="Container to stop \"9e8acaf496266def8b1b9441c8b6da611c705aa23ed06c3b1f769d20aa1380a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:35:33.603589 containerd[1611]: time="2025-01-13T20:35:33.603447208Z" level=info msg="Container to stop \"4903aad350bec686ba8d1791bdd8985178c3f65bac4d1fb0368ed625c7403767\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:35:33.605767 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-69c509f8204e9478312c00bd99d6a7542579f43be72e77aaff499416f1e8ff39-shm.mount: Deactivated successfully. 
Jan 13 20:35:33.625081 containerd[1611]: time="2025-01-13T20:35:33.625003719Z" level=info msg="shim disconnected" id=b3b7bd4c78a600edc0da5aafad75e4b08efe9b32c3495fdbf43119964d914bb1 namespace=k8s.io Jan 13 20:35:33.625081 containerd[1611]: time="2025-01-13T20:35:33.625079003Z" level=warning msg="cleaning up after shim disconnected" id=b3b7bd4c78a600edc0da5aafad75e4b08efe9b32c3495fdbf43119964d914bb1 namespace=k8s.io Jan 13 20:35:33.625081 containerd[1611]: time="2025-01-13T20:35:33.625091366Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:35:33.641264 containerd[1611]: time="2025-01-13T20:35:33.641192361Z" level=info msg="shim disconnected" id=69c509f8204e9478312c00bd99d6a7542579f43be72e77aaff499416f1e8ff39 namespace=k8s.io Jan 13 20:35:33.641264 containerd[1611]: time="2025-01-13T20:35:33.641262995Z" level=warning msg="cleaning up after shim disconnected" id=69c509f8204e9478312c00bd99d6a7542579f43be72e77aaff499416f1e8ff39 namespace=k8s.io Jan 13 20:35:33.641481 containerd[1611]: time="2025-01-13T20:35:33.641273384Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:35:33.642827 containerd[1611]: time="2025-01-13T20:35:33.642776934Z" level=info msg="TearDown network for sandbox \"b3b7bd4c78a600edc0da5aafad75e4b08efe9b32c3495fdbf43119964d914bb1\" successfully" Jan 13 20:35:33.642827 containerd[1611]: time="2025-01-13T20:35:33.642826296Z" level=info msg="StopPodSandbox for \"b3b7bd4c78a600edc0da5aafad75e4b08efe9b32c3495fdbf43119964d914bb1\" returns successfully" Jan 13 20:35:33.659783 containerd[1611]: time="2025-01-13T20:35:33.659733530Z" level=info msg="TearDown network for sandbox \"69c509f8204e9478312c00bd99d6a7542579f43be72e77aaff499416f1e8ff39\" successfully" Jan 13 20:35:33.659783 containerd[1611]: time="2025-01-13T20:35:33.659771181Z" level=info msg="StopPodSandbox for \"69c509f8204e9478312c00bd99d6a7542579f43be72e77aaff499416f1e8ff39\" returns successfully" Jan 13 20:35:33.833117 kubelet[2868]: I0113 20:35:33.833029 2868 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-host-proc-sys-net\") pod \"94863509-97b6-4f9e-97b2-7df3a8beef84\" (UID: \"94863509-97b6-4f9e-97b2-7df3a8beef84\") " Jan 13 20:35:33.833117 kubelet[2868]: I0113 20:35:33.833072 2868 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-cilium-cgroup\") pod \"94863509-97b6-4f9e-97b2-7df3a8beef84\" (UID: \"94863509-97b6-4f9e-97b2-7df3a8beef84\") " Jan 13 20:35:33.833117 kubelet[2868]: I0113 20:35:33.833099 2868 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/234c5c45-b959-4288-9964-20f45bb4aa01-cilium-config-path\") pod \"234c5c45-b959-4288-9964-20f45bb4aa01\" (UID: \"234c5c45-b959-4288-9964-20f45bb4aa01\") " Jan 13 20:35:33.833117 kubelet[2868]: I0113 20:35:33.833121 2868 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-bpf-maps\") pod \"94863509-97b6-4f9e-97b2-7df3a8beef84\" (UID: \"94863509-97b6-4f9e-97b2-7df3a8beef84\") " Jan 13 20:35:33.833117 kubelet[2868]: I0113 20:35:33.833139 2868 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-cilium-run\") pod \"94863509-97b6-4f9e-97b2-7df3a8beef84\" (UID: \"94863509-97b6-4f9e-97b2-7df3a8beef84\") " Jan 13 20:35:33.833757 kubelet[2868]: I0113 20:35:33.833155 2868 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-xtables-lock\") pod \"94863509-97b6-4f9e-97b2-7df3a8beef84\" (UID: \"94863509-97b6-4f9e-97b2-7df3a8beef84\") " Jan 13 20:35:33.833757 kubelet[2868]: I0113 20:35:33.833173 2868 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-cni-path\") pod \"94863509-97b6-4f9e-97b2-7df3a8beef84\" (UID: \"94863509-97b6-4f9e-97b2-7df3a8beef84\") " Jan 13 20:35:33.833757 kubelet[2868]: I0113 20:35:33.833192 2868 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/94863509-97b6-4f9e-97b2-7df3a8beef84-hubble-tls\") pod \"94863509-97b6-4f9e-97b2-7df3a8beef84\" (UID: \"94863509-97b6-4f9e-97b2-7df3a8beef84\") " Jan 13 20:35:33.833757 kubelet[2868]: I0113 20:35:33.833186 2868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "94863509-97b6-4f9e-97b2-7df3a8beef84" (UID: "94863509-97b6-4f9e-97b2-7df3a8beef84"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:35:33.833757 kubelet[2868]: I0113 20:35:33.833228 2868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "94863509-97b6-4f9e-97b2-7df3a8beef84" (UID: "94863509-97b6-4f9e-97b2-7df3a8beef84"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:35:33.833757 kubelet[2868]: I0113 20:35:33.833211 2868 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/94863509-97b6-4f9e-97b2-7df3a8beef84-clustermesh-secrets\") pod \"94863509-97b6-4f9e-97b2-7df3a8beef84\" (UID: \"94863509-97b6-4f9e-97b2-7df3a8beef84\") " Jan 13 20:35:33.836657 kubelet[2868]: I0113 20:35:33.833304 2868 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlgrg\" (UniqueName: \"kubernetes.io/projected/234c5c45-b959-4288-9964-20f45bb4aa01-kube-api-access-jlgrg\") pod \"234c5c45-b959-4288-9964-20f45bb4aa01\" (UID: \"234c5c45-b959-4288-9964-20f45bb4aa01\") " Jan 13 20:35:33.836657 kubelet[2868]: I0113 20:35:33.833342 2868 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-etc-cni-netd\") pod \"94863509-97b6-4f9e-97b2-7df3a8beef84\" (UID: \"94863509-97b6-4f9e-97b2-7df3a8beef84\") " Jan 13 20:35:33.836657 kubelet[2868]: I0113 20:35:33.833363 2868 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-lib-modules\") pod \"94863509-97b6-4f9e-97b2-7df3a8beef84\" (UID: \"94863509-97b6-4f9e-97b2-7df3a8beef84\") " Jan 13 20:35:33.836657 kubelet[2868]: I0113 20:35:33.833385 2868 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-host-proc-sys-kernel\") pod \"94863509-97b6-4f9e-97b2-7df3a8beef84\" (UID: \"94863509-97b6-4f9e-97b2-7df3a8beef84\") " Jan 13 20:35:33.836657 kubelet[2868]: I0113 20:35:33.833408 2868 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwxrs\" (UniqueName: \"kubernetes.io/projected/94863509-97b6-4f9e-97b2-7df3a8beef84-kube-api-access-fwxrs\") pod \"94863509-97b6-4f9e-97b2-7df3a8beef84\" (UID: \"94863509-97b6-4f9e-97b2-7df3a8beef84\") " Jan 13 20:35:33.836657 kubelet[2868]: I0113 20:35:33.833427 2868 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-hostproc\") pod \"94863509-97b6-4f9e-97b2-7df3a8beef84\" (UID: \"94863509-97b6-4f9e-97b2-7df3a8beef84\") " Jan 13 20:35:33.836791 kubelet[2868]: I0113 20:35:33.833446 2868 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94863509-97b6-4f9e-97b2-7df3a8beef84-cilium-config-path\") pod \"94863509-97b6-4f9e-97b2-7df3a8beef84\" (UID: \"94863509-97b6-4f9e-97b2-7df3a8beef84\") " Jan 13 20:35:33.836791 kubelet[2868]: I0113 20:35:33.833494 2868 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 13 20:35:33.836791 kubelet[2868]: I0113 20:35:33.833506 2868 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 13 20:35:33.836791 kubelet[2868]: I0113 20:35:33.833908 2868 operation_generator.go:887] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "94863509-97b6-4f9e-97b2-7df3a8beef84" (UID: "94863509-97b6-4f9e-97b2-7df3a8beef84"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:35:33.836791 kubelet[2868]: I0113 20:35:33.833186 2868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "94863509-97b6-4f9e-97b2-7df3a8beef84" (UID: "94863509-97b6-4f9e-97b2-7df3a8beef84"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:35:33.836791 kubelet[2868]: I0113 20:35:33.834053 2868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "94863509-97b6-4f9e-97b2-7df3a8beef84" (UID: "94863509-97b6-4f9e-97b2-7df3a8beef84"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:35:33.836955 kubelet[2868]: I0113 20:35:33.834085 2868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-cni-path" (OuterVolumeSpecName: "cni-path") pod "94863509-97b6-4f9e-97b2-7df3a8beef84" (UID: "94863509-97b6-4f9e-97b2-7df3a8beef84"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:35:33.836955 kubelet[2868]: I0113 20:35:33.836861 2868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94863509-97b6-4f9e-97b2-7df3a8beef84-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "94863509-97b6-4f9e-97b2-7df3a8beef84" (UID: "94863509-97b6-4f9e-97b2-7df3a8beef84"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:35:33.837004 kubelet[2868]: I0113 20:35:33.836957 2868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "94863509-97b6-4f9e-97b2-7df3a8beef84" (UID: "94863509-97b6-4f9e-97b2-7df3a8beef84"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:35:33.837078 kubelet[2868]: I0113 20:35:33.837032 2868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/234c5c45-b959-4288-9964-20f45bb4aa01-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "234c5c45-b959-4288-9964-20f45bb4aa01" (UID: "234c5c45-b959-4288-9964-20f45bb4aa01"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:35:33.837108 kubelet[2868]: I0113 20:35:33.837077 2868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "94863509-97b6-4f9e-97b2-7df3a8beef84" (UID: "94863509-97b6-4f9e-97b2-7df3a8beef84"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:35:33.837108 kubelet[2868]: I0113 20:35:33.837101 2868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "94863509-97b6-4f9e-97b2-7df3a8beef84" (UID: "94863509-97b6-4f9e-97b2-7df3a8beef84"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:35:33.837164 kubelet[2868]: I0113 20:35:33.837121 2868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-hostproc" (OuterVolumeSpecName: "hostproc") pod "94863509-97b6-4f9e-97b2-7df3a8beef84" (UID: "94863509-97b6-4f9e-97b2-7df3a8beef84"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:35:33.837853 kubelet[2868]: I0113 20:35:33.837827 2868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/234c5c45-b959-4288-9964-20f45bb4aa01-kube-api-access-jlgrg" (OuterVolumeSpecName: "kube-api-access-jlgrg") pod "234c5c45-b959-4288-9964-20f45bb4aa01" (UID: "234c5c45-b959-4288-9964-20f45bb4aa01"). InnerVolumeSpecName "kube-api-access-jlgrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:35:33.838845 kubelet[2868]: I0113 20:35:33.838809 2868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94863509-97b6-4f9e-97b2-7df3a8beef84-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "94863509-97b6-4f9e-97b2-7df3a8beef84" (UID: "94863509-97b6-4f9e-97b2-7df3a8beef84"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 20:35:33.838897 kubelet[2868]: I0113 20:35:33.838877 2868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94863509-97b6-4f9e-97b2-7df3a8beef84-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "94863509-97b6-4f9e-97b2-7df3a8beef84" (UID: "94863509-97b6-4f9e-97b2-7df3a8beef84"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:35:33.839895 kubelet[2868]: I0113 20:35:33.839872 2868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94863509-97b6-4f9e-97b2-7df3a8beef84-kube-api-access-fwxrs" (OuterVolumeSpecName: "kube-api-access-fwxrs") pod "94863509-97b6-4f9e-97b2-7df3a8beef84" (UID: "94863509-97b6-4f9e-97b2-7df3a8beef84"). InnerVolumeSpecName "kube-api-access-fwxrs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:35:33.934099 kubelet[2868]: I0113 20:35:33.934063 2868 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 13 20:35:33.934099 kubelet[2868]: I0113 20:35:33.934093 2868 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 13 20:35:33.934099 kubelet[2868]: I0113 20:35:33.934105 2868 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fwxrs\" (UniqueName: \"kubernetes.io/projected/94863509-97b6-4f9e-97b2-7df3a8beef84-kube-api-access-fwxrs\") on node \"localhost\" DevicePath \"\"" Jan 13 20:35:33.934099 kubelet[2868]: I0113 20:35:33.934118 2868 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 13 20:35:33.934099 kubelet[2868]: I0113 20:35:33.934129 2868 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 13 20:35:33.934370 kubelet[2868]: I0113 20:35:33.934139 2868 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94863509-97b6-4f9e-97b2-7df3a8beef84-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 13 20:35:33.934370 kubelet[2868]: I0113 20:35:33.934150 2868 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 13 20:35:33.934370 kubelet[2868]: I0113 20:35:33.934162 2868 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 13 20:35:33.934370 kubelet[2868]: I0113 20:35:33.934172 2868 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/234c5c45-b959-4288-9964-20f45bb4aa01-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 13 20:35:33.934370 kubelet[2868]: I0113 20:35:33.934182 2868 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 13 20:35:33.934370 kubelet[2868]: I0113 20:35:33.934192 2868 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/94863509-97b6-4f9e-97b2-7df3a8beef84-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 13 20:35:33.934370 kubelet[2868]: I0113 20:35:33.934202 2868 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/94863509-97b6-4f9e-97b2-7df3a8beef84-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 13 20:35:33.934370 kubelet[2868]: I0113 20:35:33.934213 2868 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/94863509-97b6-4f9e-97b2-7df3a8beef84-clustermesh-secrets\") on node \"localhost\" 
DevicePath \"\"" Jan 13 20:35:33.934556 kubelet[2868]: I0113 20:35:33.934225 2868 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jlgrg\" (UniqueName: \"kubernetes.io/projected/234c5c45-b959-4288-9964-20f45bb4aa01-kube-api-access-jlgrg\") on node \"localhost\" DevicePath \"\"" Jan 13 20:35:34.441857 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3b7bd4c78a600edc0da5aafad75e4b08efe9b32c3495fdbf43119964d914bb1-rootfs.mount: Deactivated successfully. Jan 13 20:35:34.442072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69c509f8204e9478312c00bd99d6a7542579f43be72e77aaff499416f1e8ff39-rootfs.mount: Deactivated successfully. Jan 13 20:35:34.442224 systemd[1]: var-lib-kubelet-pods-234c5c45\x2db959\x2d4288\x2d9964\x2d20f45bb4aa01-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djlgrg.mount: Deactivated successfully. Jan 13 20:35:34.442383 systemd[1]: var-lib-kubelet-pods-94863509\x2d97b6\x2d4f9e\x2d97b2\x2d7df3a8beef84-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfwxrs.mount: Deactivated successfully. Jan 13 20:35:34.442528 systemd[1]: var-lib-kubelet-pods-94863509\x2d97b6\x2d4f9e\x2d97b2\x2d7df3a8beef84-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 20:35:34.442671 systemd[1]: var-lib-kubelet-pods-94863509\x2d97b6\x2d4f9e\x2d97b2\x2d7df3a8beef84-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 20:35:34.588194 kubelet[2868]: I0113 20:35:34.588127 2868 scope.go:117] "RemoveContainer" containerID="4c4e3f53f67174c6d6d63f79524ad49496bd4d5fc01fcf0ce6840c5514baf268" Jan 13 20:35:34.595715 containerd[1611]: time="2025-01-13T20:35:34.595648930Z" level=info msg="RemoveContainer for \"4c4e3f53f67174c6d6d63f79524ad49496bd4d5fc01fcf0ce6840c5514baf268\"" Jan 13 20:35:34.607832 containerd[1611]: time="2025-01-13T20:35:34.605493588Z" level=info msg="RemoveContainer for \"4c4e3f53f67174c6d6d63f79524ad49496bd4d5fc01fcf0ce6840c5514baf268\" returns successfully" Jan 13 20:35:34.607992 kubelet[2868]: I0113 20:35:34.606985 2868 scope.go:117] "RemoveContainer" containerID="4903aad350bec686ba8d1791bdd8985178c3f65bac4d1fb0368ed625c7403767" Jan 13 20:35:34.609609 containerd[1611]: time="2025-01-13T20:35:34.609553688Z" level=info msg="RemoveContainer for \"4903aad350bec686ba8d1791bdd8985178c3f65bac4d1fb0368ed625c7403767\"" Jan 13 20:35:34.613845 containerd[1611]: time="2025-01-13T20:35:34.613779693Z" level=info msg="RemoveContainer for \"4903aad350bec686ba8d1791bdd8985178c3f65bac4d1fb0368ed625c7403767\" returns successfully" Jan 13 20:35:34.614066 kubelet[2868]: I0113 20:35:34.614037 2868 scope.go:117] "RemoveContainer" containerID="90212acccf20e24af88e27881baa4322b566394dcf5d1987e46ae8aff6a46317" Jan 13 20:35:34.615527 containerd[1611]: time="2025-01-13T20:35:34.615230952Z" level=info msg="RemoveContainer for \"90212acccf20e24af88e27881baa4322b566394dcf5d1987e46ae8aff6a46317\"" Jan 13 20:35:34.618627 containerd[1611]: time="2025-01-13T20:35:34.618592590Z" level=info msg="RemoveContainer for \"90212acccf20e24af88e27881baa4322b566394dcf5d1987e46ae8aff6a46317\" returns successfully" Jan 13 20:35:34.618765 kubelet[2868]: I0113 20:35:34.618747 2868 scope.go:117] "RemoveContainer" containerID="9e8acaf496266def8b1b9441c8b6da611c705aa23ed06c3b1f769d20aa1380a8" Jan 13 20:35:34.619588 containerd[1611]: time="2025-01-13T20:35:34.619555193Z" level=info msg="RemoveContainer for \"9e8acaf496266def8b1b9441c8b6da611c705aa23ed06c3b1f769d20aa1380a8\"" Jan 13 
20:35:34.623080 containerd[1611]: time="2025-01-13T20:35:34.623051616Z" level=info msg="RemoveContainer for \"9e8acaf496266def8b1b9441c8b6da611c705aa23ed06c3b1f769d20aa1380a8\" returns successfully" Jan 13 20:35:34.623227 kubelet[2868]: I0113 20:35:34.623195 2868 scope.go:117] "RemoveContainer" containerID="d81e918478b0c58c523a98ab003c434a2fcc895f8f30e034118678af1d4a76b3" Jan 13 20:35:34.624149 containerd[1611]: time="2025-01-13T20:35:34.624116032Z" level=info msg="RemoveContainer for \"d81e918478b0c58c523a98ab003c434a2fcc895f8f30e034118678af1d4a76b3\"" Jan 13 20:35:34.627457 containerd[1611]: time="2025-01-13T20:35:34.627426864Z" level=info msg="RemoveContainer for \"d81e918478b0c58c523a98ab003c434a2fcc895f8f30e034118678af1d4a76b3\" returns successfully" Jan 13 20:35:34.627653 kubelet[2868]: I0113 20:35:34.627623 2868 scope.go:117] "RemoveContainer" containerID="8c3973eea75c10d25bdd19f504ca5c03b5dc67d6516bcb4769dcfc6660888c8e" Jan 13 20:35:34.628497 containerd[1611]: time="2025-01-13T20:35:34.628474207Z" level=info msg="RemoveContainer for \"8c3973eea75c10d25bdd19f504ca5c03b5dc67d6516bcb4769dcfc6660888c8e\"" Jan 13 20:35:34.631443 containerd[1611]: time="2025-01-13T20:35:34.631410499Z" level=info msg="RemoveContainer for \"8c3973eea75c10d25bdd19f504ca5c03b5dc67d6516bcb4769dcfc6660888c8e\" returns successfully" Jan 13 20:35:34.631593 kubelet[2868]: I0113 20:35:34.631561 2868 scope.go:117] "RemoveContainer" containerID="4903aad350bec686ba8d1791bdd8985178c3f65bac4d1fb0368ed625c7403767" Jan 13 20:35:34.631753 containerd[1611]: time="2025-01-13T20:35:34.631721759Z" level=error msg="ContainerStatus for \"4903aad350bec686ba8d1791bdd8985178c3f65bac4d1fb0368ed625c7403767\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4903aad350bec686ba8d1791bdd8985178c3f65bac4d1fb0368ed625c7403767\": not found" Jan 13 20:35:34.638847 kubelet[2868]: E0113 20:35:34.638820 2868 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4903aad350bec686ba8d1791bdd8985178c3f65bac4d1fb0368ed625c7403767\": not found" containerID="4903aad350bec686ba8d1791bdd8985178c3f65bac4d1fb0368ed625c7403767" Jan 13 20:35:34.638927 kubelet[2868]: I0113 20:35:34.638913 2868 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4903aad350bec686ba8d1791bdd8985178c3f65bac4d1fb0368ed625c7403767"} err="failed to get container status \"4903aad350bec686ba8d1791bdd8985178c3f65bac4d1fb0368ed625c7403767\": rpc error: code = NotFound desc = an error occurred when try to find container \"4903aad350bec686ba8d1791bdd8985178c3f65bac4d1fb0368ed625c7403767\": not found" Jan 13 20:35:34.638960 kubelet[2868]: I0113 20:35:34.638929 2868 scope.go:117] "RemoveContainer" containerID="90212acccf20e24af88e27881baa4322b566394dcf5d1987e46ae8aff6a46317" Jan 13 20:35:34.639214 containerd[1611]: time="2025-01-13T20:35:34.639170197Z" level=error msg="ContainerStatus for \"90212acccf20e24af88e27881baa4322b566394dcf5d1987e46ae8aff6a46317\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"90212acccf20e24af88e27881baa4322b566394dcf5d1987e46ae8aff6a46317\": not found" Jan 13 20:35:34.639359 kubelet[2868]: E0113 20:35:34.639335 2868 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"90212acccf20e24af88e27881baa4322b566394dcf5d1987e46ae8aff6a46317\": not found" containerID="90212acccf20e24af88e27881baa4322b566394dcf5d1987e46ae8aff6a46317" Jan 13 20:35:34.639408 kubelet[2868]: I0113 20:35:34.639361 2868 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"90212acccf20e24af88e27881baa4322b566394dcf5d1987e46ae8aff6a46317"} err="failed to get container status \"90212acccf20e24af88e27881baa4322b566394dcf5d1987e46ae8aff6a46317\": rpc error: code = NotFound desc = an error occurred when try to find container \"90212acccf20e24af88e27881baa4322b566394dcf5d1987e46ae8aff6a46317\": not found" Jan 13 20:35:34.639408 kubelet[2868]: I0113 20:35:34.639369 2868 scope.go:117] "RemoveContainer" containerID="9e8acaf496266def8b1b9441c8b6da611c705aa23ed06c3b1f769d20aa1380a8" Jan 13 20:35:34.639521 containerd[1611]: time="2025-01-13T20:35:34.639492357Z" level=error msg="ContainerStatus for \"9e8acaf496266def8b1b9441c8b6da611c705aa23ed06c3b1f769d20aa1380a8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9e8acaf496266def8b1b9441c8b6da611c705aa23ed06c3b1f769d20aa1380a8\": not found" Jan 13 20:35:34.639657 kubelet[2868]: E0113 20:35:34.639636 2868 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9e8acaf496266def8b1b9441c8b6da611c705aa23ed06c3b1f769d20aa1380a8\": not found" containerID="9e8acaf496266def8b1b9441c8b6da611c705aa23ed06c3b1f769d20aa1380a8" Jan 13 20:35:34.639692 kubelet[2868]: I0113 20:35:34.639669 2868 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9e8acaf496266def8b1b9441c8b6da611c705aa23ed06c3b1f769d20aa1380a8"} err="failed to get container status \"9e8acaf496266def8b1b9441c8b6da611c705aa23ed06c3b1f769d20aa1380a8\": rpc error: code = NotFound desc = an error occurred when try to find container \"9e8acaf496266def8b1b9441c8b6da611c705aa23ed06c3b1f769d20aa1380a8\": not found" Jan 13 20:35:34.639692 kubelet[2868]: I0113 20:35:34.639685 2868 scope.go:117] "RemoveContainer" containerID="d81e918478b0c58c523a98ab003c434a2fcc895f8f30e034118678af1d4a76b3" Jan 13 20:35:34.639855 containerd[1611]: time="2025-01-13T20:35:34.639827252Z" level=error msg="ContainerStatus for \"d81e918478b0c58c523a98ab003c434a2fcc895f8f30e034118678af1d4a76b3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d81e918478b0c58c523a98ab003c434a2fcc895f8f30e034118678af1d4a76b3\": not found" Jan 13 20:35:34.639927 kubelet[2868]: E0113 20:35:34.639916 2868 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d81e918478b0c58c523a98ab003c434a2fcc895f8f30e034118678af1d4a76b3\": not found" containerID="d81e918478b0c58c523a98ab003c434a2fcc895f8f30e034118678af1d4a76b3" Jan 13 20:35:34.639963 kubelet[2868]: I0113 20:35:34.639939 2868 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d81e918478b0c58c523a98ab003c434a2fcc895f8f30e034118678af1d4a76b3"} err="failed to get container status \"d81e918478b0c58c523a98ab003c434a2fcc895f8f30e034118678af1d4a76b3\": rpc error: code = NotFound desc = an error occurred when try to find container \"d81e918478b0c58c523a98ab003c434a2fcc895f8f30e034118678af1d4a76b3\": not found" Jan 13 20:35:34.639963 kubelet[2868]: I0113 20:35:34.639950 2868 
scope.go:117] "RemoveContainer" containerID="8c3973eea75c10d25bdd19f504ca5c03b5dc67d6516bcb4769dcfc6660888c8e" Jan 13 20:35:34.640100 containerd[1611]: time="2025-01-13T20:35:34.640073198Z" level=error msg="ContainerStatus for \"8c3973eea75c10d25bdd19f504ca5c03b5dc67d6516bcb4769dcfc6660888c8e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8c3973eea75c10d25bdd19f504ca5c03b5dc67d6516bcb4769dcfc6660888c8e\": not found" Jan 13 20:35:34.640199 kubelet[2868]: E0113 20:35:34.640178 2868 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8c3973eea75c10d25bdd19f504ca5c03b5dc67d6516bcb4769dcfc6660888c8e\": not found" containerID="8c3973eea75c10d25bdd19f504ca5c03b5dc67d6516bcb4769dcfc6660888c8e" Jan 13 20:35:34.640233 kubelet[2868]: I0113 20:35:34.640208 2868 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8c3973eea75c10d25bdd19f504ca5c03b5dc67d6516bcb4769dcfc6660888c8e"} err="failed to get container status \"8c3973eea75c10d25bdd19f504ca5c03b5dc67d6516bcb4769dcfc6660888c8e\": rpc error: code = NotFound desc = an error occurred when try to find container \"8c3973eea75c10d25bdd19f504ca5c03b5dc67d6516bcb4769dcfc6660888c8e\": not found" Jan 13 20:35:35.268628 sshd[4515]: Connection closed by 10.0.0.1 port 38316 Jan 13 20:35:35.269085 sshd-session[4509]: pam_unix(sshd:session): session closed for user core Jan 13 20:35:35.279076 systemd[1]: Started sshd@25-10.0.0.43:22-10.0.0.1:52348.service - OpenSSH per-connection server daemon (10.0.0.1:52348). Jan 13 20:35:35.279570 systemd[1]: sshd@24-10.0.0.43:22-10.0.0.1:38316.service: Deactivated successfully. Jan 13 20:35:35.282671 systemd-logind[1591]: Session 25 logged out. Waiting for processes to exit. Jan 13 20:35:35.282687 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 20:35:35.283991 systemd-logind[1591]: Removed session 25. Jan 13 20:35:35.317192 sshd[4676]: Accepted publickey for core from 10.0.0.1 port 52348 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:35:35.318692 sshd-session[4676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:35:35.322589 systemd-logind[1591]: New session 26 of user core. Jan 13 20:35:35.329061 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 13 20:35:35.710190 sshd[4682]: Connection closed by 10.0.0.1 port 52348 Jan 13 20:35:35.710022 sshd-session[4676]: pam_unix(sshd:session): session closed for user core Jan 13 20:35:35.720200 systemd[1]: Started sshd@26-10.0.0.43:22-10.0.0.1:52364.service - OpenSSH per-connection server daemon (10.0.0.1:52364). 
Jan 13 20:35:35.722054 kubelet[2868]: I0113 20:35:35.720687 2868 topology_manager.go:215] "Topology Admit Handler" podUID="1afd7b16-b339-47f7-b857-f8ae1bb23855" podNamespace="kube-system" podName="cilium-tnt96" Jan 13 20:35:35.722054 kubelet[2868]: E0113 20:35:35.720747 2868 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="234c5c45-b959-4288-9964-20f45bb4aa01" containerName="cilium-operator" Jan 13 20:35:35.722054 kubelet[2868]: E0113 20:35:35.720758 2868 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94863509-97b6-4f9e-97b2-7df3a8beef84" containerName="clean-cilium-state" Jan 13 20:35:35.722054 kubelet[2868]: E0113 20:35:35.720766 2868 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94863509-97b6-4f9e-97b2-7df3a8beef84" containerName="apply-sysctl-overwrites" Jan 13 20:35:35.722054 kubelet[2868]: E0113 20:35:35.720774 2868 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94863509-97b6-4f9e-97b2-7df3a8beef84" containerName="mount-cgroup" Jan 13 20:35:35.722054 kubelet[2868]: E0113 20:35:35.720781 2868 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94863509-97b6-4f9e-97b2-7df3a8beef84" containerName="mount-bpf-fs" Jan 13 20:35:35.722054 kubelet[2868]: E0113 20:35:35.720788 2868 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94863509-97b6-4f9e-97b2-7df3a8beef84" containerName="cilium-agent" Jan 13 20:35:35.722054 kubelet[2868]: I0113 20:35:35.720822 2868 memory_manager.go:354] "RemoveStaleState removing state" podUID="234c5c45-b959-4288-9964-20f45bb4aa01" containerName="cilium-operator" Jan 13 20:35:35.722054 kubelet[2868]: I0113 20:35:35.720830 2868 memory_manager.go:354] "RemoveStaleState removing state" podUID="94863509-97b6-4f9e-97b2-7df3a8beef84" containerName="cilium-agent" Jan 13 20:35:35.720713 systemd[1]: sshd@25-10.0.0.43:22-10.0.0.1:52348.service: Deactivated successfully. Jan 13 20:35:35.736209 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 20:35:35.740344 systemd-logind[1591]: Session 26 logged out. Waiting for processes to exit. Jan 13 20:35:35.750235 systemd-logind[1591]: Removed session 26. Jan 13 20:35:35.775484 sshd[4690]: Accepted publickey for core from 10.0.0.1 port 52364 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:35:35.776991 sshd-session[4690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:35:35.781011 systemd-logind[1591]: New session 27 of user core. Jan 13 20:35:35.794071 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 13 20:35:35.843456 sshd[4696]: Connection closed by 10.0.0.1 port 52364 Jan 13 20:35:35.843820 sshd-session[4690]: pam_unix(sshd:session): session closed for user core Jan 13 20:35:35.845260 kubelet[2868]: I0113 20:35:35.844672 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1afd7b16-b339-47f7-b857-f8ae1bb23855-etc-cni-netd\") pod \"cilium-tnt96\" (UID: \"1afd7b16-b339-47f7-b857-f8ae1bb23855\") " pod="kube-system/cilium-tnt96" Jan 13 20:35:35.845260 kubelet[2868]: I0113 20:35:35.844718 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1afd7b16-b339-47f7-b857-f8ae1bb23855-bpf-maps\") pod \"cilium-tnt96\" (UID: \"1afd7b16-b339-47f7-b857-f8ae1bb23855\") " pod="kube-system/cilium-tnt96" Jan 13 20:35:35.845260 kubelet[2868]: I0113 20:35:35.844746 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1afd7b16-b339-47f7-b857-f8ae1bb23855-xtables-lock\") pod \"cilium-tnt96\" (UID: \"1afd7b16-b339-47f7-b857-f8ae1bb23855\") " pod="kube-system/cilium-tnt96" Jan 13 20:35:35.845260 kubelet[2868]: I0113 20:35:35.844770 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1afd7b16-b339-47f7-b857-f8ae1bb23855-host-proc-sys-kernel\") pod \"cilium-tnt96\" (UID: \"1afd7b16-b339-47f7-b857-f8ae1bb23855\") " pod="kube-system/cilium-tnt96" Jan 13 20:35:35.845260 kubelet[2868]: I0113 20:35:35.844812 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1afd7b16-b339-47f7-b857-f8ae1bb23855-cilium-run\") pod \"cilium-tnt96\" (UID: \"1afd7b16-b339-47f7-b857-f8ae1bb23855\") " pod="kube-system/cilium-tnt96" Jan 13 20:35:35.845260 kubelet[2868]: I0113 20:35:35.844838 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1afd7b16-b339-47f7-b857-f8ae1bb23855-lib-modules\") pod \"cilium-tnt96\" (UID: \"1afd7b16-b339-47f7-b857-f8ae1bb23855\") " pod="kube-system/cilium-tnt96" Jan 13 20:35:35.845482 kubelet[2868]: I0113 20:35:35.844863 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1afd7b16-b339-47f7-b857-f8ae1bb23855-cilium-config-path\") pod \"cilium-tnt96\" (UID: \"1afd7b16-b339-47f7-b857-f8ae1bb23855\") " pod="kube-system/cilium-tnt96" Jan 13 20:35:35.845482 kubelet[2868]: I0113 20:35:35.844886 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1afd7b16-b339-47f7-b857-f8ae1bb23855-cilium-ipsec-secrets\") pod \"cilium-tnt96\" (UID: \"1afd7b16-b339-47f7-b857-f8ae1bb23855\") " pod="kube-system/cilium-tnt96" Jan 13 20:35:35.845482 kubelet[2868]: I0113 20:35:35.844913 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1afd7b16-b339-47f7-b857-f8ae1bb23855-cilium-cgroup\") pod \"cilium-tnt96\" (UID: \"1afd7b16-b339-47f7-b857-f8ae1bb23855\") " pod="kube-system/cilium-tnt96" Jan 13 
20:35:35.845482 kubelet[2868]: I0113 20:35:35.844937 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1afd7b16-b339-47f7-b857-f8ae1bb23855-hubble-tls\") pod \"cilium-tnt96\" (UID: \"1afd7b16-b339-47f7-b857-f8ae1bb23855\") " pod="kube-system/cilium-tnt96" Jan 13 20:35:35.845482 kubelet[2868]: I0113 20:35:35.844962 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qtpp\" (UniqueName: \"kubernetes.io/projected/1afd7b16-b339-47f7-b857-f8ae1bb23855-kube-api-access-7qtpp\") pod \"cilium-tnt96\" (UID: \"1afd7b16-b339-47f7-b857-f8ae1bb23855\") " pod="kube-system/cilium-tnt96" Jan 13 20:35:35.845482 kubelet[2868]: I0113 20:35:35.844984 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1afd7b16-b339-47f7-b857-f8ae1bb23855-cni-path\") pod \"cilium-tnt96\" (UID: \"1afd7b16-b339-47f7-b857-f8ae1bb23855\") " pod="kube-system/cilium-tnt96" Jan 13 20:35:35.845656 kubelet[2868]: I0113 20:35:35.845008 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1afd7b16-b339-47f7-b857-f8ae1bb23855-host-proc-sys-net\") pod \"cilium-tnt96\" (UID: \"1afd7b16-b339-47f7-b857-f8ae1bb23855\") " pod="kube-system/cilium-tnt96" Jan 13 20:35:35.845656 kubelet[2868]: I0113 20:35:35.845029 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1afd7b16-b339-47f7-b857-f8ae1bb23855-hostproc\") pod \"cilium-tnt96\" (UID: \"1afd7b16-b339-47f7-b857-f8ae1bb23855\") " pod="kube-system/cilium-tnt96" Jan 13 20:35:35.845656 kubelet[2868]: I0113 20:35:35.845052 2868 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1afd7b16-b339-47f7-b857-f8ae1bb23855-clustermesh-secrets\") pod \"cilium-tnt96\" (UID: \"1afd7b16-b339-47f7-b857-f8ae1bb23855\") " pod="kube-system/cilium-tnt96" Jan 13 20:35:35.851014 systemd[1]: Started sshd@27-10.0.0.43:22-10.0.0.1:52378.service - OpenSSH per-connection server daemon (10.0.0.1:52378). Jan 13 20:35:35.851477 systemd[1]: sshd@26-10.0.0.43:22-10.0.0.1:52364.service: Deactivated successfully. Jan 13 20:35:35.854196 systemd-logind[1591]: Session 27 logged out. Waiting for processes to exit. Jan 13 20:35:35.855467 systemd[1]: session-27.scope: Deactivated successfully. Jan 13 20:35:35.856726 systemd-logind[1591]: Removed session 27. Jan 13 20:35:35.883547 sshd[4699]: Accepted publickey for core from 10.0.0.1 port 52378 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:35:35.884941 sshd-session[4699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:35:35.888691 systemd-logind[1591]: New session 28 of user core. Jan 13 20:35:35.903262 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 13 20:35:36.044141 kubelet[2868]: E0113 20:35:36.044102 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:36.044646 containerd[1611]: time="2025-01-13T20:35:36.044615567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tnt96,Uid:1afd7b16-b339-47f7-b857-f8ae1bb23855,Namespace:kube-system,Attempt:0,}"
Jan 13 20:35:36.064741 containerd[1611]: time="2025-01-13T20:35:36.064559758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:35:36.065424 containerd[1611]: time="2025-01-13T20:35:36.065222874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:35:36.065424 containerd[1611]: time="2025-01-13T20:35:36.065245036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:35:36.065424 containerd[1611]: time="2025-01-13T20:35:36.065350546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:35:36.104141 containerd[1611]: time="2025-01-13T20:35:36.104098148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tnt96,Uid:1afd7b16-b339-47f7-b857-f8ae1bb23855,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0f09b0da9f96af14ce61052bf38e52de18799cdd6b62e80060a3e7e2ddb8bcd\""
Jan 13 20:35:36.104769 kubelet[2868]: E0113 20:35:36.104750 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:36.106401 containerd[1611]: time="2025-01-13T20:35:36.106366002Z" level=info msg="CreateContainer within sandbox \"c0f09b0da9f96af14ce61052bf38e52de18799cdd6b62e80060a3e7e2ddb8bcd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 20:35:36.121855 containerd[1611]: time="2025-01-13T20:35:36.121814402Z" level=info msg="CreateContainer within sandbox \"c0f09b0da9f96af14ce61052bf38e52de18799cdd6b62e80060a3e7e2ddb8bcd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"462a4a1bc670e5b6882f2472f019335da8cfddb00c109fdbd8d70e20b1e00e32\""
Jan 13 20:35:36.123432 containerd[1611]: time="2025-01-13T20:35:36.122422764Z" level=info msg="StartContainer for \"462a4a1bc670e5b6882f2472f019335da8cfddb00c109fdbd8d70e20b1e00e32\""
Jan 13 20:35:36.183086 containerd[1611]: time="2025-01-13T20:35:36.183033331Z" level=info msg="StartContainer for \"462a4a1bc670e5b6882f2472f019335da8cfddb00c109fdbd8d70e20b1e00e32\" returns successfully"
Jan 13 20:35:36.224376 containerd[1611]: time="2025-01-13T20:35:36.224307938Z" level=info msg="shim disconnected" id=462a4a1bc670e5b6882f2472f019335da8cfddb00c109fdbd8d70e20b1e00e32 namespace=k8s.io
Jan 13 20:35:36.224376 containerd[1611]: time="2025-01-13T20:35:36.224368393Z" level=warning msg="cleaning up after shim disconnected" id=462a4a1bc670e5b6882f2472f019335da8cfddb00c109fdbd8d70e20b1e00e32 namespace=k8s.io
Jan 13 20:35:36.224376 containerd[1611]: time="2025-01-13T20:35:36.224379534Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:35:36.361866 kubelet[2868]: I0113 20:35:36.361544 2868 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="234c5c45-b959-4288-9964-20f45bb4aa01" path="/var/lib/kubelet/pods/234c5c45-b959-4288-9964-20f45bb4aa01/volumes"
Jan 13 20:35:36.362327 kubelet[2868]: I0113 20:35:36.362278 2868 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="94863509-97b6-4f9e-97b2-7df3a8beef84" path="/var/lib/kubelet/pods/94863509-97b6-4f9e-97b2-7df3a8beef84/volumes"
Jan 13 20:35:36.597206 kubelet[2868]: E0113 20:35:36.597176 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:36.599101 containerd[1611]: time="2025-01-13T20:35:36.599064878Z" level=info msg="CreateContainer within sandbox \"c0f09b0da9f96af14ce61052bf38e52de18799cdd6b62e80060a3e7e2ddb8bcd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 20:35:36.626821 containerd[1611]: time="2025-01-13T20:35:36.626616502Z" level=info msg="CreateContainer within sandbox \"c0f09b0da9f96af14ce61052bf38e52de18799cdd6b62e80060a3e7e2ddb8bcd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e7767f9210503263bd8381024aa47d92dbc2e00024d1eb65f40d4e0f68323cb2\""
Jan 13 20:35:36.627526 containerd[1611]: time="2025-01-13T20:35:36.627489706Z" level=info msg="StartContainer for \"e7767f9210503263bd8381024aa47d92dbc2e00024d1eb65f40d4e0f68323cb2\""
Jan 13 20:35:36.727849 containerd[1611]: time="2025-01-13T20:35:36.727765283Z" level=info msg="StartContainer for \"e7767f9210503263bd8381024aa47d92dbc2e00024d1eb65f40d4e0f68323cb2\" returns successfully"
Jan 13 20:35:36.773460 containerd[1611]: time="2025-01-13T20:35:36.772943371Z" level=info msg="shim disconnected" id=e7767f9210503263bd8381024aa47d92dbc2e00024d1eb65f40d4e0f68323cb2 namespace=k8s.io
Jan 13 20:35:36.773460 containerd[1611]: time="2025-01-13T20:35:36.773009887Z" level=warning msg="cleaning up after shim disconnected" id=e7767f9210503263bd8381024aa47d92dbc2e00024d1eb65f40d4e0f68323cb2 namespace=k8s.io
Jan 13 20:35:36.773460 containerd[1611]: time="2025-01-13T20:35:36.773022140Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:35:37.601171 kubelet[2868]: E0113 20:35:37.601137 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:37.603114 containerd[1611]: time="2025-01-13T20:35:37.602954768Z" level=info msg="CreateContainer within sandbox \"c0f09b0da9f96af14ce61052bf38e52de18799cdd6b62e80060a3e7e2ddb8bcd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 20:35:37.622999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1486845285.mount: Deactivated successfully.
Jan 13 20:35:37.631111 containerd[1611]: time="2025-01-13T20:35:37.631058047Z" level=info msg="CreateContainer within sandbox \"c0f09b0da9f96af14ce61052bf38e52de18799cdd6b62e80060a3e7e2ddb8bcd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c1f74e7d5bd8390a1fde4a7da2d9ef9d1871cbd8fafd0f35564e7a6dbf78657a\""
Jan 13 20:35:37.631880 containerd[1611]: time="2025-01-13T20:35:37.631787057Z" level=info msg="StartContainer for \"c1f74e7d5bd8390a1fde4a7da2d9ef9d1871cbd8fafd0f35564e7a6dbf78657a\""
Jan 13 20:35:37.716466 containerd[1611]: time="2025-01-13T20:35:37.716391189Z" level=info msg="StartContainer for \"c1f74e7d5bd8390a1fde4a7da2d9ef9d1871cbd8fafd0f35564e7a6dbf78657a\" returns successfully"
Jan 13 20:35:37.761706 containerd[1611]: time="2025-01-13T20:35:37.761610728Z" level=info msg="shim disconnected" id=c1f74e7d5bd8390a1fde4a7da2d9ef9d1871cbd8fafd0f35564e7a6dbf78657a namespace=k8s.io
Jan 13 20:35:37.761706 containerd[1611]: time="2025-01-13T20:35:37.761683877Z" level=warning msg="cleaning up after shim disconnected" id=c1f74e7d5bd8390a1fde4a7da2d9ef9d1871cbd8fafd0f35564e7a6dbf78657a namespace=k8s.io
Jan 13 20:35:37.761706 containerd[1611]: time="2025-01-13T20:35:37.761699696Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:35:37.952216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1f74e7d5bd8390a1fde4a7da2d9ef9d1871cbd8fafd0f35564e7a6dbf78657a-rootfs.mount: Deactivated successfully.
Jan 13 20:35:38.361386 kubelet[2868]: E0113 20:35:38.360739 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:38.440327 kubelet[2868]: E0113 20:35:38.438021 2868 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:35:38.605596 kubelet[2868]: E0113 20:35:38.605555 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:38.608492 containerd[1611]: time="2025-01-13T20:35:38.608353544Z" level=info msg="CreateContainer within sandbox \"c0f09b0da9f96af14ce61052bf38e52de18799cdd6b62e80060a3e7e2ddb8bcd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 20:35:38.635135 containerd[1611]: time="2025-01-13T20:35:38.634996008Z" level=info msg="CreateContainer within sandbox \"c0f09b0da9f96af14ce61052bf38e52de18799cdd6b62e80060a3e7e2ddb8bcd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"be58f09708234d2bc45b3fa4e5f9c5bacfa179954e9a87707b2f2eecbbc8497f\""
Jan 13 20:35:38.635731 containerd[1611]: time="2025-01-13T20:35:38.635674603Z" level=info msg="StartContainer for \"be58f09708234d2bc45b3fa4e5f9c5bacfa179954e9a87707b2f2eecbbc8497f\""
Jan 13 20:35:38.728181 containerd[1611]: time="2025-01-13T20:35:38.727701861Z" level=info msg="StartContainer for \"be58f09708234d2bc45b3fa4e5f9c5bacfa179954e9a87707b2f2eecbbc8497f\" returns successfully"
Jan 13 20:35:38.782614 containerd[1611]: time="2025-01-13T20:35:38.782251700Z" level=info msg="shim disconnected" id=be58f09708234d2bc45b3fa4e5f9c5bacfa179954e9a87707b2f2eecbbc8497f namespace=k8s.io
Jan 13 20:35:38.782614 containerd[1611]: time="2025-01-13T20:35:38.782332102Z" level=warning msg="cleaning up after shim disconnected" id=be58f09708234d2bc45b3fa4e5f9c5bacfa179954e9a87707b2f2eecbbc8497f namespace=k8s.io
Jan 13 20:35:38.782614 containerd[1611]: time="2025-01-13T20:35:38.782343784Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:35:38.802984 containerd[1611]: time="2025-01-13T20:35:38.802921188Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:35:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 20:35:38.952069 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be58f09708234d2bc45b3fa4e5f9c5bacfa179954e9a87707b2f2eecbbc8497f-rootfs.mount: Deactivated successfully.
Jan 13 20:35:39.611483 kubelet[2868]: E0113 20:35:39.611438 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:39.614458 containerd[1611]: time="2025-01-13T20:35:39.614412749Z" level=info msg="CreateContainer within sandbox \"c0f09b0da9f96af14ce61052bf38e52de18799cdd6b62e80060a3e7e2ddb8bcd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 20:35:39.797196 containerd[1611]: time="2025-01-13T20:35:39.797108917Z" level=info msg="CreateContainer within sandbox \"c0f09b0da9f96af14ce61052bf38e52de18799cdd6b62e80060a3e7e2ddb8bcd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"284afaa12941b6bc653c796a039ffa25ef1aaadc52102aedb29e5cb0a5940fb5\""
Jan 13 20:35:39.797954 containerd[1611]: time="2025-01-13T20:35:39.797909833Z" level=info msg="StartContainer for \"284afaa12941b6bc653c796a039ffa25ef1aaadc52102aedb29e5cb0a5940fb5\""
Jan 13 20:35:39.917902 containerd[1611]: time="2025-01-13T20:35:39.917602766Z" level=info msg="StartContainer for \"284afaa12941b6bc653c796a039ffa25ef1aaadc52102aedb29e5cb0a5940fb5\" returns successfully"
Jan 13 20:35:40.375706 kubelet[2868]: I0113 20:35:40.375660 2868 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:35:40Z","lastTransitionTime":"2025-01-13T20:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 20:35:40.391147 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 13 20:35:40.617462 kubelet[2868]: E0113 20:35:40.617424 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:42.045614 kubelet[2868]: E0113 20:35:42.045483 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:43.359786 kubelet[2868]: E0113 20:35:43.359735 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:43.661462 systemd-networkd[1246]: lxc_health: Link UP
Jan 13 20:35:43.665844 systemd-networkd[1246]: lxc_health: Gained carrier
Jan 13 20:35:44.047576 kubelet[2868]: E0113 20:35:44.047543 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:44.064896 kubelet[2868]: I0113 20:35:44.064861 2868 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-tnt96" podStartSLOduration=9.064818234 podStartE2EDuration="9.064818234s" podCreationTimestamp="2025-01-13 20:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:35:40.654237282 +0000 UTC m=+92.407372461" watchObservedRunningTime="2025-01-13 20:35:44.064818234 +0000 UTC m=+95.817953433"
Jan 13 20:35:44.627949 kubelet[2868]: E0113 20:35:44.627872 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:44.898009 systemd-networkd[1246]: lxc_health: Gained IPv6LL
Jan 13 20:35:45.359268 kubelet[2868]: E0113 20:35:45.359229 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:45.630101 kubelet[2868]: E0113 20:35:45.629964 2868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:35:50.793238 sshd[4705]: Connection closed by 10.0.0.1 port 52378
Jan 13 20:35:50.793941 sshd-session[4699]: pam_unix(sshd:session): session closed for user core
Jan 13 20:35:50.799922 systemd[1]: sshd@27-10.0.0.43:22-10.0.0.1:52378.service: Deactivated successfully.
Jan 13 20:35:50.803064 systemd-logind[1591]: Session 28 logged out. Waiting for processes to exit.
Jan 13 20:35:50.803299 systemd[1]: session-28.scope: Deactivated successfully.
Jan 13 20:35:50.804731 systemd-logind[1591]: Removed session 28.