Jan 13 21:24:23.864788 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 13 21:24:23.864810 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:24:23.864821 kernel: BIOS-provided physical RAM map:
Jan 13 21:24:23.864827 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 21:24:23.864833 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 21:24:23.864839 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 21:24:23.864846 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 13 21:24:23.864853 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 13 21:24:23.864859 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 13 21:24:23.864867 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 13 21:24:23.864900 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 21:24:23.864906 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 21:24:23.864913 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 13 21:24:23.864919 kernel: NX (Execute Disable) protection: active
Jan 13 21:24:23.864927 kernel: APIC: Static calls initialized
Jan 13 21:24:23.864936 kernel: SMBIOS 2.8 present.
Jan 13 21:24:23.864943 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 13 21:24:23.864950 kernel: Hypervisor detected: KVM
Jan 13 21:24:23.864957 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 21:24:23.864963 kernel: kvm-clock: using sched offset of 2198897107 cycles
Jan 13 21:24:23.864970 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 21:24:23.864978 kernel: tsc: Detected 2794.748 MHz processor
Jan 13 21:24:23.864985 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 21:24:23.864992 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 21:24:23.864999 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 13 21:24:23.865008 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 21:24:23.865015 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 21:24:23.865021 kernel: Using GB pages for direct mapping
Jan 13 21:24:23.865028 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:24:23.865035 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 13 21:24:23.865042 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:24:23.865049 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:24:23.865056 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:24:23.865065 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 13 21:24:23.865071 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:24:23.865078 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:24:23.865085 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:24:23.865092 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:24:23.865098 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 13 21:24:23.865105 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 13 21:24:23.865116 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 13 21:24:23.865125 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 13 21:24:23.865132 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 13 21:24:23.865139 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 13 21:24:23.865146 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 13 21:24:23.865153 kernel: No NUMA configuration found
Jan 13 21:24:23.865160 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 13 21:24:23.865167 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 13 21:24:23.865177 kernel: Zone ranges:
Jan 13 21:24:23.865184 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 21:24:23.865191 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 13 21:24:23.865198 kernel: Normal empty
Jan 13 21:24:23.865205 kernel: Movable zone start for each node
Jan 13 21:24:23.865212 kernel: Early memory node ranges
Jan 13 21:24:23.865219 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 21:24:23.865226 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 13 21:24:23.865233 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 13 21:24:23.865243 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:24:23.865250 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 21:24:23.865257 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 13 21:24:23.865264 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 21:24:23.865271 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 21:24:23.865278 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 21:24:23.865285 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 21:24:23.865292 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 21:24:23.865300 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 21:24:23.865309 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 21:24:23.865316 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 21:24:23.865323 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 21:24:23.865330 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 21:24:23.865337 kernel: TSC deadline timer available
Jan 13 21:24:23.865344 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 13 21:24:23.865352 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 21:24:23.865359 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 13 21:24:23.865366 kernel: kvm-guest: setup PV sched yield
Jan 13 21:24:23.865373 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 13 21:24:23.865382 kernel: Booting paravirtualized kernel on KVM
Jan 13 21:24:23.865390 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 21:24:23.865397 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 13 21:24:23.865404 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 13 21:24:23.865411 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 13 21:24:23.865418 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 13 21:24:23.865425 kernel: kvm-guest: PV spinlocks enabled
Jan 13 21:24:23.865432 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 21:24:23.865441 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:24:23.865451 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:24:23.865458 kernel: random: crng init done
Jan 13 21:24:23.865465 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 21:24:23.865472 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:24:23.865480 kernel: Fallback order for Node 0: 0
Jan 13 21:24:23.865487 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 13 21:24:23.865494 kernel: Policy zone: DMA32
Jan 13 21:24:23.865501 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:24:23.865511 kernel: Memory: 2434596K/2571752K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 136896K reserved, 0K cma-reserved)
Jan 13 21:24:23.865518 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 21:24:23.865525 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 13 21:24:23.865532 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 21:24:23.865539 kernel: Dynamic Preempt: voluntary
Jan 13 21:24:23.865547 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:24:23.865554 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:24:23.865562 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 21:24:23.865569 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:24:23.865579 kernel: Rude variant of Tasks RCU enabled.
Jan 13 21:24:23.865586 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:24:23.865593 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:24:23.865600 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 21:24:23.865608 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 13 21:24:23.865615 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:24:23.865622 kernel: Console: colour VGA+ 80x25
Jan 13 21:24:23.865629 kernel: printk: console [ttyS0] enabled
Jan 13 21:24:23.865636 kernel: ACPI: Core revision 20230628
Jan 13 21:24:23.865646 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 13 21:24:23.865653 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 21:24:23.865660 kernel: x2apic enabled
Jan 13 21:24:23.865667 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 21:24:23.865674 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 13 21:24:23.865681 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 13 21:24:23.865689 kernel: kvm-guest: setup PV IPIs
Jan 13 21:24:23.865706 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 21:24:23.865713 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 13 21:24:23.865721 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 13 21:24:23.865728 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 13 21:24:23.865736 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 13 21:24:23.865745 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 13 21:24:23.865753 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 21:24:23.865760 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 21:24:23.865768 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 21:24:23.865776 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 21:24:23.865786 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 13 21:24:23.865793 kernel: RETBleed: Mitigation: untrained return thunk
Jan 13 21:24:23.865801 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 21:24:23.865808 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 21:24:23.865816 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 13 21:24:23.865824 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 13 21:24:23.865832 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 13 21:24:23.865840 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 21:24:23.865852 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 21:24:23.865860 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 21:24:23.865879 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 21:24:23.865887 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 13 21:24:23.865908 kernel: Freeing SMP alternatives memory: 32K
Jan 13 21:24:23.865923 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:24:23.865931 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:24:23.865938 kernel: landlock: Up and running.
Jan 13 21:24:23.865946 kernel: SELinux: Initializing.
Jan 13 21:24:23.865956 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:24:23.865964 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:24:23.865972 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 13 21:24:23.865979 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:24:23.865987 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:24:23.865994 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:24:23.866002 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 13 21:24:23.866009 kernel: ... version: 0
Jan 13 21:24:23.866017 kernel: ... bit width: 48
Jan 13 21:24:23.866027 kernel: ... generic registers: 6
Jan 13 21:24:23.866034 kernel: ... value mask: 0000ffffffffffff
Jan 13 21:24:23.866042 kernel: ... max period: 00007fffffffffff
Jan 13 21:24:23.866049 kernel: ... fixed-purpose events: 0
Jan 13 21:24:23.866056 kernel: ... event mask: 000000000000003f
Jan 13 21:24:23.866064 kernel: signal: max sigframe size: 1776
Jan 13 21:24:23.866071 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:24:23.866079 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:24:23.866086 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:24:23.866096 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 21:24:23.866103 kernel: .... node #0, CPUs: #1 #2 #3
Jan 13 21:24:23.866111 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 21:24:23.866118 kernel: smpboot: Max logical packages: 1
Jan 13 21:24:23.866126 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 13 21:24:23.866133 kernel: devtmpfs: initialized
Jan 13 21:24:23.866141 kernel: x86/mm: Memory block size: 128MB
Jan 13 21:24:23.866148 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:24:23.866156 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 21:24:23.866166 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:24:23.866173 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:24:23.866180 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:24:23.866188 kernel: audit: type=2000 audit(1736803463.835:1): state=initialized audit_enabled=0 res=1
Jan 13 21:24:23.866195 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:24:23.866203 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 21:24:23.866210 kernel: cpuidle: using governor menu
Jan 13 21:24:23.866218 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:24:23.866225 kernel: dca service started, version 1.12.1
Jan 13 21:24:23.866235 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 13 21:24:23.866243 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 13 21:24:23.866250 kernel: PCI: Using configuration type 1 for base access
Jan 13 21:24:23.866258 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 21:24:23.866266 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:24:23.866273 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:24:23.866281 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:24:23.866288 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:24:23.866295 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:24:23.866305 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:24:23.866313 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:24:23.866320 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:24:23.866328 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 21:24:23.866335 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 21:24:23.866342 kernel: ACPI: Interpreter enabled
Jan 13 21:24:23.866350 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 21:24:23.866357 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 21:24:23.866365 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 21:24:23.866375 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 21:24:23.866382 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 13 21:24:23.866390 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:24:23.866564 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:24:23.866695 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 13 21:24:23.866815 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 13 21:24:23.866824 kernel: PCI host bridge to bus 0000:00
Jan 13 21:24:23.866973 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 21:24:23.867085 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 21:24:23.867194 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 21:24:23.867302 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 13 21:24:23.867411 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 13 21:24:23.867519 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 13 21:24:23.867628 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:24:23.867768 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 13 21:24:23.867918 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 13 21:24:23.868040 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 13 21:24:23.868159 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 13 21:24:23.868278 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 13 21:24:23.868395 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 21:24:23.868526 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 21:24:23.868665 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 13 21:24:23.868785 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 13 21:24:23.868935 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 13 21:24:23.869062 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 13 21:24:23.869180 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 13 21:24:23.869299 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 13 21:24:23.869417 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 13 21:24:23.869549 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 13 21:24:23.869670 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 13 21:24:23.869788 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 13 21:24:23.869928 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 13 21:24:23.870049 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 13 21:24:23.870176 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 13 21:24:23.870299 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 13 21:24:23.870424 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 13 21:24:23.870542 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 13 21:24:23.870659 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 13 21:24:23.870783 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 13 21:24:23.870923 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 13 21:24:23.870934 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 21:24:23.870946 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 21:24:23.870954 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 21:24:23.870961 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 21:24:23.870969 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 13 21:24:23.870976 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 13 21:24:23.870984 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 13 21:24:23.870991 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 13 21:24:23.870999 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 13 21:24:23.871006 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 13 21:24:23.871016 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 13 21:24:23.871024 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 13 21:24:23.871031 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 13 21:24:23.871039 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 13 21:24:23.871046 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 13 21:24:23.871054 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 13 21:24:23.871061 kernel: iommu: Default domain type: Translated
Jan 13 21:24:23.871069 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 21:24:23.871076 kernel: PCI: Using ACPI for IRQ routing
Jan 13 21:24:23.871086 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 21:24:23.871094 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 21:24:23.871101 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 13 21:24:23.871220 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 13 21:24:23.871338 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 13 21:24:23.871456 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 21:24:23.871466 kernel: vgaarb: loaded
Jan 13 21:24:23.871474 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 13 21:24:23.871485 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 13 21:24:23.871492 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 21:24:23.871500 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:24:23.871507 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:24:23.871515 kernel: pnp: PnP ACPI init
Jan 13 21:24:23.871643 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 13 21:24:23.871654 kernel: pnp: PnP ACPI: found 6 devices
Jan 13 21:24:23.871662 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 21:24:23.871670 kernel: NET: Registered PF_INET protocol family
Jan 13 21:24:23.871680 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:24:23.871688 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 21:24:23.871696 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:24:23.871703 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:24:23.871711 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 21:24:23.871718 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 21:24:23.871726 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:24:23.871734 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:24:23.871744 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:24:23.871751 kernel: NET: Registered PF_XDP protocol family
Jan 13 21:24:23.871862 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 21:24:23.871994 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 21:24:23.872104 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 21:24:23.872214 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 13 21:24:23.872323 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 13 21:24:23.872432 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 13 21:24:23.872442 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:24:23.872453 kernel: Initialise system trusted keyrings
Jan 13 21:24:23.872461 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 21:24:23.872469 kernel: Key type asymmetric registered
Jan 13 21:24:23.872476 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:24:23.872484 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 21:24:23.872491 kernel: io scheduler mq-deadline registered
Jan 13 21:24:23.872499 kernel: io scheduler kyber registered
Jan 13 21:24:23.872506 kernel: io scheduler bfq registered
Jan 13 21:24:23.872514 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 21:24:23.872524 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 13 21:24:23.872532 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 13 21:24:23.872540 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 13 21:24:23.872547 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:24:23.872555 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 21:24:23.872562 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 21:24:23.872570 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 21:24:23.872578 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 21:24:23.872729 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 13 21:24:23.872744 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 21:24:23.872859 kernel: rtc_cmos 00:04: registered as rtc0
Jan 13 21:24:23.873001 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T21:24:23 UTC (1736803463)
Jan 13 21:24:23.873114 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 13 21:24:23.873124 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 13 21:24:23.873132 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:24:23.873139 kernel: Segment Routing with IPv6
Jan 13 21:24:23.873147 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:24:23.873158 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:24:23.873166 kernel: Key type dns_resolver registered
Jan 13 21:24:23.873173 kernel: IPI shorthand broadcast: enabled
Jan 13 21:24:23.873181 kernel: sched_clock: Marking stable (610002697, 105469014)->(729368015, -13896304)
Jan 13 21:24:23.873188 kernel: registered taskstats version 1
Jan 13 21:24:23.873196 kernel: Loading compiled-in X.509 certificates
Jan 13 21:24:23.873204 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447'
Jan 13 21:24:23.873211 kernel: Key type .fscrypt registered
Jan 13 21:24:23.873218 kernel: Key type fscrypt-provisioning registered
Jan 13 21:24:23.873228 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:24:23.873236 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:24:23.873243 kernel: ima: No architecture policies found
Jan 13 21:24:23.873251 kernel: clk: Disabling unused clocks
Jan 13 21:24:23.873258 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 13 21:24:23.873266 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 21:24:23.873274 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 13 21:24:23.873281 kernel: Run /init as init process
Jan 13 21:24:23.873291 kernel: with arguments:
Jan 13 21:24:23.873298 kernel: /init
Jan 13 21:24:23.873305 kernel: with environment:
Jan 13 21:24:23.873313 kernel: HOME=/
Jan 13 21:24:23.873320 kernel: TERM=linux
Jan 13 21:24:23.873327 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:24:23.873337 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:24:23.873347 systemd[1]: Detected virtualization kvm.
Jan 13 21:24:23.873357 systemd[1]: Detected architecture x86-64.
Jan 13 21:24:23.873365 systemd[1]: Running in initrd.
Jan 13 21:24:23.873373 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:24:23.873381 systemd[1]: Hostname set to <localhost>.
Jan 13 21:24:23.873389 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:24:23.873397 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:24:23.873405 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:24:23.873413 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:24:23.873424 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:24:23.873444 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:24:23.873455 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:24:23.873464 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:24:23.873474 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:24:23.873484 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:24:23.873493 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:24:23.873501 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:24:23.873509 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:24:23.873518 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:24:23.873526 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:24:23.873534 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:24:23.873542 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:24:23.873553 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:24:23.873561 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:24:23.873570 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:24:23.873578 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:24:23.873587 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:24:23.873595 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:24:23.873603 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:24:23.873612 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:24:23.873620 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:24:23.873630 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:24:23.873639 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:24:23.873647 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:24:23.873655 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:24:23.873663 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:24:23.873672 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:24:23.873680 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:24:23.873688 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:24:23.873715 systemd-journald[192]: Collecting audit messages is disabled.
Jan 13 21:24:23.873736 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:24:23.873747 systemd-journald[192]: Journal started
Jan 13 21:24:23.873765 systemd-journald[192]: Runtime Journal (/run/log/journal/08b5b488152446f29131325ef904d1f4) is 6.0M, max 48.4M, 42.3M free.
Jan 13 21:24:23.859292 systemd-modules-load[194]: Inserted module 'overlay'
Jan 13 21:24:23.893927 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:24:23.893950 kernel: Bridge firewalling registered
Jan 13 21:24:23.893961 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:24:23.885988 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jan 13 21:24:23.894422 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:24:23.894913 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:24:23.907028 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:24:23.907686 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:24:23.910648 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:24:23.912885 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:24:23.917156 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:24:23.922223 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:24:23.927403 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:24:23.930606 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:24:23.938027 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:24:23.941865 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:24:23.944567 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:24:23.960780 dracut-cmdline[232]: dracut-dracut-053
Jan 13 21:24:23.964127 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:24:23.966864 systemd-resolved[224]: Positive Trust Anchors:
Jan 13 21:24:23.966898 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:24:23.966929 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:24:23.969538 systemd-resolved[224]: Defaulting to hostname 'linux'.
Jan 13 21:24:23.970520 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:24:23.971293 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:24:24.055906 kernel: SCSI subsystem initialized
Jan 13 21:24:24.064907 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:24:24.075910 kernel: iscsi: registered transport (tcp)
Jan 13 21:24:24.095971 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:24:24.095990 kernel: QLogic iSCSI HBA Driver
Jan 13 21:24:24.144571 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:24:24.149036 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:24:24.174190 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:24:24.174217 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:24:24.175211 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:24:24.215899 kernel: raid6: avx2x4 gen() 30622 MB/s
Jan 13 21:24:24.232892 kernel: raid6: avx2x2 gen() 31037 MB/s
Jan 13 21:24:24.249962 kernel: raid6: avx2x1 gen() 26059 MB/s
Jan 13 21:24:24.249977 kernel: raid6: using algorithm avx2x2 gen() 31037 MB/s
Jan 13 21:24:24.278895 kernel: raid6: .... xor() 19912 MB/s, rmw enabled
Jan 13 21:24:24.278916 kernel: raid6: using avx2x2 recovery algorithm
Jan 13 21:24:24.298904 kernel: xor: automatically using best checksumming function avx
Jan 13 21:24:24.452911 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:24:24.465227 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:24:24.480090 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:24:24.518188 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Jan 13 21:24:24.523632 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:24:24.529998 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 21:24:24.543684 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Jan 13 21:24:24.578900 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:24:24.591996 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:24:24.655187 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:24:24.665040 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 21:24:24.679451 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:24:24.681126 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:24:24.682655 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:24:24.689603 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:24:24.695898 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 13 21:24:24.712714 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 21:24:24.712729 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 13 21:24:24.712864 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 21:24:24.712896 kernel: GPT:9289727 != 19775487
Jan 13 21:24:24.712906 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 21:24:24.712922 kernel: GPT:9289727 != 19775487
Jan 13 21:24:24.712932 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 21:24:24.712942 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:24:24.699064 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 21:24:24.709863 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:24:24.711769 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:24:24.711831 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:24:24.722259 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:24:24.723754 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:24:24.723820 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:24:24.738199 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 21:24:24.725315 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:24:24.739443 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:24:24.748278 kernel: AES CTR mode by8 optimization enabled
Jan 13 21:24:24.748306 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (464)
Jan 13 21:24:24.751909 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (459)
Jan 13 21:24:24.751936 kernel: libata version 3.00 loaded.
Jan 13 21:24:24.758904 kernel: ahci 0000:00:1f.2: version 3.0
Jan 13 21:24:24.768767 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 13 21:24:24.768785 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 13 21:24:24.769052 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 13 21:24:24.769211 kernel: scsi host0: ahci
Jan 13 21:24:24.769416 kernel: scsi host1: ahci
Jan 13 21:24:24.769577 kernel: scsi host2: ahci
Jan 13 21:24:24.769745 kernel: scsi host3: ahci
Jan 13 21:24:24.770021 kernel: scsi host4: ahci
Jan 13 21:24:24.770181 kernel: scsi host5: ahci
Jan 13 21:24:24.770336 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 13 21:24:24.770350 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 13 21:24:24.770362 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 13 21:24:24.770375 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 13 21:24:24.770391 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 13 21:24:24.770404 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 13 21:24:24.762597 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 21:24:24.801568 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:24:24.810548 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 21:24:24.814279 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 21:24:24.814343 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 21:24:24.821211 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:24:24.834011 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 21:24:24.834734 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:24:24.854086 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:24:24.859840 disk-uuid[554]: Primary Header is updated.
Jan 13 21:24:24.859840 disk-uuid[554]: Secondary Entries is updated.
Jan 13 21:24:24.859840 disk-uuid[554]: Secondary Header is updated.
Jan 13 21:24:24.863897 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:24:24.867897 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:24:25.079008 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 13 21:24:25.079090 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 13 21:24:25.079101 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 13 21:24:25.079902 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 13 21:24:25.080897 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 13 21:24:25.081894 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 13 21:24:25.082898 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 13 21:24:25.082913 kernel: ata3.00: applying bridge limits
Jan 13 21:24:25.083972 kernel: ata3.00: configured for UDMA/100
Jan 13 21:24:25.084901 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 13 21:24:25.135436 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 13 21:24:25.147542 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 13 21:24:25.147562 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 13 21:24:25.869928 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:24:25.870251 disk-uuid[563]: The operation has completed successfully.
Jan 13 21:24:25.899062 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 21:24:25.899194 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 21:24:25.920054 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 21:24:25.923864 sh[591]: Success
Jan 13 21:24:25.935899 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 13 21:24:25.969163 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 21:24:25.979304 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 21:24:25.982262 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 21:24:25.996254 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686
Jan 13 21:24:25.996282 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:24:25.996294 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 21:24:25.997288 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 21:24:25.998887 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 21:24:26.002831 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 21:24:26.003473 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 21:24:26.016992 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 21:24:26.019503 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:24:26.031488 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:24:26.031519 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:24:26.031530 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:24:26.033891 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:24:26.042698 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 21:24:26.044638 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:24:26.054241 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:24:26.059081 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:24:26.108425 ignition[685]: Ignition 2.19.0
Jan 13 21:24:26.108484 ignition[685]: Stage: fetch-offline
Jan 13 21:24:26.108519 ignition[685]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:24:26.108529 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:24:26.108955 ignition[685]: parsed url from cmdline: ""
Jan 13 21:24:26.108959 ignition[685]: no config URL provided
Jan 13 21:24:26.108965 ignition[685]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:24:26.108974 ignition[685]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:24:26.109000 ignition[685]: op(1): [started] loading QEMU firmware config module
Jan 13 21:24:26.109009 ignition[685]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 13 21:24:26.116554 ignition[685]: op(1): [finished] loading QEMU firmware config module
Jan 13 21:24:26.139947 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:24:26.149037 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:24:26.160135 ignition[685]: parsing config with SHA512: 10f419c6f6e46cf2fc47a4b4f7d02107231a8506d53c4bf24070ca8b24ef032d098777304612b317f3cf9012d5570c38f18046e5a083851d36e1c23548eb7a1e
Jan 13 21:24:26.164925 unknown[685]: fetched base config from "system"
Jan 13 21:24:26.164947 unknown[685]: fetched user config from "qemu"
Jan 13 21:24:26.165500 ignition[685]: fetch-offline: fetch-offline passed
Jan 13 21:24:26.165572 ignition[685]: Ignition finished successfully
Jan 13 21:24:26.167467 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:24:26.174633 systemd-networkd[779]: lo: Link UP
Jan 13 21:24:26.174644 systemd-networkd[779]: lo: Gained carrier
Jan 13 21:24:26.176316 systemd-networkd[779]: Enumeration completed
Jan 13 21:24:26.176429 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:24:26.176754 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:24:26.176758 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:24:26.177565 systemd-networkd[779]: eth0: Link UP
Jan 13 21:24:26.177570 systemd-networkd[779]: eth0: Gained carrier
Jan 13 21:24:26.177578 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:24:26.178436 systemd[1]: Reached target network.target - Network.
Jan 13 21:24:26.180211 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 21:24:26.187055 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:24:26.193917 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 21:24:26.200045 ignition[782]: Ignition 2.19.0
Jan 13 21:24:26.200054 ignition[782]: Stage: kargs
Jan 13 21:24:26.200202 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:24:26.200213 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:24:26.201035 ignition[782]: kargs: kargs passed
Jan 13 21:24:26.201073 ignition[782]: Ignition finished successfully
Jan 13 21:24:26.204256 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:24:26.216028 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:24:26.229415 ignition[791]: Ignition 2.19.0
Jan 13 21:24:26.229428 ignition[791]: Stage: disks
Jan 13 21:24:26.229592 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:24:26.232345 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:24:26.229604 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:24:26.665730 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:24:26.230412 ignition[791]: disks: disks passed
Jan 13 21:24:26.667165 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:24:26.230457 ignition[791]: Ignition finished successfully
Jan 13 21:24:26.668381 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:24:26.669403 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:24:26.671485 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:24:26.683114 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:24:26.694755 systemd-fsck[800]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 21:24:26.701283 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:24:26.719967 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:24:26.802889 kernel: EXT4-fs (vda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none.
Jan 13 21:24:26.803764 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:24:26.805977 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:24:26.821940 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:24:26.823691 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:24:26.824022 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:24:26.824066 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:24:26.832157 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (808)
Jan 13 21:24:26.832180 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:24:26.824088 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:24:26.835965 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:24:26.835978 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:24:26.837900 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:24:26.838996 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:24:26.851755 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:24:26.861005 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:24:26.892323 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:24:26.896278 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:24:26.899698 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:24:26.903277 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:24:26.982883 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:24:26.990969 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:24:26.991774 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:24:26.997234 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:24:26.998706 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:24:27.015356 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:24:27.019132 ignition[923]: INFO : Ignition 2.19.0
Jan 13 21:24:27.019132 ignition[923]: INFO : Stage: mount
Jan 13 21:24:27.020752 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:24:27.020752 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:24:27.020752 ignition[923]: INFO : mount: mount passed
Jan 13 21:24:27.020752 ignition[923]: INFO : Ignition finished successfully
Jan 13 21:24:27.026214 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:24:27.039937 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:24:27.047774 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:24:27.058890 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (935)
Jan 13 21:24:27.061002 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:24:27.061025 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:24:27.061036 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:24:27.063901 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:24:27.065419 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:24:27.092341 ignition[952]: INFO : Ignition 2.19.0 Jan 13 21:24:27.092341 ignition[952]: INFO : Stage: files Jan 13 21:24:27.094021 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:24:27.094021 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:24:27.096732 ignition[952]: DEBUG : files: compiled without relabeling support, skipping Jan 13 21:24:27.098257 ignition[952]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 21:24:27.098257 ignition[952]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 21:24:27.101893 ignition[952]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 21:24:27.103396 ignition[952]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 21:24:27.103396 ignition[952]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 21:24:27.102530 unknown[952]: wrote ssh authorized keys file for user: core Jan 13 21:24:27.107409 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:24:27.107409 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 21:24:27.142144 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 21:24:27.239916 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:24:27.239916 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 21:24:27.244369 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 13 21:24:27.256015 systemd-networkd[779]: eth0: Gained IPv6LL Jan 13 21:24:27.570846 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 13 21:24:27.674983 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 21:24:27.674983 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 13 21:24:27.679536 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 21:24:27.679536 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:24:27.679536 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:24:27.679536 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:24:27.679536 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:24:27.679536 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:24:27.679536 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:24:27.679536 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:24:27.679536 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:24:27.679536 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 21:24:27.679536 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 21:24:27.679536 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 21:24:27.679536 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 13 21:24:28.105742 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 13 21:24:28.455565 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 21:24:28.455565 ignition[952]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 13 21:24:28.459592 ignition[952]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:24:28.459592 ignition[952]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:24:28.459592 ignition[952]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 13 21:24:28.459592 ignition[952]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 13 21:24:28.459592 ignition[952]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 21:24:28.459592 ignition[952]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 21:24:28.459592 ignition[952]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 13 21:24:28.459592 ignition[952]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 13 21:24:28.482204 ignition[952]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 21:24:28.486719 ignition[952]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 21:24:28.488302 ignition[952]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 13 21:24:28.488302 ignition[952]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 13 21:24:28.488302 ignition[952]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 21:24:28.488302 ignition[952]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:24:28.488302 
ignition[952]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:24:28.488302 ignition[952]: INFO : files: files passed Jan 13 21:24:28.488302 ignition[952]: INFO : Ignition finished successfully Jan 13 21:24:28.499864 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 21:24:28.511019 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:24:28.511841 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 21:24:28.519783 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 21:24:28.519936 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 21:24:28.525635 initrd-setup-root-after-ignition[980]: grep: /sysroot/oem/oem-release: No such file or directory Jan 13 21:24:28.529370 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:24:28.529370 initrd-setup-root-after-ignition[982]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:24:28.532786 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:24:28.534047 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:24:28.534234 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:24:28.547139 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 21:24:28.571306 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:24:28.571438 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 21:24:28.573919 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:24:28.576106 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:24:28.577185 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:24:28.587181 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:24:28.602907 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:24:28.628139 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 21:24:28.636774 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:24:28.639206 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:24:28.641623 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:24:28.643540 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:24:28.644614 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:24:28.647492 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:24:28.649708 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:24:28.651745 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:24:28.654135 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:24:28.656635 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:24:28.659060 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
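
The files stage that just finished replays, in order: creating the core user and its SSH keys, downloading the Helm, Cilium, and Kubernetes sysext artifacts, writing units, and applying presets (prepare-helm.service enabled, coreos-metadata.service disabled). Reconstructed as a spec-3 fragment, this corresponds approximately to the following sketch (unit bodies and key material elided; not the exact config used by this boot):

    {
      "ignition": { "version": "3.4.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 ... (elided)"] }
        ]
      },
      "storage": {
        "files": [
          { "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz" } },
          { "path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
            "contents": { "source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw" } }
        ],
        "links": [
          { "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true, "contents": "(elided)" },
          { "name": "coreos-metadata.service", "enabled": false, "contents": "(elided)" }
        ]
      }
    }
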
Jan 13 21:24:28.661171 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:24:28.663811 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:24:28.666107 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:24:28.668299 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:24:28.670081 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:24:28.671162 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:24:28.673486 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:24:28.675647 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:24:28.678013 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:24:28.678965 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:24:28.681579 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:24:28.682613 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:24:28.684898 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:24:28.686149 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:24:28.688834 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:24:28.690824 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:24:28.695937 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:24:28.697286 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:24:28.699633 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:24:28.700554 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:24:28.700655 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:24:28.702300 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:24:28.702389 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:24:28.704153 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:24:28.704274 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:24:28.707329 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:24:28.707435 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:24:28.720063 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:24:28.721115 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:24:28.721233 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:24:28.725016 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:24:28.727359 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:24:28.728441 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:24:28.730862 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:24:28.731988 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:24:28.737798 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:24:28.738926 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 21:24:28.751314 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 13 21:24:28.841339 ignition[1006]: INFO : Ignition 2.19.0 Jan 13 21:24:28.841339 ignition[1006]: INFO : Stage: umount Jan 13 21:24:28.843427 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:24:28.843427 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:24:28.843427 ignition[1006]: INFO : umount: umount passed Jan 13 21:24:28.843427 ignition[1006]: INFO : Ignition finished successfully Jan 13 21:24:28.849925 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:24:28.850123 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:24:28.852291 systemd[1]: Stopped target network.target - Network. Jan 13 21:24:28.854562 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:24:28.854634 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:24:28.857383 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:24:28.857442 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:24:28.860198 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:24:28.860257 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:24:28.861223 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:24:28.861274 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:24:28.863232 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:24:28.865148 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:24:28.871929 systemd-networkd[779]: eth0: DHCPv6 lease lost Jan 13 21:24:28.872266 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:24:28.872403 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:24:28.874528 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:24:28.874595 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:24:28.876435 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:24:28.876563 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:24:28.879063 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:24:28.879129 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:24:28.885052 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:24:28.885805 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:24:28.885857 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:24:28.886387 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:24:28.886433 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:24:28.886722 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:24:28.886775 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:24:28.887378 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:24:28.898504 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:24:28.898642 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:24:28.904060 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jan 13 21:24:28.904242 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:24:28.905404 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 21:24:28.905453 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:24:28.907527 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:24:28.907566 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:24:28.907836 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:24:28.907896 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:24:28.908700 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 21:24:28.908746 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:24:28.909512 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:24:28.909558 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:24:28.926215 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:24:28.927324 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:24:28.927404 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:24:28.928687 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:24:28.928736 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:24:28.933093 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:24:28.933223 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:24:29.281112 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:24:29.281256 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:24:29.283491 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:24:29.285255 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:24:29.285321 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:24:29.303071 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:24:29.312331 systemd[1]: Switching root. Jan 13 21:24:29.344593 systemd-journald[192]: Journal stopped Jan 13 21:24:30.748179 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Jan 13 21:24:30.748249 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 21:24:30.748267 kernel: SELinux: policy capability open_perms=1 Jan 13 21:24:30.748281 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 21:24:30.748293 kernel: SELinux: policy capability always_check_network=0 Jan 13 21:24:30.748304 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 21:24:30.748315 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 21:24:30.748327 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 21:24:30.748338 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 21:24:30.748349 kernel: audit: type=1403 audit(1736803469.975:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 21:24:30.748361 systemd[1]: Successfully loaded SELinux policy in 40.802ms. Jan 13 21:24:30.748388 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.398ms. 
Jan 13 21:24:30.748404 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:24:30.748416 systemd[1]: Detected virtualization kvm. Jan 13 21:24:30.748433 systemd[1]: Detected architecture x86-64. Jan 13 21:24:30.748448 systemd[1]: Detected first boot. Jan 13 21:24:30.748460 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:24:30.748477 zram_generator::config[1051]: No configuration found. Jan 13 21:24:30.748490 systemd[1]: Populated /etc with preset unit settings. Jan 13 21:24:30.748502 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 21:24:30.748517 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 21:24:30.748529 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 21:24:30.748541 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 21:24:30.748553 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 21:24:30.748565 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 21:24:30.748581 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 21:24:30.748597 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 21:24:30.748609 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 21:24:30.748624 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 21:24:30.748636 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 21:24:30.748648 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:24:30.748660 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:24:30.748672 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 21:24:30.748684 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 21:24:30.748700 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 21:24:30.748720 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:24:30.748732 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 21:24:30.748747 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:24:30.748762 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 21:24:30.748774 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 21:24:30.748786 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 21:24:30.748798 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 21:24:30.748811 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:24:30.748823 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:24:30.748835 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:24:30.748849 systemd[1]: Reached target swap.target - Swaps. 
Jan 13 21:24:30.748861 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 21:24:30.748886 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 21:24:30.748898 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:24:30.748911 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:24:30.748922 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:24:30.748934 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 21:24:30.748946 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 21:24:30.748958 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 21:24:30.748975 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 21:24:30.748988 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:24:30.748999 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 21:24:30.749011 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 21:24:30.749024 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 21:24:30.749040 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 21:24:30.749052 systemd[1]: Reached target machines.target - Containers. Jan 13 21:24:30.749064 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 21:24:30.749076 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:24:30.749091 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:24:30.749103 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 21:24:30.749116 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:24:30.749128 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:24:30.749139 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:24:30.749151 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 21:24:30.749163 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:24:30.749175 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 21:24:30.749190 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 21:24:30.749202 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 21:24:30.749216 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 21:24:30.749228 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 21:24:30.749240 kernel: fuse: init (API version 7.39) Jan 13 21:24:30.749252 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:24:30.749264 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:24:30.749276 kernel: loop: module loaded Jan 13 21:24:30.749287 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
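
The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop jobs above are instances of systemd's modprobe@.service template, which loads the kernel module named by the instance string. Abridged from the unit systemd ships (shown for context; the copy on this image may carry extra conditions):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %i
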
Jan 13 21:24:30.749306 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 21:24:30.749318 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:24:30.749330 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 21:24:30.749341 systemd[1]: Stopped verity-setup.service. Jan 13 21:24:30.749356 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:24:30.749368 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 21:24:30.749381 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 21:24:30.749392 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 21:24:30.749404 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 21:24:30.749435 systemd-journald[1125]: Collecting audit messages is disabled. Jan 13 21:24:30.749459 systemd-journald[1125]: Journal started Jan 13 21:24:30.749484 systemd-journald[1125]: Runtime Journal (/run/log/journal/08b5b488152446f29131325ef904d1f4) is 6.0M, max 48.4M, 42.3M free. Jan 13 21:24:30.522087 systemd[1]: Queued start job for default target multi-user.target. Jan 13 21:24:30.537573 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 21:24:30.538058 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 21:24:30.752163 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 21:24:30.754434 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:24:30.755422 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 21:24:30.756898 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:24:30.758677 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 21:24:30.758906 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 21:24:30.760668 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:24:30.760893 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:24:30.762564 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:24:30.762765 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:24:30.764649 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 21:24:30.764904 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 21:24:30.766565 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:24:30.766780 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:24:30.779897 kernel: ACPI: bus type drm_connector registered Jan 13 21:24:30.780097 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 21:24:30.782830 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:24:30.783042 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:24:30.784656 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:24:30.786289 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 21:24:30.788147 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 21:24:30.802919 systemd[1]: Reached target network-pre.target - Preparation for Network. 
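
The "Runtime Journal ... is 6.0M, max 48.4M" line above reflects journald's default sizing, which caps the volatile journal at a fraction of the /run filesystem. An explicit cap would be set with a drop-in along these lines (illustrative; this machine is using the computed defaults):

    # /etc/systemd/journald.conf.d/10-size.conf
    [Journal]
    RuntimeMaxUse=48M
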
Jan 13 21:24:30.811001 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 21:24:30.813644 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 21:24:30.814985 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 21:24:30.815016 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:24:30.817171 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 21:24:30.819665 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 21:24:30.824149 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 21:24:30.825570 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:24:30.829318 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 21:24:30.833015 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 21:24:30.837228 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:24:30.845377 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 21:24:30.846994 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:24:30.849104 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:24:30.854099 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 21:24:30.860136 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 21:24:30.863358 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 21:24:30.865142 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 21:24:30.867122 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 21:24:30.871764 systemd-journald[1125]: Time spent on flushing to /var/log/journal/08b5b488152446f29131325ef904d1f4 is 19.177ms for 952 entries. Jan 13 21:24:30.871764 systemd-journald[1125]: System Journal (/var/log/journal/08b5b488152446f29131325ef904d1f4) is 8.0M, max 195.6M, 187.6M free. Jan 13 21:24:30.879215 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 21:24:30.882341 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 21:24:30.896107 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 21:24:30.898031 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:24:30.908526 systemd-journald[1125]: Received client request to flush runtime journal. Jan 13 21:24:30.908562 kernel: loop0: detected capacity change from 0 to 142488 Jan 13 21:24:30.912087 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:24:30.914097 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 21:24:30.923893 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 21:24:30.926084 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 21:24:30.937016 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 21:24:30.938258 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 21:24:30.939997 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 21:24:30.945324 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 21:24:30.957023 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:24:30.958892 kernel: loop1: detected capacity change from 0 to 210664 Jan 13 21:24:31.034456 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Jan 13 21:24:31.034475 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Jan 13 21:24:31.035895 kernel: loop2: detected capacity change from 0 to 140768 Jan 13 21:24:31.041999 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:24:31.087218 kernel: loop3: detected capacity change from 0 to 142488 Jan 13 21:24:31.098028 kernel: loop4: detected capacity change from 0 to 210664 Jan 13 21:24:31.104895 kernel: loop5: detected capacity change from 0 to 140768 Jan 13 21:24:31.115911 (sd-merge)[1189]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 21:24:31.116520 (sd-merge)[1189]: Merged extensions into '/usr'. Jan 13 21:24:31.121574 systemd[1]: Reloading requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 21:24:31.121712 systemd[1]: Reloading... Jan 13 21:24:31.184903 zram_generator::config[1212]: No configuration found. Jan 13 21:24:31.339365 ldconfig[1160]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 21:24:31.353182 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:24:31.404577 systemd[1]: Reloading finished in 282 ms. Jan 13 21:24:31.445857 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 21:24:31.447605 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 21:24:31.463181 systemd[1]: Starting ensure-sysext.service... Jan 13 21:24:31.465325 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:24:31.473629 systemd[1]: Reloading requested from client PID 1252 ('systemctl') (unit ensure-sysext.service)... Jan 13 21:24:31.473642 systemd[1]: Reloading... Jan 13 21:24:31.494256 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 21:24:31.494646 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 21:24:31.495649 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 21:24:31.495981 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Jan 13 21:24:31.496062 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Jan 13 21:24:31.499787 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot. 
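
The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes images onto /usr; the kubernetes image is the .raw file Ignition linked into /etc/extensions earlier. An image is only merged if it carries an extension-release file whose ID matches the host OS, roughly like this (illustrative; the actual contents of these images are not shown in the log):

    # usr/lib/extension-release.d/extension-release.kubernetes, inside the .raw image
    ID=flatcar
    SYSEXT_LEVEL=1.0
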
Jan 13 21:24:31.499801 systemd-tmpfiles[1253]: Skipping /boot Jan 13 21:24:31.530740 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:24:31.530757 systemd-tmpfiles[1253]: Skipping /boot Jan 13 21:24:31.569889 zram_generator::config[1280]: No configuration found. Jan 13 21:24:31.666843 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:24:31.717778 systemd[1]: Reloading finished in 243 ms. Jan 13 21:24:31.735185 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 21:24:31.756301 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:24:31.764659 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:24:31.767132 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 21:24:31.769340 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 21:24:31.774090 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:24:31.777137 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:24:31.782141 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 21:24:31.787446 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:24:31.787736 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:24:31.793591 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:24:31.795845 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:24:31.799152 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:24:31.800335 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:24:31.803459 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 21:24:31.804537 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:24:31.805504 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:24:31.805675 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:24:31.811732 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 21:24:31.814495 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:24:31.814665 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:24:31.817597 systemd-udevd[1325]: Using default interface naming scheme 'v255'. Jan 13 21:24:31.822520 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:24:31.822761 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:24:31.826230 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
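
The systemd-tmpfiles "Duplicate line for path" messages above are warnings, not failures: two tmpfiles.d fragments declare the same path, and the first one parsed wins. Entries use the columnar Type/Path/Mode/User/Group/Age/Argument format, for example (an illustrative line of the kind being deduplicated here):

    d /var/log/journal 0755 root root -
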
Jan 13 21:24:31.831190 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:24:31.832916 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:24:31.834304 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 21:24:31.835949 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:24:31.837055 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 21:24:31.837748 augenrules[1349]: No rules Jan 13 21:24:31.839052 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:24:31.839330 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:24:31.841251 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:24:31.844950 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:24:31.845186 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:24:31.847003 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:24:31.847508 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:24:31.851611 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:24:31.854567 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 21:24:31.864797 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:24:31.874366 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 21:24:31.884893 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:24:31.885047 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:24:31.892576 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:24:31.897033 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:24:31.899972 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:24:31.907067 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:24:31.907612 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1364) Jan 13 21:24:31.908473 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:24:31.912122 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:24:31.913338 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:24:31.913375 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:24:31.913985 systemd[1]: Finished ensure-sysext.service. Jan 13 21:24:31.928173 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:24:31.928364 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:24:31.931514 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:24:31.932185 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:24:31.944780 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 21:24:31.952390 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:24:31.952586 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:24:31.961586 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:24:31.974095 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 21:24:31.978317 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:24:31.978503 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:24:31.981533 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:24:31.985894 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 13 21:24:32.005959 kernel: ACPI: button: Power Button [PWRF] Jan 13 21:24:32.006434 systemd-resolved[1323]: Positive Trust Anchors: Jan 13 21:24:32.006737 systemd-resolved[1323]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:24:32.006819 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:24:32.008314 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 21:24:32.012341 systemd-resolved[1323]: Defaulting to hostname 'linux'. Jan 13 21:24:32.017048 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 21:24:32.018338 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:24:32.019612 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:24:32.034968 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 13 21:24:32.040694 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 13 21:24:32.048256 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 13 21:24:32.048467 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 13 21:24:32.042257 systemd-networkd[1391]: lo: Link UP Jan 13 21:24:32.042262 systemd-networkd[1391]: lo: Gained carrier Jan 13 21:24:32.043805 systemd-networkd[1391]: Enumeration completed Jan 13 21:24:32.043888 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:24:32.044429 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:24:32.044433 systemd-networkd[1391]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 13 21:24:32.045295 systemd-networkd[1391]: eth0: Link UP Jan 13 21:24:32.045299 systemd-networkd[1391]: eth0: Gained carrier Jan 13 21:24:32.045312 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:24:32.045906 systemd[1]: Reached target network.target - Network. Jan 13 21:24:32.053041 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 21:24:32.054629 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 21:24:32.057925 systemd-networkd[1391]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 21:24:32.060518 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:24:32.072899 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 21:24:33.055252 systemd-timesyncd[1401]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 13 21:24:33.055311 systemd-timesyncd[1401]: Initial clock synchronization to Mon 2025-01-13 21:24:33.055173 UTC. Jan 13 21:24:33.055356 systemd-resolved[1323]: Clock change detected. Flushing caches. Jan 13 21:24:33.055414 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 21:24:33.057049 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 21:24:33.131859 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:24:33.140383 kernel: kvm_amd: TSC scaling supported Jan 13 21:24:33.140416 kernel: kvm_amd: Nested Virtualization enabled Jan 13 21:24:33.140429 kernel: kvm_amd: Nested Paging enabled Jan 13 21:24:33.141354 kernel: kvm_amd: LBR virtualization supported Jan 13 21:24:33.141374 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 13 21:24:33.142351 kernel: kvm_amd: Virtual GIF supported Jan 13 21:24:33.159299 kernel: EDAC MC: Ver: 3.0.0 Jan 13 21:24:33.186462 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 21:24:33.201409 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 21:24:33.212164 lvm[1422]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:24:33.240384 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 21:24:33.241912 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:24:33.243065 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:24:33.244294 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:24:33.245626 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:24:33.247109 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:24:33.248375 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:24:33.249800 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:24:33.251067 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:24:33.251099 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:24:33.252035 systemd[1]: Reached target timers.target - Timer Units. 
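
In the enumeration above, eth0 matched the catch-all /usr/lib/systemd/network/zz-default.network and re-acquired 10.0.0.97/16 over DHCP (the same lease the initrd obtained earlier). The shipped default is approximately this (abridged; Flatcar's real file carries additional [DHCP] options):

    [Match]
    Name=*

    [Network]
    DHCP=yes
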
Jan 13 21:24:33.253529 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:24:33.256021 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:24:33.265236 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:24:33.267484 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:24:33.269042 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:24:33.270291 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:24:33.271266 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:24:33.272270 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:24:33.272306 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:24:33.273262 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 21:24:33.275381 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:24:33.278350 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:24:33.278370 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:24:33.285205 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 21:24:33.286377 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:24:33.288580 jq[1429]: false Jan 13 21:24:33.289795 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:24:33.292421 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 21:24:33.294671 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 21:24:33.306819 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 21:24:33.308153 dbus-daemon[1428]: [system] SELinux support is enabled Jan 13 21:24:33.312819 extend-filesystems[1430]: Found loop3 Jan 13 21:24:33.313825 extend-filesystems[1430]: Found loop4 Jan 13 21:24:33.313825 extend-filesystems[1430]: Found loop5 Jan 13 21:24:33.313825 extend-filesystems[1430]: Found sr0 Jan 13 21:24:33.313825 extend-filesystems[1430]: Found vda Jan 13 21:24:33.313825 extend-filesystems[1430]: Found vda1 Jan 13 21:24:33.313825 extend-filesystems[1430]: Found vda2 Jan 13 21:24:33.313825 extend-filesystems[1430]: Found vda3 Jan 13 21:24:33.313825 extend-filesystems[1430]: Found usr Jan 13 21:24:33.313825 extend-filesystems[1430]: Found vda4 Jan 13 21:24:33.313825 extend-filesystems[1430]: Found vda6 Jan 13 21:24:33.313825 extend-filesystems[1430]: Found vda7 Jan 13 21:24:33.313825 extend-filesystems[1430]: Found vda9 Jan 13 21:24:33.313825 extend-filesystems[1430]: Checking size of /dev/vda9 Jan 13 21:24:33.327263 extend-filesystems[1430]: Resized partition /dev/vda9 Jan 13 21:24:33.332298 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 13 21:24:33.315328 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 13 21:24:33.332449 extend-filesystems[1448]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:24:33.350600 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1361) Jan 13 21:24:33.324601 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 21:24:33.329692 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 21:24:33.330737 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 21:24:33.337212 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 21:24:33.342647 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 21:24:33.348608 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:24:33.352669 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 21:24:33.353053 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:24:33.354536 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:24:33.354806 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 21:24:33.356604 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 13 21:24:33.356639 jq[1452]: true Jan 13 21:24:33.368173 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:24:33.370688 update_engine[1450]: I20250113 21:24:33.368497 1450 main.cc:92] Flatcar Update Engine starting Jan 13 21:24:33.370688 update_engine[1450]: I20250113 21:24:33.369928 1450 update_check_scheduler.cc:74] Next update check in 4m36s Jan 13 21:24:33.368476 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 21:24:33.375300 extend-filesystems[1448]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 21:24:33.375300 extend-filesystems[1448]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 21:24:33.375300 extend-filesystems[1448]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 13 21:24:33.382798 extend-filesystems[1430]: Resized filesystem in /dev/vda9 Jan 13 21:24:33.378015 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:24:33.378327 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:24:33.384751 (ntainerd)[1456]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:24:33.391467 jq[1455]: true Jan 13 21:24:33.406225 tar[1454]: linux-amd64/helm Jan 13 21:24:33.408348 systemd-logind[1442]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 21:24:33.408381 systemd-logind[1442]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 21:24:33.411202 systemd-logind[1442]: New seat seat0. Jan 13 21:24:33.414833 systemd[1]: Started update-engine.service - Update Engine. Jan 13 21:24:33.419828 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:24:33.423998 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
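
The extend-filesystems output above records an online grow of the root filesystem from 553472 to 1864699 4k blocks after the partition was enlarged. The equivalent manual step, which resize2fs performs on the mounted filesystem, would be roughly (illustrative; extend-filesystems.service automates it):

    resize2fs /dev/vda9
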
Jan 13 21:24:33.424236 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:24:33.429518 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:24:33.429675 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 21:24:33.443513 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:24:33.467140 bash[1483]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:24:33.469617 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:24:33.475099 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 13 21:24:33.518440 locksmithd[1484]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:24:33.545962 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:24:33.582997 sshd_keygen[1449]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:24:33.611298 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:24:33.651874 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:24:33.660347 systemd[1]: Started sshd@0-10.0.0.97:22-10.0.0.1:54570.service - OpenSSH per-connection server daemon (10.0.0.1:54570). Jan 13 21:24:33.662543 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:24:33.662776 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:24:33.672888 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:24:33.709191 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:24:33.720643 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:24:33.724340 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:24:33.725604 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:24:33.750646 sshd[1506]: Accepted publickey for core from 10.0.0.1 port 54570 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:24:33.753431 sshd[1506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:33.762335 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:24:33.779588 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:24:33.784051 systemd-logind[1442]: New session 1 of user core. Jan 13 21:24:33.791114 containerd[1456]: time="2025-01-13T21:24:33.790964583Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 21:24:33.799610 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:24:33.839679 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:24:33.840916 containerd[1456]: time="2025-01-13T21:24:33.840791025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:24:33.842809 containerd[1456]: time="2025-01-13T21:24:33.842758485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:24:33.842809 containerd[1456]: time="2025-01-13T21:24:33.842805453Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:24:33.842862 containerd[1456]: time="2025-01-13T21:24:33.842826974Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:24:33.843032 containerd[1456]: time="2025-01-13T21:24:33.843011600Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:24:33.843055 containerd[1456]: time="2025-01-13T21:24:33.843031477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:24:33.843124 containerd[1456]: time="2025-01-13T21:24:33.843107239Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:24:33.843144 containerd[1456]: time="2025-01-13T21:24:33.843122678Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:24:33.843555 containerd[1456]: time="2025-01-13T21:24:33.843339595Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:24:33.843555 containerd[1456]: time="2025-01-13T21:24:33.843357839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:24:33.843555 containerd[1456]: time="2025-01-13T21:24:33.843370172Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:24:33.843555 containerd[1456]: time="2025-01-13T21:24:33.843379129Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:24:33.843555 containerd[1456]: time="2025-01-13T21:24:33.843477443Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:24:33.843739 containerd[1456]: time="2025-01-13T21:24:33.843707335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:24:33.843857 containerd[1456]: time="2025-01-13T21:24:33.843840033Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:24:33.843889 containerd[1456]: time="2025-01-13T21:24:33.843856244Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:24:33.843965 containerd[1456]: time="2025-01-13T21:24:33.843952144Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 13 21:24:33.844262 containerd[1456]: time="2025-01-13T21:24:33.844015563Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:24:33.844304 (systemd)[1519]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:24:33.859003 containerd[1456]: time="2025-01-13T21:24:33.858931836Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:24:33.859156 containerd[1456]: time="2025-01-13T21:24:33.859014521Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:24:33.859156 containerd[1456]: time="2025-01-13T21:24:33.859031843Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:24:33.859156 containerd[1456]: time="2025-01-13T21:24:33.859050308Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:24:33.859156 containerd[1456]: time="2025-01-13T21:24:33.859074854Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:24:33.859587 containerd[1456]: time="2025-01-13T21:24:33.859256845Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 21:24:33.859680 containerd[1456]: time="2025-01-13T21:24:33.859625066Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:24:33.859788 containerd[1456]: time="2025-01-13T21:24:33.859769888Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:24:33.859814 containerd[1456]: time="2025-01-13T21:24:33.859787901Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:24:33.859814 containerd[1456]: time="2025-01-13T21:24:33.859800535Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:24:33.859848 containerd[1456]: time="2025-01-13T21:24:33.859814401Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:24:33.859848 containerd[1456]: time="2025-01-13T21:24:33.859827435Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:24:33.859848 containerd[1456]: time="2025-01-13T21:24:33.859839468Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:24:33.859905 containerd[1456]: time="2025-01-13T21:24:33.859852873Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:24:33.859905 containerd[1456]: time="2025-01-13T21:24:33.859867981Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:24:33.859905 containerd[1456]: time="2025-01-13T21:24:33.859880906Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:24:33.859905 containerd[1456]: time="2025-01-13T21:24:33.859893770Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 13 21:24:33.859988 containerd[1456]: time="2025-01-13T21:24:33.859906293Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:24:33.859988 containerd[1456]: time="2025-01-13T21:24:33.859926080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:24:33.859988 containerd[1456]: time="2025-01-13T21:24:33.859939035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:24:33.859988 containerd[1456]: time="2025-01-13T21:24:33.859951989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:24:33.859988 containerd[1456]: time="2025-01-13T21:24:33.859963751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:24:33.859988 containerd[1456]: time="2025-01-13T21:24:33.859975293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:24:33.859988 containerd[1456]: time="2025-01-13T21:24:33.859988157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:24:33.860125 containerd[1456]: time="2025-01-13T21:24:33.859999929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 21:24:33.860125 containerd[1456]: time="2025-01-13T21:24:33.860012102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:24:33.860125 containerd[1456]: time="2025-01-13T21:24:33.860024766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 21:24:33.860125 containerd[1456]: time="2025-01-13T21:24:33.860038331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:24:33.860125 containerd[1456]: time="2025-01-13T21:24:33.860049392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:24:33.860125 containerd[1456]: time="2025-01-13T21:24:33.860071193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:24:33.860125 containerd[1456]: time="2025-01-13T21:24:33.860082754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:24:33.860125 containerd[1456]: time="2025-01-13T21:24:33.860110867Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:24:33.860260 containerd[1456]: time="2025-01-13T21:24:33.860133339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:24:33.860260 containerd[1456]: time="2025-01-13T21:24:33.860145993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:24:33.863011 containerd[1456]: time="2025-01-13T21:24:33.862973445Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:24:33.863437 containerd[1456]: time="2025-01-13T21:24:33.863410525Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 13 21:24:33.863460 containerd[1456]: time="2025-01-13T21:24:33.863438287Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:24:33.863460 containerd[1456]: time="2025-01-13T21:24:33.863455049Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:24:33.863505 containerd[1456]: time="2025-01-13T21:24:33.863470307Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:24:33.863505 containerd[1456]: time="2025-01-13T21:24:33.863482530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:24:33.863539 containerd[1456]: time="2025-01-13T21:24:33.863508268Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:24:33.863539 containerd[1456]: time="2025-01-13T21:24:33.863519239Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:24:33.863539 containerd[1456]: time="2025-01-13T21:24:33.863532744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 21:24:33.863852 containerd[1456]: time="2025-01-13T21:24:33.863800907Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:24:33.863976 containerd[1456]: time="2025-01-13T21:24:33.863858034Z" level=info msg="Connect containerd service" Jan 13 21:24:33.863976 containerd[1456]: time="2025-01-13T21:24:33.863908830Z" level=info msg="using legacy CRI server" Jan 13 21:24:33.863976 containerd[1456]: time="2025-01-13T21:24:33.863919479Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:24:33.864037 containerd[1456]: time="2025-01-13T21:24:33.864013215Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:24:33.867643 containerd[1456]: time="2025-01-13T21:24:33.866923834Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:24:33.867643 containerd[1456]: time="2025-01-13T21:24:33.867234096Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:24:33.867643 containerd[1456]: time="2025-01-13T21:24:33.867320247Z" level=info msg="Start subscribing containerd event" Jan 13 21:24:33.867643 containerd[1456]: time="2025-01-13T21:24:33.867340545Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:24:33.867643 containerd[1456]: time="2025-01-13T21:24:33.867379328Z" level=info msg="Start recovering state" Jan 13 21:24:33.867643 containerd[1456]: time="2025-01-13T21:24:33.867486309Z" level=info msg="Start event monitor" Jan 13 21:24:33.867643 containerd[1456]: time="2025-01-13T21:24:33.867505565Z" level=info msg="Start snapshots syncer" Jan 13 21:24:33.867643 containerd[1456]: time="2025-01-13T21:24:33.867517657Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:24:33.867643 containerd[1456]: time="2025-01-13T21:24:33.867524821Z" level=info msg="Start streaming server" Jan 13 21:24:33.867918 containerd[1456]: time="2025-01-13T21:24:33.867897349Z" level=info msg="containerd successfully booted in 0.078289s" Jan 13 21:24:33.871394 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:24:33.961737 systemd[1519]: Queued start job for default target default.target. Jan 13 21:24:33.973574 systemd[1519]: Created slice app.slice - User Application Slice. Jan 13 21:24:33.973601 systemd[1519]: Reached target paths.target - Paths. Jan 13 21:24:33.973615 systemd[1519]: Reached target timers.target - Timers. Jan 13 21:24:33.975179 systemd[1519]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:24:33.976003 tar[1454]: linux-amd64/LICENSE Jan 13 21:24:33.976098 tar[1454]: linux-amd64/README.md Jan 13 21:24:33.990522 systemd[1519]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:24:33.990667 systemd[1519]: Reached target sockets.target - Sockets. Jan 13 21:24:33.990688 systemd[1519]: Reached target basic.target - Basic System. Jan 13 21:24:33.990726 systemd[1519]: Reached target default.target - Main User Target. Jan 13 21:24:33.990772 systemd[1519]: Startup finished in 138ms. 
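[Editor's note] containerd's "skip plugin" lines above are capability probes, not failures: the btrfs and zfs snapshotters refuse to load because /var/lib/containerd sits on ext4, and the CRI plugin's cni error only says /etc/cni/net.d is still empty at this point (a CNI add-on normally arrives later, after kubeadm runs). A rough sketch of both checks, assuming a Linux /proc/self/mounts and the standard CNI config directory:

```python
import glob
import os

def fstype_of(path):
    """Filesystem type of the mount containing `path` (longest-prefix match)."""
    best, best_fs = "", "unknown"
    with open("/proc/self/mounts") as mounts:
        for line in mounts:
            _, mnt, fs = line.split()[:3]
            if path.startswith(mnt) and len(mnt) > len(best):
                best, best_fs = mnt, fs
    return best_fs

root = "/var/lib/containerd"
fs = fstype_of(root)
if fs != "btrfs":
    print(f"skip btrfs snapshotter: {root} is {fs}")  # mirrors the log above

cni_dir = "/etc/cni/net.d"
if not glob.glob(os.path.join(cni_dir, "*.conf*")):
    print(f"cni not ready: no network config found in {cni_dir}")
```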
Jan 13 21:24:33.991103 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:24:33.992978 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 21:24:34.001465 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:24:34.065166 systemd[1]: Started sshd@1-10.0.0.97:22-10.0.0.1:54576.service - OpenSSH per-connection server daemon (10.0.0.1:54576). Jan 13 21:24:34.101532 sshd[1537]: Accepted publickey for core from 10.0.0.1 port 54576 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:24:34.103363 sshd[1537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:34.107551 systemd-logind[1442]: New session 2 of user core. Jan 13 21:24:34.122408 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 21:24:34.177584 sshd[1537]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:34.184879 systemd[1]: sshd@1-10.0.0.97:22-10.0.0.1:54576.service: Deactivated successfully. Jan 13 21:24:34.186525 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:24:34.188066 systemd-logind[1442]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:24:34.199500 systemd[1]: Started sshd@2-10.0.0.97:22-10.0.0.1:54588.service - OpenSSH per-connection server daemon (10.0.0.1:54588). Jan 13 21:24:34.201628 systemd-logind[1442]: Removed session 2. Jan 13 21:24:34.227768 sshd[1544]: Accepted publickey for core from 10.0.0.1 port 54588 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:24:34.229124 sshd[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:34.233398 systemd-logind[1442]: New session 3 of user core. Jan 13 21:24:34.244414 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:24:34.299349 sshd[1544]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:34.303604 systemd[1]: sshd@2-10.0.0.97:22-10.0.0.1:54588.service: Deactivated successfully. Jan 13 21:24:34.305591 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 21:24:34.306230 systemd-logind[1442]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:24:34.307062 systemd-logind[1442]: Removed session 3. Jan 13 21:24:34.474485 systemd-networkd[1391]: eth0: Gained IPv6LL Jan 13 21:24:34.478505 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:24:34.480488 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:24:34.493545 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 21:24:34.496325 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:24:34.498660 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:24:34.530706 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:24:34.532629 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 21:24:34.532844 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 21:24:34.535231 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:24:35.483678 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:24:35.485293 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:24:35.486564 systemd[1]: Startup finished in 738ms (kernel) + 6.285s (initrd) + 4.603s (userspace) = 11.628s. 
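[Editor's note] Every accepted login in this log carries the same public-key fingerprint (SHA256:zXff…). OpenSSH's SHA256 fingerprint is the unpadded base64 of the SHA-256 digest of the raw key blob, so it can be recomputed from an authorized_keys line; a minimal sketch (the key below is a placeholder, since the core user's actual key is not reproduced in the log):

```python
import base64
import hashlib

def ssh_fingerprint(pubkey_line):
    """SHA256 fingerprint of an OpenSSH public key line, as sshd logs it."""
    blob_b64 = pubkey_line.split()[1]       # "ssh-ed25519 AAAA... comment"
    blob_b64 += "=" * (-len(blob_b64) % 4)  # restore any stripped padding
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# placeholder key material, for illustration only
example = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGJx8p4cN8p0X5n1o9m7o2m3S9t5Yd7a0cVd3w3E2f4G demo@host"
print(ssh_fingerprint(example))
```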
Jan 13 21:24:35.498626 (kubelet)[1572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:24:36.030391 kubelet[1572]: E0113 21:24:36.030254 1572 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:24:36.035844 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:24:36.036052 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:24:36.036441 systemd[1]: kubelet.service: Consumed 1.414s CPU time. Jan 13 21:24:44.310036 systemd[1]: Started sshd@3-10.0.0.97:22-10.0.0.1:57588.service - OpenSSH per-connection server daemon (10.0.0.1:57588). Jan 13 21:24:44.341632 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 57588 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:24:44.343174 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:44.346831 systemd-logind[1442]: New session 4 of user core. Jan 13 21:24:44.357384 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:24:44.411157 sshd[1587]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:44.421970 systemd[1]: sshd@3-10.0.0.97:22-10.0.0.1:57588.service: Deactivated successfully. Jan 13 21:24:44.423592 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:24:44.424947 systemd-logind[1442]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:24:44.426192 systemd[1]: Started sshd@4-10.0.0.97:22-10.0.0.1:57598.service - OpenSSH per-connection server daemon (10.0.0.1:57598). Jan 13 21:24:44.426950 systemd-logind[1442]: Removed session 4. Jan 13 21:24:44.457098 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 57598 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:24:44.458512 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:44.462316 systemd-logind[1442]: New session 5 of user core. Jan 13 21:24:44.473396 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:24:44.522056 sshd[1594]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:44.541080 systemd[1]: sshd@4-10.0.0.97:22-10.0.0.1:57598.service: Deactivated successfully. Jan 13 21:24:44.542705 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:24:44.544006 systemd-logind[1442]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:24:44.555573 systemd[1]: Started sshd@5-10.0.0.97:22-10.0.0.1:57608.service - OpenSSH per-connection server daemon (10.0.0.1:57608). Jan 13 21:24:44.556540 systemd-logind[1442]: Removed session 5. Jan 13 21:24:44.581512 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 57608 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:24:44.582759 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:44.586216 systemd-logind[1442]: New session 6 of user core. Jan 13 21:24:44.597386 systemd[1]: Started session-6.scope - Session 6 of User core. 
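[Editor's note] The kubelet failure above is the expected state on a node where kubeadm has not yet written /var/lib/kubelet/config.yaml: the binary fails fast with exit status 1 and systemd's restart logic keeps retrying it (the "Scheduled restart job" lines later in the log). A sketch of that fail-fast contract, using the path from the error message:

```python
import os
import sys

CONFIG = "/var/lib/kubelet/config.yaml"  # path from the error above

def main():
    if not os.path.exists(CONFIG):
        # Exit non-zero; the unit's Restart= policy retries until
        # kubeadm init/join has written the file.
        print(f"failed to load Kubelet config file {CONFIG}", file=sys.stderr)
        return 1
    print("config present; kubelet would proceed to bootstrap")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```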
Jan 13 21:24:44.650241 sshd[1601]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:44.667039 systemd[1]: sshd@5-10.0.0.97:22-10.0.0.1:57608.service: Deactivated successfully. Jan 13 21:24:44.668667 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:24:44.670001 systemd-logind[1442]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:24:44.677509 systemd[1]: Started sshd@6-10.0.0.97:22-10.0.0.1:57624.service - OpenSSH per-connection server daemon (10.0.0.1:57624). Jan 13 21:24:44.678423 systemd-logind[1442]: Removed session 6. Jan 13 21:24:44.704540 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 57624 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:24:44.705910 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:44.709460 systemd-logind[1442]: New session 7 of user core. Jan 13 21:24:44.727382 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:24:44.784978 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:24:44.785328 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:24:44.801290 sudo[1611]: pam_unix(sudo:session): session closed for user root Jan 13 21:24:44.803174 sshd[1608]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:44.822385 systemd[1]: sshd@6-10.0.0.97:22-10.0.0.1:57624.service: Deactivated successfully. Jan 13 21:24:44.824392 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:24:44.825951 systemd-logind[1442]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:24:44.833580 systemd[1]: Started sshd@7-10.0.0.97:22-10.0.0.1:57634.service - OpenSSH per-connection server daemon (10.0.0.1:57634). Jan 13 21:24:44.834616 systemd-logind[1442]: Removed session 7. Jan 13 21:24:44.861457 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 57634 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:24:44.863072 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:44.867090 systemd-logind[1442]: New session 8 of user core. Jan 13 21:24:44.877394 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 21:24:44.929872 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:24:44.930300 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:24:44.933749 sudo[1620]: pam_unix(sudo:session): session closed for user root Jan 13 21:24:44.939742 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:24:44.940078 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:24:44.957522 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:24:44.959141 auditctl[1623]: No rules Jan 13 21:24:44.959570 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:24:44.959811 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:24:44.962414 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:24:44.991086 augenrules[1641]: No rules Jan 13 21:24:44.992849 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
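[Editor's note] The sudo sequence above is an install script resetting the audit configuration: set SELinux enforcing, delete the shipped rule files, then restart audit-rules, after which both auditctl and augenrules report "No rules". The resulting state can be confirmed with auditctl's list flag; a minimal check (requires root and the audit userspace tools):

```python
import subprocess

# `auditctl -l` prints the audit rules loaded in the kernel, or "No rules"
out = subprocess.run(["auditctl", "-l"], capture_output=True, text=True)
print(out.stdout.strip() or out.stderr.strip())  # expect: "No rules"
```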
Jan 13 21:24:44.994209 sudo[1619]: pam_unix(sudo:session): session closed for user root Jan 13 21:24:44.995979 sshd[1616]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:45.009068 systemd[1]: sshd@7-10.0.0.97:22-10.0.0.1:57634.service: Deactivated successfully. Jan 13 21:24:45.011258 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 21:24:45.013586 systemd-logind[1442]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:24:45.024537 systemd[1]: Started sshd@8-10.0.0.97:22-10.0.0.1:57646.service - OpenSSH per-connection server daemon (10.0.0.1:57646). Jan 13 21:24:45.025502 systemd-logind[1442]: Removed session 8. Jan 13 21:24:45.052026 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 57646 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:24:45.053486 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:45.057495 systemd-logind[1442]: New session 9 of user core. Jan 13 21:24:45.075447 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 21:24:45.128223 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:24:45.128588 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:24:45.416525 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 21:24:45.416678 (dockerd)[1671]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:24:45.731891 dockerd[1671]: time="2025-01-13T21:24:45.731752523Z" level=info msg="Starting up" Jan 13 21:24:45.944107 systemd[1]: var-lib-docker-metacopy\x2dcheck3335355391-merged.mount: Deactivated successfully. Jan 13 21:24:45.977515 dockerd[1671]: time="2025-01-13T21:24:45.977461885Z" level=info msg="Loading containers: start." Jan 13 21:24:46.103302 kernel: Initializing XFRM netlink socket Jan 13 21:24:46.136332 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:24:46.143490 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:24:46.181124 systemd-networkd[1391]: docker0: Link UP Jan 13 21:24:46.343458 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:24:46.348472 (kubelet)[1780]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:24:46.458674 kubelet[1780]: E0113 21:24:46.458541 1780 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:24:46.461926 dockerd[1671]: time="2025-01-13T21:24:46.461884260Z" level=info msg="Loading containers: done." Jan 13 21:24:46.465756 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:24:46.465977 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:24:46.476347 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck322269501-merged.mount: Deactivated successfully. 
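[Editor's note] The transient mounts above (metacopy-check, opaque-bug-check) are dockerd probing overlay2 behaviour with throwaway overlay mounts before committing to the storage driver. On kernels that expose it, the overlayfs metacopy default can also be read from the module parameter; a small check, assuming the overlay module is loaded:

```python
from pathlib import Path

param = Path("/sys/module/overlay/parameters/metacopy")
if param.exists():
    print("overlay metacopy default:", param.read_text().strip())  # "Y" or "N"
else:
    print("overlay module parameter not exposed on this kernel")
```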
Jan 13 21:24:46.480676 dockerd[1671]: time="2025-01-13T21:24:46.480633792Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:24:46.480780 dockerd[1671]: time="2025-01-13T21:24:46.480746013Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 13 21:24:46.480868 dockerd[1671]: time="2025-01-13T21:24:46.480846311Z" level=info msg="Daemon has completed initialization" Jan 13 21:24:46.517673 dockerd[1671]: time="2025-01-13T21:24:46.517592593Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:24:46.517799 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 21:24:47.434714 containerd[1456]: time="2025-01-13T21:24:47.434647078Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Jan 13 21:24:48.149600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1234890273.mount: Deactivated successfully. Jan 13 21:24:49.327504 containerd[1456]: time="2025-01-13T21:24:49.327438227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:49.328347 containerd[1456]: time="2025-01-13T21:24:49.328263224Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675642" Jan 13 21:24:49.329528 containerd[1456]: time="2025-01-13T21:24:49.329480697Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:49.332114 containerd[1456]: time="2025-01-13T21:24:49.332081455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:49.333927 containerd[1456]: time="2025-01-13T21:24:49.333894084Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 1.899177185s" Jan 13 21:24:49.333993 containerd[1456]: time="2025-01-13T21:24:49.333929951Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Jan 13 21:24:49.360946 containerd[1456]: time="2025-01-13T21:24:49.360880323Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Jan 13 21:24:51.139854 containerd[1456]: time="2025-01-13T21:24:51.139783732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:51.141664 containerd[1456]: time="2025-01-13T21:24:51.141576373Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606409" Jan 13 21:24:51.143034 containerd[1456]: time="2025-01-13T21:24:51.142991567Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:51.147651 containerd[1456]: time="2025-01-13T21:24:51.147580103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:51.148815 containerd[1456]: time="2025-01-13T21:24:51.148779732Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 1.787856369s" Jan 13 21:24:51.148854 containerd[1456]: time="2025-01-13T21:24:51.148821601Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Jan 13 21:24:51.173504 containerd[1456]: time="2025-01-13T21:24:51.173438887Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Jan 13 21:24:52.572212 containerd[1456]: time="2025-01-13T21:24:52.572140662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:52.573064 containerd[1456]: time="2025-01-13T21:24:52.573021414Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783035" Jan 13 21:24:52.574262 containerd[1456]: time="2025-01-13T21:24:52.574214221Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:52.576887 containerd[1456]: time="2025-01-13T21:24:52.576853721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:52.578416 containerd[1456]: time="2025-01-13T21:24:52.578380875Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.404895411s" Jan 13 21:24:52.578458 containerd[1456]: time="2025-01-13T21:24:52.578415580Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Jan 13 21:24:52.605169 containerd[1456]: time="2025-01-13T21:24:52.605130320Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 13 21:24:54.221463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2596045159.mount: Deactivated successfully. 
Jan 13 21:24:54.631880 containerd[1456]: time="2025-01-13T21:24:54.631733113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:54.632551 containerd[1456]: time="2025-01-13T21:24:54.632460828Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470" Jan 13 21:24:54.633661 containerd[1456]: time="2025-01-13T21:24:54.633622376Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:54.635606 containerd[1456]: time="2025-01-13T21:24:54.635560651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:54.636257 containerd[1456]: time="2025-01-13T21:24:54.636214367Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 2.03087201s" Jan 13 21:24:54.636257 containerd[1456]: time="2025-01-13T21:24:54.636242650Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Jan 13 21:24:54.664998 containerd[1456]: time="2025-01-13T21:24:54.664945538Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 21:24:55.173055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2622530150.mount: Deactivated successfully. 
Jan 13 21:24:56.279579 containerd[1456]: time="2025-01-13T21:24:56.279516495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:56.280765 containerd[1456]: time="2025-01-13T21:24:56.280724310Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 13 21:24:56.282291 containerd[1456]: time="2025-01-13T21:24:56.282244321Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:56.285028 containerd[1456]: time="2025-01-13T21:24:56.284978258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:56.286243 containerd[1456]: time="2025-01-13T21:24:56.286185712Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.621198325s" Jan 13 21:24:56.286243 containerd[1456]: time="2025-01-13T21:24:56.286227521Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 21:24:56.306267 containerd[1456]: time="2025-01-13T21:24:56.306210035Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 21:24:56.716205 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 21:24:56.736447 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:24:56.886702 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:24:56.891099 (kubelet)[1997]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:24:56.960791 kubelet[1997]: E0113 21:24:56.960728 1997 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:24:56.965376 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:24:56.965592 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:24:57.170023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1058461803.mount: Deactivated successfully. 
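[Editor's note] The kubelet's restart counter ticking up (1 at 21:24:46, 2 at 21:24:56) shows systemd re-running the failed unit on a roughly ten-second cadence, consistent with a RestartSec of about 10 s in the unit file (an inference from the timestamps, not stated in the log). The spacing can be checked directly from the journal timestamps:

```python
from datetime import datetime

FMT = "%b %d %H:%M:%S.%f"
restarts = ["Jan 13 21:24:46.136332", "Jan 13 21:24:56.716205"]  # from the log
t1, t2 = (datetime.strptime(s, FMT) for s in restarts)
print(f"restart interval: {(t2 - t1).total_seconds():.1f}s")  # ~10.6s
```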
Jan 13 21:24:57.175920 containerd[1456]: time="2025-01-13T21:24:57.175857645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:57.176634 containerd[1456]: time="2025-01-13T21:24:57.176565743Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 13 21:24:57.177873 containerd[1456]: time="2025-01-13T21:24:57.177841104Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:57.180538 containerd[1456]: time="2025-01-13T21:24:57.180502395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:57.181489 containerd[1456]: time="2025-01-13T21:24:57.181428642Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 875.16134ms" Jan 13 21:24:57.181534 containerd[1456]: time="2025-01-13T21:24:57.181490298Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 21:24:57.204637 containerd[1456]: time="2025-01-13T21:24:57.204590659Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 13 21:24:57.914018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3420199242.mount: Deactivated successfully. Jan 13 21:24:59.754957 containerd[1456]: time="2025-01-13T21:24:59.754895411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:59.755584 containerd[1456]: time="2025-01-13T21:24:59.755510214Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 13 21:24:59.756907 containerd[1456]: time="2025-01-13T21:24:59.756872619Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:59.759916 containerd[1456]: time="2025-01-13T21:24:59.759882404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:24:59.761062 containerd[1456]: time="2025-01-13T21:24:59.761022912Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.556389863s" Jan 13 21:24:59.761098 containerd[1456]: time="2025-01-13T21:24:59.761063248Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 13 21:25:01.834529 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 21:25:01.845521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:25:01.862148 systemd[1]: Reloading requested from client PID 2140 ('systemctl') (unit session-9.scope)... Jan 13 21:25:01.862165 systemd[1]: Reloading... Jan 13 21:25:01.939971 zram_generator::config[2179]: No configuration found. Jan 13 21:25:02.116876 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:25:02.191796 systemd[1]: Reloading finished in 329 ms. Jan 13 21:25:02.237378 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:25:02.237474 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 21:25:02.237735 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:25:02.239269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:25:02.389807 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:25:02.409612 (kubelet)[2227]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:25:02.448689 kubelet[2227]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:25:02.448689 kubelet[2227]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:25:02.448689 kubelet[2227]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:25:02.449698 kubelet[2227]: I0113 21:25:02.449653 2227 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:25:03.017126 kubelet[2227]: I0113 21:25:03.017080 2227 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 21:25:03.017126 kubelet[2227]: I0113 21:25:03.017111 2227 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:25:03.017358 kubelet[2227]: I0113 21:25:03.017337 2227 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 21:25:03.031775 kubelet[2227]: I0113 21:25:03.031735 2227 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:25:03.032421 kubelet[2227]: E0113 21:25:03.032394 2227 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 21:25:03.041802 kubelet[2227]: I0113 21:25:03.041778 2227 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:25:03.042896 kubelet[2227]: I0113 21:25:03.042864 2227 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:25:03.043051 kubelet[2227]: I0113 21:25:03.042891 2227 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:25:03.043146 kubelet[2227]: I0113 21:25:03.043062 2227 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:25:03.043146 kubelet[2227]: I0113 21:25:03.043072 2227 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:25:03.043785 kubelet[2227]: I0113 21:25:03.043759 2227 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:25:03.044395 kubelet[2227]: I0113 21:25:03.044373 2227 kubelet.go:400] "Attempting to sync node with API server" Jan 13 21:25:03.044395 kubelet[2227]: I0113 21:25:03.044390 2227 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:25:03.044525 kubelet[2227]: I0113 21:25:03.044410 2227 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:25:03.044525 kubelet[2227]: I0113 21:25:03.044428 2227 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:25:03.046619 kubelet[2227]: W0113 21:25:03.046562 2227 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 21:25:03.046817 kubelet[2227]: E0113 21:25:03.046684 2227 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 21:25:03.047839 kubelet[2227]: W0113 21:25:03.047786 2227 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: 
Get "https://10.0.0.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 21:25:03.047839 kubelet[2227]: E0113 21:25:03.047825 2227 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 21:25:03.049006 kubelet[2227]: I0113 21:25:03.048984 2227 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:25:03.050146 kubelet[2227]: I0113 21:25:03.050122 2227 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:25:03.050200 kubelet[2227]: W0113 21:25:03.050187 2227 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 21:25:03.052052 kubelet[2227]: I0113 21:25:03.051906 2227 server.go:1264] "Started kubelet" Jan 13 21:25:03.053574 kubelet[2227]: I0113 21:25:03.053494 2227 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:25:03.056369 kubelet[2227]: I0113 21:25:03.056115 2227 server.go:455] "Adding debug handlers to kubelet server" Jan 13 21:25:03.057814 kubelet[2227]: I0113 21:25:03.057456 2227 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:25:03.057814 kubelet[2227]: E0113 21:25:03.057426 2227 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.97:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.97:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5d95840fbc2d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 21:25:03.051881517 +0000 UTC m=+0.638363200,LastTimestamp:2025-01-13 21:25:03.051881517 +0000 UTC m=+0.638363200,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 21:25:03.057814 kubelet[2227]: I0113 21:25:03.057691 2227 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:25:03.057814 kubelet[2227]: I0113 21:25:03.057723 2227 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:25:03.057814 kubelet[2227]: I0113 21:25:03.057788 2227 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:25:03.057987 kubelet[2227]: I0113 21:25:03.057876 2227 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 21:25:03.057987 kubelet[2227]: I0113 21:25:03.057958 2227 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:25:03.058368 kubelet[2227]: W0113 21:25:03.058322 2227 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 21:25:03.058402 kubelet[2227]: E0113 21:25:03.058383 2227 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 21:25:03.059147 kubelet[2227]: E0113 21:25:03.058923 2227 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="200ms" Jan 13 21:25:03.059711 kubelet[2227]: I0113 21:25:03.059688 2227 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:25:03.059789 kubelet[2227]: I0113 21:25:03.059778 2227 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:25:03.060202 kubelet[2227]: E0113 21:25:03.060175 2227 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:25:03.060616 kubelet[2227]: I0113 21:25:03.060596 2227 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:25:03.072984 kubelet[2227]: I0113 21:25:03.072943 2227 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:25:03.074123 kubelet[2227]: I0113 21:25:03.074100 2227 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:25:03.074174 kubelet[2227]: I0113 21:25:03.074134 2227 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:25:03.074174 kubelet[2227]: I0113 21:25:03.074154 2227 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 21:25:03.074238 kubelet[2227]: E0113 21:25:03.074201 2227 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:25:03.078118 kubelet[2227]: W0113 21:25:03.078069 2227 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 21:25:03.078118 kubelet[2227]: E0113 21:25:03.078115 2227 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 21:25:03.078973 kubelet[2227]: I0113 21:25:03.078948 2227 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:25:03.078973 kubelet[2227]: I0113 21:25:03.078962 2227 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:25:03.079040 kubelet[2227]: I0113 21:25:03.078978 2227 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:25:03.159176 kubelet[2227]: I0113 21:25:03.159139 2227 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:25:03.159501 kubelet[2227]: E0113 21:25:03.159474 2227 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Jan 13 21:25:03.174775 kubelet[2227]: E0113 21:25:03.174741 2227 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" 
Jan 13 21:25:03.260582 kubelet[2227]: E0113 21:25:03.260544 2227 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="400ms" Jan 13 21:25:03.360786 kubelet[2227]: I0113 21:25:03.360679 2227 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:25:03.360956 kubelet[2227]: E0113 21:25:03.360929 2227 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Jan 13 21:25:03.375073 kubelet[2227]: E0113 21:25:03.375038 2227 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:25:03.661789 kubelet[2227]: E0113 21:25:03.661675 2227 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="800ms" Jan 13 21:25:03.762860 kubelet[2227]: I0113 21:25:03.762837 2227 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:25:03.763070 kubelet[2227]: E0113 21:25:03.763047 2227 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Jan 13 21:25:03.775136 kubelet[2227]: E0113 21:25:03.775079 2227 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:25:03.843377 kubelet[2227]: I0113 21:25:03.843342 2227 policy_none.go:49] "None policy: Start" Jan 13 21:25:03.844050 kubelet[2227]: I0113 21:25:03.844020 2227 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:25:03.844050 kubelet[2227]: I0113 21:25:03.844053 2227 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:25:03.907947 kubelet[2227]: W0113 21:25:03.907881 2227 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 21:25:03.907991 kubelet[2227]: E0113 21:25:03.907950 2227 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 21:25:03.967070 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 21:25:03.985071 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 21:25:03.988365 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
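
systemd has just created the three parent slices the kubelet uses to group pods by QoS class: kubepods.slice, kubepods-burstable.slice, and kubepods-besteffort.slice. A small sketch of that mapping, with the slice names copied from the log; treating kubepods.slice as the Guaranteed parent is an assumption based on the usual layout, where Guaranteed pods nest directly under the root pod slice:

    package main

    import "fmt"

    // qosParentSlice maps a pod's QoS class to the systemd slice it is
    // parented under, using the unit names visible in the log above.
    func qosParentSlice(qos string) string {
        switch qos {
        case "Guaranteed":
            return "kubepods.slice" // assumed: Guaranteed pods sit directly under the root slice
        case "Burstable":
            return "kubepods-burstable.slice"
        case "BestEffort":
            return "kubepods-besteffort.slice"
        }
        return ""
    }

    func main() {
        for _, qos := range []string{"Guaranteed", "Burstable", "BestEffort"} {
            fmt.Printf("%-10s -> %s\n", qos, qosParentSlice(qos))
        }
    }
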
Jan 13 21:25:03.998095 kubelet[2227]: I0113 21:25:03.998054 2227 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:25:03.998338 kubelet[2227]: I0113 21:25:03.998298 2227 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:25:03.998532 kubelet[2227]: I0113 21:25:03.998419 2227 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:25:03.999737 kubelet[2227]: E0113 21:25:03.999717 2227 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 21:25:04.276756 kubelet[2227]: W0113 21:25:04.276626 2227 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 21:25:04.276756 kubelet[2227]: E0113 21:25:04.276679 2227 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 21:25:04.348303 kubelet[2227]: W0113 21:25:04.348181 2227 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 21:25:04.348303 kubelet[2227]: E0113 21:25:04.348267 2227 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 21:25:04.447199 kubelet[2227]: W0113 21:25:04.447154 2227 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 21:25:04.447199 kubelet[2227]: E0113 21:25:04.447198 2227 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 21:25:04.462780 kubelet[2227]: E0113 21:25:04.462739 2227 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="1.6s" Jan 13 21:25:04.564755 kubelet[2227]: I0113 21:25:04.564641 2227 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:25:04.564972 kubelet[2227]: E0113 21:25:04.564947 2227 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Jan 13 21:25:04.576182 kubelet[2227]: I0113 21:25:04.576151 2227 topology_manager.go:215] "Topology Admit Handler" podUID="1dfb3bd052ef84556c429853a9992cf3" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 21:25:04.576951 kubelet[2227]: I0113 21:25:04.576925 2227 
topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 21:25:04.577603 kubelet[2227]: I0113 21:25:04.577584 2227 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 21:25:04.583292 systemd[1]: Created slice kubepods-burstable-pod1dfb3bd052ef84556c429853a9992cf3.slice - libcontainer container kubepods-burstable-pod1dfb3bd052ef84556c429853a9992cf3.slice. Jan 13 21:25:04.615970 systemd[1]: Created slice kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice - libcontainer container kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice. Jan 13 21:25:04.624989 systemd[1]: Created slice kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice - libcontainer container kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice. Jan 13 21:25:04.667032 kubelet[2227]: I0113 21:25:04.666961 2227 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:25:04.667032 kubelet[2227]: I0113 21:25:04.667019 2227 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:25:04.667032 kubelet[2227]: I0113 21:25:04.667045 2227 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Jan 13 21:25:04.667567 kubelet[2227]: I0113 21:25:04.667063 2227 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1dfb3bd052ef84556c429853a9992cf3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1dfb3bd052ef84556c429853a9992cf3\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:25:04.667567 kubelet[2227]: I0113 21:25:04.667104 2227 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:25:04.667567 kubelet[2227]: I0113 21:25:04.667137 2227 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:25:04.667567 kubelet[2227]: I0113 21:25:04.667159 2227 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/1dfb3bd052ef84556c429853a9992cf3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1dfb3bd052ef84556c429853a9992cf3\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:25:04.667567 kubelet[2227]: I0113 21:25:04.667179 2227 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1dfb3bd052ef84556c429853a9992cf3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1dfb3bd052ef84556c429853a9992cf3\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:25:04.667717 kubelet[2227]: I0113 21:25:04.667197 2227 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:25:04.914155 kubelet[2227]: E0113 21:25:04.914037 2227 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:04.914813 containerd[1456]: time="2025-01-13T21:25:04.914768916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1dfb3bd052ef84556c429853a9992cf3,Namespace:kube-system,Attempt:0,}" Jan 13 21:25:04.923952 kubelet[2227]: E0113 21:25:04.923926 2227 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:04.924263 containerd[1456]: time="2025-01-13T21:25:04.924219479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,}" Jan 13 21:25:04.927496 kubelet[2227]: E0113 21:25:04.927473 2227 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:04.927794 containerd[1456]: time="2025-01-13T21:25:04.927766030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,}" Jan 13 21:25:05.114251 kubelet[2227]: E0113 21:25:05.114205 2227 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 21:25:05.887808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount463655447.mount: Deactivated successfully. 
Jan 13 21:25:05.894126 containerd[1456]: time="2025-01-13T21:25:05.894080294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:25:05.895002 containerd[1456]: time="2025-01-13T21:25:05.894957730Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:25:05.895967 containerd[1456]: time="2025-01-13T21:25:05.895918792Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:25:05.896798 containerd[1456]: time="2025-01-13T21:25:05.896767404Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:25:05.897846 containerd[1456]: time="2025-01-13T21:25:05.897801533Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:25:05.898874 containerd[1456]: time="2025-01-13T21:25:05.898834019Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:25:05.899830 containerd[1456]: time="2025-01-13T21:25:05.899775445Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 21:25:05.902051 containerd[1456]: time="2025-01-13T21:25:05.902012390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:25:05.903953 containerd[1456]: time="2025-01-13T21:25:05.903924015Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 989.060392ms" Jan 13 21:25:05.904507 containerd[1456]: time="2025-01-13T21:25:05.904478384Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 980.199524ms" Jan 13 21:25:05.905144 containerd[1456]: time="2025-01-13T21:25:05.905107164Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 977.283496ms" Jan 13 21:25:06.045840 containerd[1456]: time="2025-01-13T21:25:06.045760391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:25:06.045840 containerd[1456]: time="2025-01-13T21:25:06.045799186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:25:06.045840 containerd[1456]: time="2025-01-13T21:25:06.045809065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:06.046331 containerd[1456]: time="2025-01-13T21:25:06.045874661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:06.047725 containerd[1456]: time="2025-01-13T21:25:06.046651345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:25:06.047725 containerd[1456]: time="2025-01-13T21:25:06.046692274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:25:06.047725 containerd[1456]: time="2025-01-13T21:25:06.046706731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:06.047725 containerd[1456]: time="2025-01-13T21:25:06.046774262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:06.050338 containerd[1456]: time="2025-01-13T21:25:06.050185013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:25:06.050338 containerd[1456]: time="2025-01-13T21:25:06.050266338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:25:06.050338 containerd[1456]: time="2025-01-13T21:25:06.050300945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:06.050610 containerd[1456]: time="2025-01-13T21:25:06.050544123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:25:06.063497 kubelet[2227]: E0113 21:25:06.063432 2227 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="3.2s" Jan 13 21:25:06.067439 systemd[1]: Started cri-containerd-95935e697737ec8edc11f08702a9a96f80f5aeef869ef907454d084857520e47.scope - libcontainer container 95935e697737ec8edc11f08702a9a96f80f5aeef869ef907454d084857520e47. Jan 13 21:25:06.072389 systemd[1]: Started cri-containerd-9537468fdcee278483e3cab1ab55c00bba56d648e77b5c1c1deb3b92b8ab6b4d.scope - libcontainer container 9537468fdcee278483e3cab1ab55c00bba56d648e77b5c1c1deb3b92b8ab6b4d. Jan 13 21:25:06.074459 systemd[1]: Started cri-containerd-c5266366636d37b426929dcd9976d4d0492daa42d2a274d09f6352bbdc284ee4.scope - libcontainer container c5266366636d37b426929dcd9976d4d0492daa42d2a274d09f6352bbdc284ee4. 
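
Worth noting across this section is the lease controller's retry interval: "Failed to ensure lease exists, will retry" is logged with interval="200ms", then 400ms, 800ms, 1.6s, and now 3.2s, a straight doubling on every failed attempt while the API server stays unreachable. A minimal sketch reproducing just that visible progression; whether and where kubelet caps the interval is not shown in this log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        interval := 200 * time.Millisecond // first retry interval in the log
        for i := 1; i <= 5; i++ {
            fmt.Printf("retry %d after %v\n", i, interval) // 200ms, 400ms, 800ms, 1.6s, 3.2s
            interval *= 2 // doubled after each failure
        }
    }
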
Jan 13 21:25:06.115023 containerd[1456]: time="2025-01-13T21:25:06.114970231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,} returns sandbox id \"95935e697737ec8edc11f08702a9a96f80f5aeef869ef907454d084857520e47\"" Jan 13 21:25:06.116840 kubelet[2227]: E0113 21:25:06.116615 2227 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:06.119384 containerd[1456]: time="2025-01-13T21:25:06.119200699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1dfb3bd052ef84556c429853a9992cf3,Namespace:kube-system,Attempt:0,} returns sandbox id \"9537468fdcee278483e3cab1ab55c00bba56d648e77b5c1c1deb3b92b8ab6b4d\"" Jan 13 21:25:06.120964 containerd[1456]: time="2025-01-13T21:25:06.120937600Z" level=info msg="CreateContainer within sandbox \"95935e697737ec8edc11f08702a9a96f80f5aeef869ef907454d084857520e47\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:25:06.121365 kubelet[2227]: E0113 21:25:06.121317 2227 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:06.124199 containerd[1456]: time="2025-01-13T21:25:06.124165749Z" level=info msg="CreateContainer within sandbox \"9537468fdcee278483e3cab1ab55c00bba56d648e77b5c1c1deb3b92b8ab6b4d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:25:06.126481 containerd[1456]: time="2025-01-13T21:25:06.126454001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5266366636d37b426929dcd9976d4d0492daa42d2a274d09f6352bbdc284ee4\"" Jan 13 21:25:06.127318 kubelet[2227]: E0113 21:25:06.127102 2227 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:06.128822 containerd[1456]: time="2025-01-13T21:25:06.128794994Z" level=info msg="CreateContainer within sandbox \"c5266366636d37b426929dcd9976d4d0492daa42d2a274d09f6352bbdc284ee4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:25:06.143833 containerd[1456]: time="2025-01-13T21:25:06.143757877Z" level=info msg="CreateContainer within sandbox \"95935e697737ec8edc11f08702a9a96f80f5aeef869ef907454d084857520e47\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"39c1b122fc2c17c4d2a2b539586e501832d8ef77901fddd45c5e5703ffe228be\"" Jan 13 21:25:06.144500 containerd[1456]: time="2025-01-13T21:25:06.144459336Z" level=info msg="StartContainer for \"39c1b122fc2c17c4d2a2b539586e501832d8ef77901fddd45c5e5703ffe228be\"" Jan 13 21:25:06.147265 containerd[1456]: time="2025-01-13T21:25:06.147237089Z" level=info msg="CreateContainer within sandbox \"9537468fdcee278483e3cab1ab55c00bba56d648e77b5c1c1deb3b92b8ab6b4d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7af87404118f61f8a8888c86692709288b0d9261179d09b57631918b41c0ebc2\"" Jan 13 21:25:06.147778 containerd[1456]: time="2025-01-13T21:25:06.147636557Z" level=info msg="StartContainer for \"7af87404118f61f8a8888c86692709288b0d9261179d09b57631918b41c0ebc2\"" Jan 13 21:25:06.150485 
containerd[1456]: time="2025-01-13T21:25:06.150449899Z" level=info msg="CreateContainer within sandbox \"c5266366636d37b426929dcd9976d4d0492daa42d2a274d09f6352bbdc284ee4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6ad7dcaefbaed469cb63aa4f6dbd931c4c18d438a1288a71ac84fde188759a2c\"" Jan 13 21:25:06.150955 containerd[1456]: time="2025-01-13T21:25:06.150897521Z" level=info msg="StartContainer for \"6ad7dcaefbaed469cb63aa4f6dbd931c4c18d438a1288a71ac84fde188759a2c\"" Jan 13 21:25:06.166916 kubelet[2227]: I0113 21:25:06.166561 2227 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:25:06.166916 kubelet[2227]: E0113 21:25:06.166884 2227 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Jan 13 21:25:06.175462 systemd[1]: Started cri-containerd-39c1b122fc2c17c4d2a2b539586e501832d8ef77901fddd45c5e5703ffe228be.scope - libcontainer container 39c1b122fc2c17c4d2a2b539586e501832d8ef77901fddd45c5e5703ffe228be. Jan 13 21:25:06.186462 systemd[1]: Started cri-containerd-6ad7dcaefbaed469cb63aa4f6dbd931c4c18d438a1288a71ac84fde188759a2c.scope - libcontainer container 6ad7dcaefbaed469cb63aa4f6dbd931c4c18d438a1288a71ac84fde188759a2c. Jan 13 21:25:06.187981 systemd[1]: Started cri-containerd-7af87404118f61f8a8888c86692709288b0d9261179d09b57631918b41c0ebc2.scope - libcontainer container 7af87404118f61f8a8888c86692709288b0d9261179d09b57631918b41c0ebc2. Jan 13 21:25:06.229623 containerd[1456]: time="2025-01-13T21:25:06.229564081Z" level=info msg="StartContainer for \"39c1b122fc2c17c4d2a2b539586e501832d8ef77901fddd45c5e5703ffe228be\" returns successfully" Jan 13 21:25:06.235305 containerd[1456]: time="2025-01-13T21:25:06.233854233Z" level=info msg="StartContainer for \"7af87404118f61f8a8888c86692709288b0d9261179d09b57631918b41c0ebc2\" returns successfully" Jan 13 21:25:06.235305 containerd[1456]: time="2025-01-13T21:25:06.233912295Z" level=info msg="StartContainer for \"6ad7dcaefbaed469cb63aa4f6dbd931c4c18d438a1288a71ac84fde188759a2c\" returns successfully" Jan 13 21:25:06.267289 kubelet[2227]: W0113 21:25:06.267203 2227 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 21:25:06.267289 kubelet[2227]: E0113 21:25:06.267298 2227 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jan 13 21:25:07.096787 kubelet[2227]: E0113 21:25:07.096743 2227 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:07.097829 kubelet[2227]: E0113 21:25:07.097570 2227 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:07.098824 kubelet[2227]: E0113 21:25:07.098800 2227 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:08.100905 
kubelet[2227]: E0113 21:25:08.100847 2227 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:08.666549 kubelet[2227]: E0113 21:25:08.666504 2227 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 13 21:25:09.014681 kubelet[2227]: E0113 21:25:09.014570 2227 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 13 21:25:09.091343 kubelet[2227]: I0113 21:25:09.091321 2227 apiserver.go:52] "Watching apiserver" Jan 13 21:25:09.158200 kubelet[2227]: I0113 21:25:09.158164 2227 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 21:25:09.267025 kubelet[2227]: E0113 21:25:09.266895 2227 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 13 21:25:09.368550 kubelet[2227]: I0113 21:25:09.368522 2227 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:25:09.373900 kubelet[2227]: I0113 21:25:09.373757 2227 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 21:25:10.327353 systemd[1]: Reloading requested from client PID 2508 ('systemctl') (unit session-9.scope)... Jan 13 21:25:10.327367 systemd[1]: Reloading... Jan 13 21:25:10.406324 zram_generator::config[2550]: No configuration found. Jan 13 21:25:10.508816 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:25:10.597138 systemd[1]: Reloading finished in 269 ms. Jan 13 21:25:10.643414 kubelet[2227]: I0113 21:25:10.643376 2227 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:25:10.643617 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:25:10.666472 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:25:10.666775 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:25:10.666832 systemd[1]: kubelet.service: Consumed 1.100s CPU time, 117.9M memory peak, 0B memory swap peak. Jan 13 21:25:10.679533 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:25:10.826912 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:25:10.831989 (kubelet)[2592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:25:10.870011 kubelet[2592]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:25:10.870011 kubelet[2592]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:25:10.870011 kubelet[2592]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:25:10.870011 kubelet[2592]: I0113 21:25:10.869980 2592 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:25:10.874502 kubelet[2592]: I0113 21:25:10.874472 2592 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 21:25:10.874502 kubelet[2592]: I0113 21:25:10.874491 2592 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:25:10.874653 kubelet[2592]: I0113 21:25:10.874633 2592 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 21:25:10.875754 kubelet[2592]: I0113 21:25:10.875727 2592 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 21:25:10.876659 kubelet[2592]: I0113 21:25:10.876628 2592 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:25:10.883045 kubelet[2592]: I0113 21:25:10.883021 2592 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:25:10.884578 kubelet[2592]: I0113 21:25:10.884540 2592 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:25:10.884715 kubelet[2592]: I0113 21:25:10.884569 2592 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:25:10.884793 kubelet[2592]: I0113 21:25:10.884718 2592 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:25:10.884793 kubelet[2592]: I0113 21:25:10.884728 2592 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:25:10.884793 kubelet[2592]: I0113 21:25:10.884770 2592 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:25:10.884867 kubelet[2592]: I0113 21:25:10.884848 2592 kubelet.go:400] "Attempting to sync node with API server" Jan 13 
21:25:10.884867 kubelet[2592]: I0113 21:25:10.884862 2592 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:25:10.884910 kubelet[2592]: I0113 21:25:10.884879 2592 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:25:10.884910 kubelet[2592]: I0113 21:25:10.884896 2592 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:25:10.889312 kubelet[2592]: I0113 21:25:10.888002 2592 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:25:10.889312 kubelet[2592]: I0113 21:25:10.888232 2592 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:25:10.889312 kubelet[2592]: I0113 21:25:10.889076 2592 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:25:10.889473 kubelet[2592]: I0113 21:25:10.889456 2592 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:25:10.890111 kubelet[2592]: I0113 21:25:10.889576 2592 server.go:1264] "Started kubelet" Jan 13 21:25:10.890111 kubelet[2592]: I0113 21:25:10.889889 2592 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:25:10.891828 kubelet[2592]: I0113 21:25:10.891377 2592 server.go:455] "Adding debug handlers to kubelet server" Jan 13 21:25:10.894552 kubelet[2592]: I0113 21:25:10.894519 2592 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:25:10.897401 kubelet[2592]: I0113 21:25:10.895814 2592 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:25:10.897401 kubelet[2592]: I0113 21:25:10.895908 2592 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 21:25:10.897401 kubelet[2592]: I0113 21:25:10.896063 2592 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:25:10.898762 kubelet[2592]: I0113 21:25:10.897479 2592 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:25:10.898762 kubelet[2592]: I0113 21:25:10.897581 2592 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:25:10.899039 kubelet[2592]: E0113 21:25:10.899005 2592 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:25:10.900263 kubelet[2592]: I0113 21:25:10.899699 2592 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:25:10.904871 kubelet[2592]: I0113 21:25:10.904833 2592 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:25:10.905994 kubelet[2592]: I0113 21:25:10.905964 2592 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:25:10.906041 kubelet[2592]: I0113 21:25:10.905998 2592 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:25:10.906041 kubelet[2592]: I0113 21:25:10.906015 2592 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 21:25:10.906105 kubelet[2592]: E0113 21:25:10.906053 2592 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:25:10.933179 kubelet[2592]: I0113 21:25:10.933144 2592 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:25:10.933179 kubelet[2592]: I0113 21:25:10.933166 2592 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:25:10.933179 kubelet[2592]: I0113 21:25:10.933186 2592 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:25:10.933417 kubelet[2592]: I0113 21:25:10.933378 2592 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 21:25:10.933417 kubelet[2592]: I0113 21:25:10.933391 2592 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 21:25:10.933417 kubelet[2592]: I0113 21:25:10.933413 2592 policy_none.go:49] "None policy: Start" Jan 13 21:25:10.933876 kubelet[2592]: I0113 21:25:10.933861 2592 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:25:10.933926 kubelet[2592]: I0113 21:25:10.933879 2592 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:25:10.934034 kubelet[2592]: I0113 21:25:10.934015 2592 state_mem.go:75] "Updated machine memory state" Jan 13 21:25:10.937829 kubelet[2592]: I0113 21:25:10.937771 2592 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:25:10.938019 kubelet[2592]: I0113 21:25:10.937980 2592 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:25:10.938118 kubelet[2592]: I0113 21:25:10.938097 2592 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:25:11.000242 kubelet[2592]: I0113 21:25:11.000202 2592 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:25:11.006454 kubelet[2592]: I0113 21:25:11.006415 2592 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 21:25:11.006542 kubelet[2592]: I0113 21:25:11.006521 2592 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 21:25:11.006716 kubelet[2592]: I0113 21:25:11.006699 2592 topology_manager.go:215] "Topology Admit Handler" podUID="1dfb3bd052ef84556c429853a9992cf3" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 21:25:11.198974 kubelet[2592]: I0113 21:25:11.197945 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:25:11.198974 kubelet[2592]: I0113 21:25:11.198003 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod 
\"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:25:11.198974 kubelet[2592]: I0113 21:25:11.198031 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Jan 13 21:25:11.198974 kubelet[2592]: I0113 21:25:11.198051 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1dfb3bd052ef84556c429853a9992cf3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1dfb3bd052ef84556c429853a9992cf3\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:25:11.198974 kubelet[2592]: I0113 21:25:11.198081 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:25:11.199243 kubelet[2592]: I0113 21:25:11.198100 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:25:11.199243 kubelet[2592]: I0113 21:25:11.198118 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1dfb3bd052ef84556c429853a9992cf3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1dfb3bd052ef84556c429853a9992cf3\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:25:11.199243 kubelet[2592]: I0113 21:25:11.198137 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1dfb3bd052ef84556c429853a9992cf3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1dfb3bd052ef84556c429853a9992cf3\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:25:11.199243 kubelet[2592]: I0113 21:25:11.198158 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:25:11.244526 kubelet[2592]: I0113 21:25:11.244186 2592 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 13 21:25:11.244526 kubelet[2592]: I0113 21:25:11.244313 2592 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 21:25:11.500243 kubelet[2592]: E0113 21:25:11.500104 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:11.545678 kubelet[2592]: E0113 21:25:11.545643 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:11.545857 kubelet[2592]: E0113 21:25:11.545826 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:11.885771 kubelet[2592]: I0113 21:25:11.885662 2592 apiserver.go:52] "Watching apiserver" Jan 13 21:25:11.896794 kubelet[2592]: I0113 21:25:11.896764 2592 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 21:25:11.916669 kubelet[2592]: E0113 21:25:11.916623 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:11.917528 kubelet[2592]: E0113 21:25:11.917504 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:12.163957 kubelet[2592]: E0113 21:25:12.163551 2592 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 13 21:25:12.164060 kubelet[2592]: E0113 21:25:12.163969 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:12.303694 kubelet[2592]: I0113 21:25:12.303630 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.3036068379999999 podStartE2EDuration="1.303606838s" podCreationTimestamp="2025-01-13 21:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:25:12.303578935 +0000 UTC m=+1.466916398" watchObservedRunningTime="2025-01-13 21:25:12.303606838 +0000 UTC m=+1.466944291" Jan 13 21:25:12.303880 kubelet[2592]: I0113 21:25:12.303770 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.30376481 podStartE2EDuration="1.30376481s" podCreationTimestamp="2025-01-13 21:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:25:12.240559117 +0000 UTC m=+1.403896570" watchObservedRunningTime="2025-01-13 21:25:12.30376481 +0000 UTC m=+1.467102263" Jan 13 21:25:12.317687 kubelet[2592]: I0113 21:25:12.317610 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.317591973 podStartE2EDuration="1.317591973s" podCreationTimestamp="2025-01-13 21:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:25:12.31720794 +0000 UTC m=+1.480545403" watchObservedRunningTime="2025-01-13 21:25:12.317591973 +0000 UTC m=+1.480929416" Jan 13 21:25:12.339600 sudo[2628]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 21:25:12.339956 sudo[2628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 21:25:12.830213 sudo[2628]: pam_unix(sudo:session): session closed for 
user root Jan 13 21:25:12.918459 kubelet[2592]: E0113 21:25:12.918425 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:12.919571 kubelet[2592]: E0113 21:25:12.918800 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:13.119006 kubelet[2592]: E0113 21:25:13.118876 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:13.920142 kubelet[2592]: E0113 21:25:13.920095 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:14.176661 sudo[1652]: pam_unix(sudo:session): session closed for user root Jan 13 21:25:14.178966 sshd[1649]: pam_unix(sshd:session): session closed for user core Jan 13 21:25:14.183906 systemd[1]: sshd@8-10.0.0.97:22-10.0.0.1:57646.service: Deactivated successfully. Jan 13 21:25:14.185945 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 21:25:14.186159 systemd[1]: session-9.scope: Consumed 4.432s CPU time, 192.9M memory peak, 0B memory swap peak. Jan 13 21:25:14.186730 systemd-logind[1442]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:25:14.187949 systemd-logind[1442]: Removed session 9. Jan 13 21:25:17.456950 kubelet[2592]: E0113 21:25:17.456897 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:17.926816 kubelet[2592]: E0113 21:25:17.926779 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:18.863118 update_engine[1450]: I20250113 21:25:18.863034 1450 update_attempter.cc:509] Updating boot flags... 
Jan 13 21:25:18.889346 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2677) Jan 13 21:25:18.927364 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2678) Jan 13 21:25:18.930994 kubelet[2592]: E0113 21:25:18.930957 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:18.960458 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2678) Jan 13 21:25:22.829859 kubelet[2592]: E0113 21:25:22.829806 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:23.123465 kubelet[2592]: E0113 21:25:23.123336 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:25:25.891340 kubelet[2592]: I0113 21:25:25.891291 2592 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 21:25:25.891878 containerd[1456]: time="2025-01-13T21:25:25.891810565Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 21:25:25.892124 kubelet[2592]: I0113 21:25:25.892045 2592 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 21:25:26.432623 kubelet[2592]: I0113 21:25:26.432569 2592 topology_manager.go:215] "Topology Admit Handler" podUID="ae7e6b78-9afe-48ab-b16d-c57e69aeb846" podNamespace="kube-system" podName="kube-proxy-9qbk7" Jan 13 21:25:26.440606 systemd[1]: Created slice kubepods-besteffort-podae7e6b78_9afe_48ab_b16d_c57e69aeb846.slice - libcontainer container kubepods-besteffort-podae7e6b78_9afe_48ab_b16d_c57e69aeb846.slice. Jan 13 21:25:26.446686 kubelet[2592]: I0113 21:25:26.446636 2592 topology_manager.go:215] "Topology Admit Handler" podUID="70c4a7ea-eb2f-4033-9c80-57a860d03e2f" podNamespace="kube-system" podName="cilium-njdgv" Jan 13 21:25:26.457257 systemd[1]: Created slice kubepods-burstable-pod70c4a7ea_eb2f_4033_9c80_57a860d03e2f.slice - libcontainer container kubepods-burstable-pod70c4a7ea_eb2f_4033_9c80_57a860d03e2f.slice. 
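
The per-pod slices created here encode both the QoS class and the pod UID, with the UID's dashes escaped to underscores, since systemd reserves "-" as its slice hierarchy separator: kubepods-besteffort-podae7e6b78_9afe_48ab_b16d_c57e69aeb846.slice for kube-proxy-9qbk7 and kubepods-burstable-pod70c4a7ea_eb2f_4033_9c80_57a860d03e2f.slice for cilium-njdgv. A reconstruction of that naming, derived from the log lines rather than quoted from kubelet source:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName builds the unit name for a pod's slice: the QoS parent
    // prefix, the literal "pod" marker, and the UID with "-" escaped to "_".
    func podSliceName(qosSlicePrefix, podUID string) string {
        escaped := strings.ReplaceAll(podUID, "-", "_")
        return fmt.Sprintf("%s-pod%s.slice", qosSlicePrefix, escaped)
    }

    func main() {
        // UIDs taken from the Topology Admit Handler lines above.
        fmt.Println(podSliceName("kubepods-besteffort", "ae7e6b78-9afe-48ab-b16d-c57e69aeb846"))
        fmt.Println(podSliceName("kubepods-burstable", "70c4a7ea-eb2f-4033-9c80-57a860d03e2f"))
    }
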
Jan 13 21:25:26.484226 kubelet[2592]: I0113 21:25:26.484028 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-xtables-lock\") pod \"cilium-njdgv\" (UID: \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\") " pod="kube-system/cilium-njdgv"
Jan 13 21:25:26.484226 kubelet[2592]: I0113 21:25:26.484151 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-cilium-cgroup\") pod \"cilium-njdgv\" (UID: \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\") " pod="kube-system/cilium-njdgv"
Jan 13 21:25:26.484226 kubelet[2592]: I0113 21:25:26.484199 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-lib-modules\") pod \"cilium-njdgv\" (UID: \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\") " pod="kube-system/cilium-njdgv"
Jan 13 21:25:26.484226 kubelet[2592]: I0113 21:25:26.484217 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-clustermesh-secrets\") pod \"cilium-njdgv\" (UID: \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\") " pod="kube-system/cilium-njdgv"
Jan 13 21:25:26.484226 kubelet[2592]: I0113 21:25:26.484245 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae7e6b78-9afe-48ab-b16d-c57e69aeb846-lib-modules\") pod \"kube-proxy-9qbk7\" (UID: \"ae7e6b78-9afe-48ab-b16d-c57e69aeb846\") " pod="kube-system/kube-proxy-9qbk7"
Jan 13 21:25:26.485376 kubelet[2592]: I0113 21:25:26.484259 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-cni-path\") pod \"cilium-njdgv\" (UID: \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\") " pod="kube-system/cilium-njdgv"
Jan 13 21:25:26.485376 kubelet[2592]: I0113 21:25:26.484289 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-hubble-tls\") pod \"cilium-njdgv\" (UID: \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\") " pod="kube-system/cilium-njdgv"
Jan 13 21:25:26.485376 kubelet[2592]: I0113 21:25:26.484356 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-host-proc-sys-net\") pod \"cilium-njdgv\" (UID: \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\") " pod="kube-system/cilium-njdgv"
Jan 13 21:25:26.485376 kubelet[2592]: I0113 21:25:26.484374 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-host-proc-sys-kernel\") pod \"cilium-njdgv\" (UID: \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\") " pod="kube-system/cilium-njdgv"
Jan 13 21:25:26.485376 kubelet[2592]: I0113 21:25:26.484446 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-etc-cni-netd\") pod \"cilium-njdgv\" (UID: \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\") " pod="kube-system/cilium-njdgv"
Jan 13 21:25:26.485376 kubelet[2592]: I0113 21:25:26.484471 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ae7e6b78-9afe-48ab-b16d-c57e69aeb846-kube-proxy\") pod \"kube-proxy-9qbk7\" (UID: \"ae7e6b78-9afe-48ab-b16d-c57e69aeb846\") " pod="kube-system/kube-proxy-9qbk7"
Jan 13 21:25:26.485522 kubelet[2592]: I0113 21:25:26.484487 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae7e6b78-9afe-48ab-b16d-c57e69aeb846-xtables-lock\") pod \"kube-proxy-9qbk7\" (UID: \"ae7e6b78-9afe-48ab-b16d-c57e69aeb846\") " pod="kube-system/kube-proxy-9qbk7"
Jan 13 21:25:26.485522 kubelet[2592]: I0113 21:25:26.484509 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-cilium-run\") pod \"cilium-njdgv\" (UID: \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\") " pod="kube-system/cilium-njdgv"
Jan 13 21:25:26.485522 kubelet[2592]: I0113 21:25:26.484533 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-bpf-maps\") pod \"cilium-njdgv\" (UID: \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\") " pod="kube-system/cilium-njdgv"
Jan 13 21:25:26.485522 kubelet[2592]: I0113 21:25:26.484550 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-hostproc\") pod \"cilium-njdgv\" (UID: \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\") " pod="kube-system/cilium-njdgv"
Jan 13 21:25:26.485522 kubelet[2592]: I0113 21:25:26.484572 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk6r5\" (UniqueName: \"kubernetes.io/projected/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-kube-api-access-nk6r5\") pod \"cilium-njdgv\" (UID: \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\") " pod="kube-system/cilium-njdgv"
Jan 13 21:25:26.485522 kubelet[2592]: I0113 21:25:26.484592 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl6gg\" (UniqueName: \"kubernetes.io/projected/ae7e6b78-9afe-48ab-b16d-c57e69aeb846-kube-api-access-vl6gg\") pod \"kube-proxy-9qbk7\" (UID: \"ae7e6b78-9afe-48ab-b16d-c57e69aeb846\") " pod="kube-system/kube-proxy-9qbk7"
Jan 13 21:25:26.485674 kubelet[2592]: I0113 21:25:26.484605 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-cilium-config-path\") pod \"cilium-njdgv\" (UID: \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\") " pod="kube-system/cilium-njdgv"
Jan 13 21:25:26.749602 kubelet[2592]: E0113 21:25:26.749464 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:26.750416 containerd[1456]: time="2025-01-13T21:25:26.750203218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9qbk7,Uid:ae7e6b78-9afe-48ab-b16d-c57e69aeb846,Namespace:kube-system,Attempt:0,}"
Jan 13 21:25:26.760332 kubelet[2592]: E0113 21:25:26.760309 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:26.761240 containerd[1456]: time="2025-01-13T21:25:26.761195428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-njdgv,Uid:70c4a7ea-eb2f-4033-9c80-57a860d03e2f,Namespace:kube-system,Attempt:0,}"
Jan 13 21:25:26.777338 containerd[1456]: time="2025-01-13T21:25:26.776945383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:25:26.777338 containerd[1456]: time="2025-01-13T21:25:26.777012380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:25:26.777338 containerd[1456]: time="2025-01-13T21:25:26.777025615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:25:26.777338 containerd[1456]: time="2025-01-13T21:25:26.777094334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:25:26.793529 containerd[1456]: time="2025-01-13T21:25:26.793413165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:25:26.793676 containerd[1456]: time="2025-01-13T21:25:26.793543441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:25:26.793676 containerd[1456]: time="2025-01-13T21:25:26.793585060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:25:26.793823 containerd[1456]: time="2025-01-13T21:25:26.793702091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:25:26.803503 systemd[1]: Started cri-containerd-e1c4695d8b86954b832da7d6f2d0281db6b86cf366979865d4c369f60916f824.scope - libcontainer container e1c4695d8b86954b832da7d6f2d0281db6b86cf366979865d4c369f60916f824.
Jan 13 21:25:26.807567 systemd[1]: Started cri-containerd-2bcfce5dc124f1fd7015dc7a12bd6731a281de6fc0099fb4faf3a9065671051a.scope - libcontainer container 2bcfce5dc124f1fd7015dc7a12bd6731a281de6fc0099fb4faf3a9065671051a.
Jan 13 21:25:26.830215 containerd[1456]: time="2025-01-13T21:25:26.830162842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9qbk7,Uid:ae7e6b78-9afe-48ab-b16d-c57e69aeb846,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1c4695d8b86954b832da7d6f2d0281db6b86cf366979865d4c369f60916f824\""
Jan 13 21:25:26.831158 kubelet[2592]: E0113 21:25:26.831115 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:26.834709 containerd[1456]: time="2025-01-13T21:25:26.834652420Z" level=info msg="CreateContainer within sandbox \"e1c4695d8b86954b832da7d6f2d0281db6b86cf366979865d4c369f60916f824\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 13 21:25:26.835129 containerd[1456]: time="2025-01-13T21:25:26.835100837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-njdgv,Uid:70c4a7ea-eb2f-4033-9c80-57a860d03e2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bcfce5dc124f1fd7015dc7a12bd6731a281de6fc0099fb4faf3a9065671051a\""
Jan 13 21:25:26.835643 kubelet[2592]: E0113 21:25:26.835608 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:26.839168 containerd[1456]: time="2025-01-13T21:25:26.839135157Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 13 21:25:26.861355 containerd[1456]: time="2025-01-13T21:25:26.861304255Z" level=info msg="CreateContainer within sandbox \"e1c4695d8b86954b832da7d6f2d0281db6b86cf366979865d4c369f60916f824\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0d5e7107bfd0cfe034e807597b22e592b1f7d8d5a5e73cc36220639d1edbb3e3\""
Jan 13 21:25:26.862593 containerd[1456]: time="2025-01-13T21:25:26.862071184Z" level=info msg="StartContainer for \"0d5e7107bfd0cfe034e807597b22e592b1f7d8d5a5e73cc36220639d1edbb3e3\""
Jan 13 21:25:26.897434 systemd[1]: Started cri-containerd-0d5e7107bfd0cfe034e807597b22e592b1f7d8d5a5e73cc36220639d1edbb3e3.scope - libcontainer container 0d5e7107bfd0cfe034e807597b22e592b1f7d8d5a5e73cc36220639d1edbb3e3.
Jan 13 21:25:26.928553 containerd[1456]: time="2025-01-13T21:25:26.928503820Z" level=info msg="StartContainer for \"0d5e7107bfd0cfe034e807597b22e592b1f7d8d5a5e73cc36220639d1edbb3e3\" returns successfully"
Jan 13 21:25:26.946511 kubelet[2592]: E0113 21:25:26.946066 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:26.953071 kubelet[2592]: I0113 21:25:26.952788 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9qbk7" podStartSLOduration=0.952769869 podStartE2EDuration="952.769869ms" podCreationTimestamp="2025-01-13 21:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:25:26.952689267 +0000 UTC m=+16.116026720" watchObservedRunningTime="2025-01-13 21:25:26.952769869 +0000 UTC m=+16.116107322"
Jan 13 21:25:26.991337 kubelet[2592]: I0113 21:25:26.991266 2592 topology_manager.go:215] "Topology Admit Handler" podUID="ddd16cbf-ae90-4f59-8bc9-9d985298a540" podNamespace="kube-system" podName="cilium-operator-599987898-76ztr"
Jan 13 21:25:27.002108 systemd[1]: Created slice kubepods-besteffort-podddd16cbf_ae90_4f59_8bc9_9d985298a540.slice - libcontainer container kubepods-besteffort-podddd16cbf_ae90_4f59_8bc9_9d985298a540.slice.
Jan 13 21:25:27.089617 kubelet[2592]: I0113 21:25:27.089545 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt9pf\" (UniqueName: \"kubernetes.io/projected/ddd16cbf-ae90-4f59-8bc9-9d985298a540-kube-api-access-vt9pf\") pod \"cilium-operator-599987898-76ztr\" (UID: \"ddd16cbf-ae90-4f59-8bc9-9d985298a540\") " pod="kube-system/cilium-operator-599987898-76ztr"
Jan 13 21:25:27.089617 kubelet[2592]: I0113 21:25:27.089593 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ddd16cbf-ae90-4f59-8bc9-9d985298a540-cilium-config-path\") pod \"cilium-operator-599987898-76ztr\" (UID: \"ddd16cbf-ae90-4f59-8bc9-9d985298a540\") " pod="kube-system/cilium-operator-599987898-76ztr"
Jan 13 21:25:27.311703 kubelet[2592]: E0113 21:25:27.311656 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:27.312341 containerd[1456]: time="2025-01-13T21:25:27.312161085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-76ztr,Uid:ddd16cbf-ae90-4f59-8bc9-9d985298a540,Namespace:kube-system,Attempt:0,}"
Jan 13 21:25:27.336841 containerd[1456]: time="2025-01-13T21:25:27.336749643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:25:27.336980 containerd[1456]: time="2025-01-13T21:25:27.336832178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:25:27.336980 containerd[1456]: time="2025-01-13T21:25:27.336852487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:25:27.337024 containerd[1456]: time="2025-01-13T21:25:27.336962685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:25:27.357414 systemd[1]: Started cri-containerd-b6fb409d4b59cead6fefb4864cdef2b9f6daea01db1ed067cd2f0bddc4bed6cc.scope - libcontainer container b6fb409d4b59cead6fefb4864cdef2b9f6daea01db1ed067cd2f0bddc4bed6cc.
Jan 13 21:25:27.393085 containerd[1456]: time="2025-01-13T21:25:27.393033215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-76ztr,Uid:ddd16cbf-ae90-4f59-8bc9-9d985298a540,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6fb409d4b59cead6fefb4864cdef2b9f6daea01db1ed067cd2f0bddc4bed6cc\""
Jan 13 21:25:27.393747 kubelet[2592]: E0113 21:25:27.393708 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:34.523019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount265324801.mount: Deactivated successfully.
Jan 13 21:25:36.877524 containerd[1456]: time="2025-01-13T21:25:36.877469631Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:25:36.878392 containerd[1456]: time="2025-01-13T21:25:36.878359236Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735335"
Jan 13 21:25:36.879549 containerd[1456]: time="2025-01-13T21:25:36.879523578Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:25:36.881299 containerd[1456]: time="2025-01-13T21:25:36.881222595Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.041862724s"
Jan 13 21:25:36.881299 containerd[1456]: time="2025-01-13T21:25:36.881259405Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 13 21:25:36.882695 containerd[1456]: time="2025-01-13T21:25:36.882668127Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 13 21:25:36.883554 containerd[1456]: time="2025-01-13T21:25:36.883503438Z" level=info msg="CreateContainer within sandbox \"2bcfce5dc124f1fd7015dc7a12bd6731a281de6fc0099fb4faf3a9065671051a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 21:25:36.898792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4055198950.mount: Deactivated successfully.
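The ten-second gap before the ImageCreate burst is the cilium image pull, and the log is self-consistent about it: PullImage is requested at 21:25:26.839 and the Pulled record reports "in 10.041862724s". A quick check, with the two containerd timestamps copied from the records above and truncated to microseconds (Python's datetime resolution):

from datetime import datetime

# PullImage request and the matching "Pulled image ..." record, from the log.
fmt = "%Y-%m-%dT%H:%M:%S.%f%z"
start = datetime.strptime("2025-01-13T21:25:26.839135+00:00", fmt)
done = datetime.strptime("2025-01-13T21:25:36.881222+00:00", fmt)
print(f"pull took ~{(done - start).total_seconds():.3f}s")  # ~10.042s, matching the log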
Jan 13 21:25:36.899740 containerd[1456]: time="2025-01-13T21:25:36.899703813Z" level=info msg="CreateContainer within sandbox \"2bcfce5dc124f1fd7015dc7a12bd6731a281de6fc0099fb4faf3a9065671051a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fd13fce245661f88958be8e5905a2fd902fd0b228f6208fd8e0d483406e88a32\""
Jan 13 21:25:36.900299 containerd[1456]: time="2025-01-13T21:25:36.900248428Z" level=info msg="StartContainer for \"fd13fce245661f88958be8e5905a2fd902fd0b228f6208fd8e0d483406e88a32\""
Jan 13 21:25:36.935422 systemd[1]: Started cri-containerd-fd13fce245661f88958be8e5905a2fd902fd0b228f6208fd8e0d483406e88a32.scope - libcontainer container fd13fce245661f88958be8e5905a2fd902fd0b228f6208fd8e0d483406e88a32.
Jan 13 21:25:36.961563 containerd[1456]: time="2025-01-13T21:25:36.961518918Z" level=info msg="StartContainer for \"fd13fce245661f88958be8e5905a2fd902fd0b228f6208fd8e0d483406e88a32\" returns successfully"
Jan 13 21:25:36.971685 systemd[1]: cri-containerd-fd13fce245661f88958be8e5905a2fd902fd0b228f6208fd8e0d483406e88a32.scope: Deactivated successfully.
Jan 13 21:25:37.050972 kubelet[2592]: E0113 21:25:37.050939 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:37.480873 containerd[1456]: time="2025-01-13T21:25:37.478089678Z" level=info msg="shim disconnected" id=fd13fce245661f88958be8e5905a2fd902fd0b228f6208fd8e0d483406e88a32 namespace=k8s.io
Jan 13 21:25:37.480873 containerd[1456]: time="2025-01-13T21:25:37.480869720Z" level=warning msg="cleaning up after shim disconnected" id=fd13fce245661f88958be8e5905a2fd902fd0b228f6208fd8e0d483406e88a32 namespace=k8s.io
Jan 13 21:25:37.480873 containerd[1456]: time="2025-01-13T21:25:37.480882253Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:25:37.896102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd13fce245661f88958be8e5905a2fd902fd0b228f6208fd8e0d483406e88a32-rootfs.mount: Deactivated successfully.
Jan 13 21:25:38.053570 kubelet[2592]: E0113 21:25:38.053501 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:38.055601 containerd[1456]: time="2025-01-13T21:25:38.055545568Z" level=info msg="CreateContainer within sandbox \"2bcfce5dc124f1fd7015dc7a12bd6731a281de6fc0099fb4faf3a9065671051a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 21:25:38.257577 containerd[1456]: time="2025-01-13T21:25:38.257451377Z" level=info msg="CreateContainer within sandbox \"2bcfce5dc124f1fd7015dc7a12bd6731a281de6fc0099fb4faf3a9065671051a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ddaec9fcdfdb985017994f3e57fbceaca9657d6da85ae94e4fb88e8bf59028a4\""
Jan 13 21:25:38.258340 containerd[1456]: time="2025-01-13T21:25:38.258297961Z" level=info msg="StartContainer for \"ddaec9fcdfdb985017994f3e57fbceaca9657d6da85ae94e4fb88e8bf59028a4\""
Jan 13 21:25:38.292447 systemd[1]: Started cri-containerd-ddaec9fcdfdb985017994f3e57fbceaca9657d6da85ae94e4fb88e8bf59028a4.scope - libcontainer container ddaec9fcdfdb985017994f3e57fbceaca9657d6da85ae94e4fb88e8bf59028a4.
Jan 13 21:25:38.318843 containerd[1456]: time="2025-01-13T21:25:38.318790606Z" level=info msg="StartContainer for \"ddaec9fcdfdb985017994f3e57fbceaca9657d6da85ae94e4fb88e8bf59028a4\" returns successfully"
Jan 13 21:25:38.329417 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 21:25:38.329664 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:25:38.329734 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:25:38.336681 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:25:38.336998 systemd[1]: cri-containerd-ddaec9fcdfdb985017994f3e57fbceaca9657d6da85ae94e4fb88e8bf59028a4.scope: Deactivated successfully.
Jan 13 21:25:38.358132 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:25:38.369996 containerd[1456]: time="2025-01-13T21:25:38.369919212Z" level=info msg="shim disconnected" id=ddaec9fcdfdb985017994f3e57fbceaca9657d6da85ae94e4fb88e8bf59028a4 namespace=k8s.io
Jan 13 21:25:38.370132 containerd[1456]: time="2025-01-13T21:25:38.369996739Z" level=warning msg="cleaning up after shim disconnected" id=ddaec9fcdfdb985017994f3e57fbceaca9657d6da85ae94e4fb88e8bf59028a4 namespace=k8s.io
Jan 13 21:25:38.370132 containerd[1456]: time="2025-01-13T21:25:38.370011136Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:25:38.876620 systemd[1]: Started sshd@9-10.0.0.97:22-10.0.0.1:39602.service - OpenSSH per-connection server daemon (10.0.0.1:39602).
Jan 13 21:25:38.895730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ddaec9fcdfdb985017994f3e57fbceaca9657d6da85ae94e4fb88e8bf59028a4-rootfs.mount: Deactivated successfully.
Jan 13 21:25:38.915254 sshd[3128]: Accepted publickey for core from 10.0.0.1 port 39602 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:25:38.917191 sshd[3128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:25:38.922061 systemd-logind[1442]: New session 10 of user core.
Jan 13 21:25:38.929421 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 13 21:25:39.067725 kubelet[2592]: E0113 21:25:39.067068 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:39.071315 containerd[1456]: time="2025-01-13T21:25:39.070125594Z" level=info msg="CreateContainer within sandbox \"2bcfce5dc124f1fd7015dc7a12bd6731a281de6fc0099fb4faf3a9065671051a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 21:25:39.120521 sshd[3128]: pam_unix(sshd:session): session closed for user core
Jan 13 21:25:39.124824 systemd[1]: sshd@9-10.0.0.97:22-10.0.0.1:39602.service: Deactivated successfully.
Jan 13 21:25:39.127058 systemd[1]: session-10.scope: Deactivated successfully.
Jan 13 21:25:39.127793 systemd-logind[1442]: Session 10 logged out. Waiting for processes to exit.
Jan 13 21:25:39.128725 systemd-logind[1442]: Removed session 10.
Jan 13 21:25:39.149377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4138572959.mount: Deactivated successfully.
Jan 13 21:25:39.155892 containerd[1456]: time="2025-01-13T21:25:39.155843758Z" level=info msg="CreateContainer within sandbox \"2bcfce5dc124f1fd7015dc7a12bd6731a281de6fc0099fb4faf3a9065671051a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b4a123534772af74e25bece5a7cfafe1d25dc98064b1838e4f1517d5b011c2b6\""
Jan 13 21:25:39.156878 containerd[1456]: time="2025-01-13T21:25:39.156515070Z" level=info msg="StartContainer for \"b4a123534772af74e25bece5a7cfafe1d25dc98064b1838e4f1517d5b011c2b6\""
Jan 13 21:25:39.191489 systemd[1]: Started cri-containerd-b4a123534772af74e25bece5a7cfafe1d25dc98064b1838e4f1517d5b011c2b6.scope - libcontainer container b4a123534772af74e25bece5a7cfafe1d25dc98064b1838e4f1517d5b011c2b6.
Jan 13 21:25:39.223788 systemd[1]: cri-containerd-b4a123534772af74e25bece5a7cfafe1d25dc98064b1838e4f1517d5b011c2b6.scope: Deactivated successfully.
Jan 13 21:25:39.224333 containerd[1456]: time="2025-01-13T21:25:39.224196135Z" level=info msg="StartContainer for \"b4a123534772af74e25bece5a7cfafe1d25dc98064b1838e4f1517d5b011c2b6\" returns successfully"
Jan 13 21:25:39.250691 containerd[1456]: time="2025-01-13T21:25:39.250618487Z" level=info msg="shim disconnected" id=b4a123534772af74e25bece5a7cfafe1d25dc98064b1838e4f1517d5b011c2b6 namespace=k8s.io
Jan 13 21:25:39.250691 containerd[1456]: time="2025-01-13T21:25:39.250686215Z" level=warning msg="cleaning up after shim disconnected" id=b4a123534772af74e25bece5a7cfafe1d25dc98064b1838e4f1517d5b011c2b6 namespace=k8s.io
Jan 13 21:25:39.250691 containerd[1456]: time="2025-01-13T21:25:39.250694711Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:25:39.895453 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4a123534772af74e25bece5a7cfafe1d25dc98064b1838e4f1517d5b011c2b6-rootfs.mount: Deactivated successfully.
Jan 13 21:25:40.070172 kubelet[2592]: E0113 21:25:40.070124 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:40.072564 containerd[1456]: time="2025-01-13T21:25:40.072522375Z" level=info msg="CreateContainer within sandbox \"2bcfce5dc124f1fd7015dc7a12bd6731a281de6fc0099fb4faf3a9065671051a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 21:25:40.087637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1378295345.mount: Deactivated successfully.
Jan 13 21:25:40.098871 containerd[1456]: time="2025-01-13T21:25:40.098818160Z" level=info msg="CreateContainer within sandbox \"2bcfce5dc124f1fd7015dc7a12bd6731a281de6fc0099fb4faf3a9065671051a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0ab753ebec55fbd41864a91339588ca8fa796415ff8046a07a984d247b1f9c4d\""
Jan 13 21:25:40.099532 containerd[1456]: time="2025-01-13T21:25:40.099443456Z" level=info msg="StartContainer for \"0ab753ebec55fbd41864a91339588ca8fa796415ff8046a07a984d247b1f9c4d\""
Jan 13 21:25:40.129489 systemd[1]: Started cri-containerd-0ab753ebec55fbd41864a91339588ca8fa796415ff8046a07a984d247b1f9c4d.scope - libcontainer container 0ab753ebec55fbd41864a91339588ca8fa796415ff8046a07a984d247b1f9c4d.
Jan 13 21:25:40.156323 systemd[1]: cri-containerd-0ab753ebec55fbd41864a91339588ca8fa796415ff8046a07a984d247b1f9c4d.scope: Deactivated successfully.
Jan 13 21:25:40.158532 containerd[1456]: time="2025-01-13T21:25:40.158380802Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70c4a7ea_eb2f_4033_9c80_57a860d03e2f.slice/cri-containerd-0ab753ebec55fbd41864a91339588ca8fa796415ff8046a07a984d247b1f9c4d.scope/memory.events\": no such file or directory"
Jan 13 21:25:40.162239 containerd[1456]: time="2025-01-13T21:25:40.162161502Z" level=info msg="StartContainer for \"0ab753ebec55fbd41864a91339588ca8fa796415ff8046a07a984d247b1f9c4d\" returns successfully"
Jan 13 21:25:40.216042 containerd[1456]: time="2025-01-13T21:25:40.215952859Z" level=info msg="shim disconnected" id=0ab753ebec55fbd41864a91339588ca8fa796415ff8046a07a984d247b1f9c4d namespace=k8s.io
Jan 13 21:25:40.216042 containerd[1456]: time="2025-01-13T21:25:40.216037748Z" level=warning msg="cleaning up after shim disconnected" id=0ab753ebec55fbd41864a91339588ca8fa796415ff8046a07a984d247b1f9c4d namespace=k8s.io
Jan 13 21:25:40.216042 containerd[1456]: time="2025-01-13T21:25:40.216049180Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:25:40.418760 containerd[1456]: time="2025-01-13T21:25:40.418613349Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:25:40.419420 containerd[1456]: time="2025-01-13T21:25:40.419351748Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18906601"
Jan 13 21:25:40.420640 containerd[1456]: time="2025-01-13T21:25:40.420611035Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:25:40.422329 containerd[1456]: time="2025-01-13T21:25:40.422288821Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.539544882s"
Jan 13 21:25:40.422399 containerd[1456]: time="2025-01-13T21:25:40.422335920Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 13 21:25:40.424729 containerd[1456]: time="2025-01-13T21:25:40.424673696Z" level=info msg="CreateContainer within sandbox \"b6fb409d4b59cead6fefb4864cdef2b9f6daea01db1ed067cd2f0bddc4bed6cc\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 13 21:25:40.439422 containerd[1456]: time="2025-01-13T21:25:40.439371198Z" level=info msg="CreateContainer within sandbox \"b6fb409d4b59cead6fefb4864cdef2b9f6daea01db1ed067cd2f0bddc4bed6cc\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"06a69d079013e21911132db013cdcb77fcd660ad2d6fe926d59f33d715812c82\""
Jan 13 21:25:40.439942 containerd[1456]: time="2025-01-13T21:25:40.439909040Z" level=info msg="StartContainer for \"06a69d079013e21911132db013cdcb77fcd660ad2d6fe926d59f33d715812c82\""
Jan 13 21:25:40.467677 systemd[1]: Started cri-containerd-06a69d079013e21911132db013cdcb77fcd660ad2d6fe926d59f33d715812c82.scope - libcontainer container 06a69d079013e21911132db013cdcb77fcd660ad2d6fe926d59f33d715812c82.
Jan 13 21:25:40.648802 containerd[1456]: time="2025-01-13T21:25:40.648629938Z" level=info msg="StartContainer for \"06a69d079013e21911132db013cdcb77fcd660ad2d6fe926d59f33d715812c82\" returns successfully"
Jan 13 21:25:41.077439 kubelet[2592]: E0113 21:25:41.077393 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:41.079248 kubelet[2592]: E0113 21:25:41.079220 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:41.079991 containerd[1456]: time="2025-01-13T21:25:41.079939127Z" level=info msg="CreateContainer within sandbox \"2bcfce5dc124f1fd7015dc7a12bd6731a281de6fc0099fb4faf3a9065671051a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 21:25:41.129016 containerd[1456]: time="2025-01-13T21:25:41.128955899Z" level=info msg="CreateContainer within sandbox \"2bcfce5dc124f1fd7015dc7a12bd6731a281de6fc0099fb4faf3a9065671051a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c9ae9162b3ce5689ceff1ac3f9a43a414c90a9dfe5ddaba444d92631edc612b5\""
Jan 13 21:25:41.130348 containerd[1456]: time="2025-01-13T21:25:41.130094440Z" level=info msg="StartContainer for \"c9ae9162b3ce5689ceff1ac3f9a43a414c90a9dfe5ddaba444d92631edc612b5\""
Jan 13 21:25:41.146365 kubelet[2592]: I0113 21:25:41.146202 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-76ztr" podStartSLOduration=2.117391118 podStartE2EDuration="15.146177773s" podCreationTimestamp="2025-01-13 21:25:26 +0000 UTC" firstStartedPulling="2025-01-13 21:25:27.394457202 +0000 UTC m=+16.557794656" lastFinishedPulling="2025-01-13 21:25:40.423243848 +0000 UTC m=+29.586581311" observedRunningTime="2025-01-13 21:25:41.146058668 +0000 UTC m=+30.309396141" watchObservedRunningTime="2025-01-13 21:25:41.146177773 +0000 UTC m=+30.309515226"
Jan 13 21:25:41.180451 systemd[1]: Started cri-containerd-c9ae9162b3ce5689ceff1ac3f9a43a414c90a9dfe5ddaba444d92631edc612b5.scope - libcontainer container c9ae9162b3ce5689ceff1ac3f9a43a414c90a9dfe5ddaba444d92631edc612b5.
Jan 13 21:25:41.224017 containerd[1456]: time="2025-01-13T21:25:41.223957629Z" level=info msg="StartContainer for \"c9ae9162b3ce5689ceff1ac3f9a43a414c90a9dfe5ddaba444d92631edc612b5\" returns successfully"
Jan 13 21:25:41.452318 kubelet[2592]: I0113 21:25:41.452130 2592 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 13 21:25:41.697620 kubelet[2592]: I0113 21:25:41.697138 2592 topology_manager.go:215] "Topology Admit Handler" podUID="45b98a30-baa9-49f2-bd1f-510e0e864c50" podNamespace="kube-system" podName="coredns-7db6d8ff4d-n894m"
Jan 13 21:25:41.699587 kubelet[2592]: I0113 21:25:41.699570 2592 topology_manager.go:215] "Topology Admit Handler" podUID="33d9094e-b8ab-4ab8-be97-bdbf3930c72a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5j7jl"
Jan 13 21:25:41.707130 systemd[1]: Created slice kubepods-burstable-pod45b98a30_baa9_49f2_bd1f_510e0e864c50.slice - libcontainer container kubepods-burstable-pod45b98a30_baa9_49f2_bd1f_510e0e864c50.slice.
Jan 13 21:25:41.713455 systemd[1]: Created slice kubepods-burstable-pod33d9094e_b8ab_4ab8_be97_bdbf3930c72a.slice - libcontainer container kubepods-burstable-pod33d9094e_b8ab_4ab8_be97_bdbf3930c72a.slice.
Jan 13 21:25:41.782831 kubelet[2592]: I0113 21:25:41.782640 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/45b98a30-baa9-49f2-bd1f-510e0e864c50-config-volume\") pod \"coredns-7db6d8ff4d-n894m\" (UID: \"45b98a30-baa9-49f2-bd1f-510e0e864c50\") " pod="kube-system/coredns-7db6d8ff4d-n894m"
Jan 13 21:25:41.782831 kubelet[2592]: I0113 21:25:41.782679 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33d9094e-b8ab-4ab8-be97-bdbf3930c72a-config-volume\") pod \"coredns-7db6d8ff4d-5j7jl\" (UID: \"33d9094e-b8ab-4ab8-be97-bdbf3930c72a\") " pod="kube-system/coredns-7db6d8ff4d-5j7jl"
Jan 13 21:25:41.782831 kubelet[2592]: I0113 21:25:41.782697 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd9bf\" (UniqueName: \"kubernetes.io/projected/33d9094e-b8ab-4ab8-be97-bdbf3930c72a-kube-api-access-pd9bf\") pod \"coredns-7db6d8ff4d-5j7jl\" (UID: \"33d9094e-b8ab-4ab8-be97-bdbf3930c72a\") " pod="kube-system/coredns-7db6d8ff4d-5j7jl"
Jan 13 21:25:41.782831 kubelet[2592]: I0113 21:25:41.782777 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n98zf\" (UniqueName: \"kubernetes.io/projected/45b98a30-baa9-49f2-bd1f-510e0e864c50-kube-api-access-n98zf\") pod \"coredns-7db6d8ff4d-n894m\" (UID: \"45b98a30-baa9-49f2-bd1f-510e0e864c50\") " pod="kube-system/coredns-7db6d8ff4d-n894m"
Jan 13 21:25:42.010934 kubelet[2592]: E0113 21:25:42.010804 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:42.011805 containerd[1456]: time="2025-01-13T21:25:42.011763131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n894m,Uid:45b98a30-baa9-49f2-bd1f-510e0e864c50,Namespace:kube-system,Attempt:0,}"
Jan 13 21:25:42.016377 kubelet[2592]: E0113 21:25:42.016332 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:42.016852 containerd[1456]: time="2025-01-13T21:25:42.016826901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5j7jl,Uid:33d9094e-b8ab-4ab8-be97-bdbf3930c72a,Namespace:kube-system,Attempt:0,}"
Jan 13 21:25:42.084015 kubelet[2592]: E0113 21:25:42.083503 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:42.084488 kubelet[2592]: E0113 21:25:42.084436 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:43.085094 kubelet[2592]: E0113 21:25:43.085058 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:44.086964 kubelet[2592]: E0113 21:25:44.086915 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:44.133385 systemd[1]: Started sshd@10-10.0.0.97:22-10.0.0.1:39616.service - OpenSSH per-connection server daemon (10.0.0.1:39616).
Jan 13 21:25:44.173835 sshd[3455]: Accepted publickey for core from 10.0.0.1 port 39616 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:25:44.175784 sshd[3455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:25:44.180711 systemd-logind[1442]: New session 11 of user core.
Jan 13 21:25:44.191415 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 13 21:25:44.324320 sshd[3455]: pam_unix(sshd:session): session closed for user core
Jan 13 21:25:44.327947 systemd[1]: sshd@10-10.0.0.97:22-10.0.0.1:39616.service: Deactivated successfully.
Jan 13 21:25:44.329981 systemd-networkd[1391]: cilium_host: Link UP
Jan 13 21:25:44.330175 systemd-networkd[1391]: cilium_net: Link UP
Jan 13 21:25:44.330362 systemd[1]: session-11.scope: Deactivated successfully.
Jan 13 21:25:44.332269 systemd-networkd[1391]: cilium_net: Gained carrier
Jan 13 21:25:44.332529 systemd-networkd[1391]: cilium_host: Gained carrier
Jan 13 21:25:44.333978 systemd-logind[1442]: Session 11 logged out. Waiting for processes to exit.
Jan 13 21:25:44.335035 systemd-logind[1442]: Removed session 11.
Jan 13 21:25:44.443630 systemd-networkd[1391]: cilium_vxlan: Link UP
Jan 13 21:25:44.444364 systemd-networkd[1391]: cilium_vxlan: Gained carrier
Jan 13 21:25:44.681355 kernel: NET: Registered PF_ALG protocol family
Jan 13 21:25:44.746476 systemd-networkd[1391]: cilium_net: Gained IPv6LL
Jan 13 21:25:45.002413 systemd-networkd[1391]: cilium_host: Gained IPv6LL
Jan 13 21:25:45.411057 systemd-networkd[1391]: lxc_health: Link UP
Jan 13 21:25:45.420442 systemd-networkd[1391]: lxc_health: Gained carrier
Jan 13 21:25:45.589218 systemd-networkd[1391]: lxc34abbc7910c8: Link UP
Jan 13 21:25:45.602297 kernel: eth0: renamed from tmp75b89
Jan 13 21:25:45.606026 systemd-networkd[1391]: lxca91ad3cf15a0: Link UP
Jan 13 21:25:45.613309 systemd-networkd[1391]: lxc34abbc7910c8: Gained carrier
Jan 13 21:25:45.615306 kernel: eth0: renamed from tmp1b5fe
Jan 13 21:25:45.621368 systemd-networkd[1391]: lxca91ad3cf15a0: Gained carrier
Jan 13 21:25:46.219502 systemd-networkd[1391]: cilium_vxlan: Gained IPv6LL
Jan 13 21:25:46.764590 kubelet[2592]: E0113 21:25:46.764391 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:46.781707 kubelet[2592]: I0113 21:25:46.781615 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-njdgv" podStartSLOduration=10.736677569 podStartE2EDuration="20.781594726s" podCreationTimestamp="2025-01-13 21:25:26 +0000 UTC" firstStartedPulling="2025-01-13 21:25:26.837175125 +0000 UTC m=+16.000512578" lastFinishedPulling="2025-01-13 21:25:36.882092282 +0000 UTC m=+26.045429735" observedRunningTime="2025-01-13 21:25:42.100706071 +0000 UTC m=+31.264043534" watchObservedRunningTime="2025-01-13 21:25:46.781594726 +0000 UTC m=+35.944932179"
Jan 13 21:25:47.050485 systemd-networkd[1391]: lxca91ad3cf15a0: Gained IPv6LL
Jan 13 21:25:47.093522 kubelet[2592]: E0113 21:25:47.093471 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:47.242443 systemd-networkd[1391]: lxc34abbc7910c8: Gained IPv6LL
Jan 13 21:25:47.308651 systemd-networkd[1391]: lxc_health: Gained IPv6LL
Jan 13 21:25:48.095661 kubelet[2592]: E0113 21:25:48.095620 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:49.341654 systemd[1]: Started sshd@11-10.0.0.97:22-10.0.0.1:52064.service - OpenSSH per-connection server daemon (10.0.0.1:52064).
Jan 13 21:25:49.378243 sshd[3849]: Accepted publickey for core from 10.0.0.1 port 52064 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:25:49.380817 sshd[3849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:25:49.386074 systemd-logind[1442]: New session 12 of user core.
Jan 13 21:25:49.395424 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 13 21:25:49.537126 sshd[3849]: pam_unix(sshd:session): session closed for user core
Jan 13 21:25:49.540699 systemd-logind[1442]: Session 12 logged out. Waiting for processes to exit.
Jan 13 21:25:49.541685 systemd[1]: sshd@11-10.0.0.97:22-10.0.0.1:52064.service: Deactivated successfully.
Jan 13 21:25:49.545403 systemd[1]: session-12.scope: Deactivated successfully.
Jan 13 21:25:49.548253 systemd-logind[1442]: Removed session 12.
Jan 13 21:25:49.662197 containerd[1456]: time="2025-01-13T21:25:49.662038160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:25:49.662197 containerd[1456]: time="2025-01-13T21:25:49.662099726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:25:49.662197 containerd[1456]: time="2025-01-13T21:25:49.662111137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:25:49.662684 containerd[1456]: time="2025-01-13T21:25:49.662190106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:25:49.694577 systemd[1]: Started cri-containerd-1b5fe55074afe3557e698c38cbf3f8803408a6b24ad72c4d3909e84ea1de4fda.scope - libcontainer container 1b5fe55074afe3557e698c38cbf3f8803408a6b24ad72c4d3909e84ea1de4fda.
Jan 13 21:25:49.697508 containerd[1456]: time="2025-01-13T21:25:49.697383973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:25:49.697508 containerd[1456]: time="2025-01-13T21:25:49.697434989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:25:49.697508 containerd[1456]: time="2025-01-13T21:25:49.697446380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:25:49.697634 containerd[1456]: time="2025-01-13T21:25:49.697518485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:25:49.708785 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 21:25:49.723069 systemd[1]: Started cri-containerd-75b897d061ba3587cee31e93470cb4acd7ad108db82e38118f440b72b1e134ba.scope - libcontainer container 75b897d061ba3587cee31e93470cb4acd7ad108db82e38118f440b72b1e134ba.
Jan 13 21:25:49.734068 containerd[1456]: time="2025-01-13T21:25:49.733983690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5j7jl,Uid:33d9094e-b8ab-4ab8-be97-bdbf3930c72a,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b5fe55074afe3557e698c38cbf3f8803408a6b24ad72c4d3909e84ea1de4fda\""
Jan 13 21:25:49.734759 kubelet[2592]: E0113 21:25:49.734738 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:49.736353 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 21:25:49.737616 containerd[1456]: time="2025-01-13T21:25:49.737470459Z" level=info msg="CreateContainer within sandbox \"1b5fe55074afe3557e698c38cbf3f8803408a6b24ad72c4d3909e84ea1de4fda\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 21:25:49.761194 containerd[1456]: time="2025-01-13T21:25:49.761145410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n894m,Uid:45b98a30-baa9-49f2-bd1f-510e0e864c50,Namespace:kube-system,Attempt:0,} returns sandbox id \"75b897d061ba3587cee31e93470cb4acd7ad108db82e38118f440b72b1e134ba\""
Jan 13 21:25:49.761921 kubelet[2592]: E0113 21:25:49.761884 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:49.763634 containerd[1456]: time="2025-01-13T21:25:49.763594650Z" level=info msg="CreateContainer within sandbox \"75b897d061ba3587cee31e93470cb4acd7ad108db82e38118f440b72b1e134ba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 21:25:49.815294 containerd[1456]: time="2025-01-13T21:25:49.815213158Z" level=info msg="CreateContainer within sandbox \"1b5fe55074afe3557e698c38cbf3f8803408a6b24ad72c4d3909e84ea1de4fda\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d35dafd7cb4f55769d3216024734e84f0a1c5bc1bb485eb26662fb5df7de6dd0\""
Jan 13 21:25:49.815893 containerd[1456]: time="2025-01-13T21:25:49.815851277Z" level=info msg="StartContainer for \"d35dafd7cb4f55769d3216024734e84f0a1c5bc1bb485eb26662fb5df7de6dd0\""
Jan 13 21:25:49.827856 containerd[1456]: time="2025-01-13T21:25:49.827812004Z" level=info msg="CreateContainer within sandbox \"75b897d061ba3587cee31e93470cb4acd7ad108db82e38118f440b72b1e134ba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c8a28cdab5c9db598e964db0aebd2bd18ff009a64ba3671a9a8abb103812f91c\""
Jan 13 21:25:49.828639 containerd[1456]: time="2025-01-13T21:25:49.828609783Z" level=info msg="StartContainer for \"c8a28cdab5c9db598e964db0aebd2bd18ff009a64ba3671a9a8abb103812f91c\""
Jan 13 21:25:49.845398 systemd[1]: Started cri-containerd-d35dafd7cb4f55769d3216024734e84f0a1c5bc1bb485eb26662fb5df7de6dd0.scope - libcontainer container d35dafd7cb4f55769d3216024734e84f0a1c5bc1bb485eb26662fb5df7de6dd0.
Jan 13 21:25:49.863438 systemd[1]: Started cri-containerd-c8a28cdab5c9db598e964db0aebd2bd18ff009a64ba3671a9a8abb103812f91c.scope - libcontainer container c8a28cdab5c9db598e964db0aebd2bd18ff009a64ba3671a9a8abb103812f91c.
Jan 13 21:25:49.884360 containerd[1456]: time="2025-01-13T21:25:49.884310228Z" level=info msg="StartContainer for \"d35dafd7cb4f55769d3216024734e84f0a1c5bc1bb485eb26662fb5df7de6dd0\" returns successfully"
Jan 13 21:25:49.898697 containerd[1456]: time="2025-01-13T21:25:49.898653390Z" level=info msg="StartContainer for \"c8a28cdab5c9db598e964db0aebd2bd18ff009a64ba3671a9a8abb103812f91c\" returns successfully"
Jan 13 21:25:50.102705 kubelet[2592]: E0113 21:25:50.102599 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:50.105681 kubelet[2592]: E0113 21:25:50.105658 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:50.202141 kubelet[2592]: I0113 21:25:50.201799 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5j7jl" podStartSLOduration=24.201776908 podStartE2EDuration="24.201776908s" podCreationTimestamp="2025-01-13 21:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:25:50.132268379 +0000 UTC m=+39.295605852" watchObservedRunningTime="2025-01-13 21:25:50.201776908 +0000 UTC m=+39.365114361"
Jan 13 21:25:50.245581 kubelet[2592]: I0113 21:25:50.245506 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-n894m" podStartSLOduration=24.245484278 podStartE2EDuration="24.245484278s" podCreationTimestamp="2025-01-13 21:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:25:50.245094035 +0000 UTC m=+39.408431518" watchObservedRunningTime="2025-01-13 21:25:50.245484278 +0000 UTC m=+39.408821741"
Jan 13 21:25:51.108049 kubelet[2592]: E0113 21:25:51.107990 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:51.108049 kubelet[2592]: E0113 21:25:51.107987 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:52.109245 kubelet[2592]: E0113 21:25:52.109211 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:52.109678 kubelet[2592]: E0113 21:25:52.109414 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:25:54.548155 systemd[1]: Started sshd@12-10.0.0.97:22-10.0.0.1:52072.service - OpenSSH per-connection server daemon (10.0.0.1:52072).
Jan 13 21:25:54.583720 sshd[4037]: Accepted publickey for core from 10.0.0.1 port 52072 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:25:54.585408 sshd[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:25:54.589441 systemd-logind[1442]: New session 13 of user core.
Jan 13 21:25:54.601435 systemd[1]: Started session-13.scope - Session 13 of User core.
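The pod_startup_latency_tracker records above are internally consistent: for coredns-7db6d8ff4d-5j7jl the reported podStartSLOduration of 24.201776908s is exactly watchObservedRunningTime minus podCreationTimestamp (no image pull happened, so the pull window is the zero time). A quick arithmetic check, with the two timestamps copied from the record and truncated to microseconds, which is all Python's datetime carries:

from datetime import datetime, timezone

created = datetime(2025, 1, 13, 21, 25, 26, tzinfo=timezone.utc)             # podCreationTimestamp
observed = datetime(2025, 1, 13, 21, 25, 50, 201776, tzinfo=timezone.utc)   # watchObservedRunningTime
print((observed - created).total_seconds())  # 24.201776, matching podStartSLOduration=24.201776908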
Jan 13 21:25:54.707222 sshd[4037]: pam_unix(sshd:session): session closed for user core
Jan 13 21:25:54.720095 systemd[1]: sshd@12-10.0.0.97:22-10.0.0.1:52072.service: Deactivated successfully.
Jan 13 21:25:54.722547 systemd[1]: session-13.scope: Deactivated successfully.
Jan 13 21:25:54.724187 systemd-logind[1442]: Session 13 logged out. Waiting for processes to exit.
Jan 13 21:25:54.729605 systemd[1]: Started sshd@13-10.0.0.97:22-10.0.0.1:52088.service - OpenSSH per-connection server daemon (10.0.0.1:52088).
Jan 13 21:25:54.730715 systemd-logind[1442]: Removed session 13.
Jan 13 21:25:54.757847 sshd[4053]: Accepted publickey for core from 10.0.0.1 port 52088 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:25:54.759431 sshd[4053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:25:54.763406 systemd-logind[1442]: New session 14 of user core.
Jan 13 21:25:54.772456 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 13 21:25:54.910422 sshd[4053]: pam_unix(sshd:session): session closed for user core
Jan 13 21:25:54.922470 systemd[1]: sshd@13-10.0.0.97:22-10.0.0.1:52088.service: Deactivated successfully.
Jan 13 21:25:54.924795 systemd[1]: session-14.scope: Deactivated successfully.
Jan 13 21:25:54.928374 systemd-logind[1442]: Session 14 logged out. Waiting for processes to exit.
Jan 13 21:25:54.942117 systemd[1]: Started sshd@14-10.0.0.97:22-10.0.0.1:52090.service - OpenSSH per-connection server daemon (10.0.0.1:52090).
Jan 13 21:25:54.944992 systemd-logind[1442]: Removed session 14.
Jan 13 21:25:54.971186 sshd[4065]: Accepted publickey for core from 10.0.0.1 port 52090 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:25:54.972743 sshd[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:25:54.976769 systemd-logind[1442]: New session 15 of user core.
Jan 13 21:25:54.983397 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 13 21:25:55.086063 sshd[4065]: pam_unix(sshd:session): session closed for user core
Jan 13 21:25:55.090380 systemd[1]: sshd@14-10.0.0.97:22-10.0.0.1:52090.service: Deactivated successfully.
Jan 13 21:25:55.092449 systemd[1]: session-15.scope: Deactivated successfully.
Jan 13 21:25:55.093113 systemd-logind[1442]: Session 15 logged out. Waiting for processes to exit.
Jan 13 21:25:55.094123 systemd-logind[1442]: Removed session 15.
Jan 13 21:26:00.100288 systemd[1]: Started sshd@15-10.0.0.97:22-10.0.0.1:53102.service - OpenSSH per-connection server daemon (10.0.0.1:53102).
Jan 13 21:26:00.131626 sshd[4082]: Accepted publickey for core from 10.0.0.1 port 53102 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:26:00.133199 sshd[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:26:00.137127 systemd-logind[1442]: New session 16 of user core.
Jan 13 21:26:00.146423 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 13 21:26:00.249194 sshd[4082]: pam_unix(sshd:session): session closed for user core
Jan 13 21:26:00.253207 systemd[1]: sshd@15-10.0.0.97:22-10.0.0.1:53102.service: Deactivated successfully.
Jan 13 21:26:00.255496 systemd[1]: session-16.scope: Deactivated successfully.
Jan 13 21:26:00.256239 systemd-logind[1442]: Session 16 logged out. Waiting for processes to exit.
Jan 13 21:26:00.257125 systemd-logind[1442]: Removed session 16.
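From this point the log settles into a steady rhythm of short SSH sessions, each following the same open/close pattern across sshd, systemd-logind, and systemd. A sketch that pairs the systemd-logind "New session N" and "Removed session N" records to measure each session's lifetime; it assumes the journal has been exported one record per line (the hypothetical journal.txt again) and that all records fall on the same day:

import re
from datetime import datetime

# Pair systemd-logind "New session N" / "Removed session N" records.
STAMP = re.compile(r"^Jan 13 (\d\d:\d\d:\d\d\.\d+) .*?(New|Removed) session (\d+)")

opened, durations = {}, {}
with open("journal.txt") as fh:
    for line in fh:
        m = STAMP.match(line)
        if not m:
            continue
        t = datetime.strptime(m.group(1), "%H:%M:%S.%f")
        if m.group(2) == "New":
            opened[m.group(3)] = t
        elif m.group(3) in opened:
            durations[m.group(3)] = (t - opened.pop(m.group(3))).total_seconds()

for sid, secs in sorted(durations.items(), key=lambda kv: int(kv[0])):
    print(f"session {sid}: {secs:.3f}s")
# For the records above, sessions 13 through 16 each last well under a second.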
Jan 13 21:26:05.261704 systemd[1]: Started sshd@16-10.0.0.97:22-10.0.0.1:53114.service - OpenSSH per-connection server daemon (10.0.0.1:53114).
Jan 13 21:26:05.293634 sshd[4096]: Accepted publickey for core from 10.0.0.1 port 53114 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:26:05.295122 sshd[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:26:05.298991 systemd-logind[1442]: New session 17 of user core.
Jan 13 21:26:05.305420 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 13 21:26:05.416661 sshd[4096]: pam_unix(sshd:session): session closed for user core
Jan 13 21:26:05.421041 systemd[1]: sshd@16-10.0.0.97:22-10.0.0.1:53114.service: Deactivated successfully.
Jan 13 21:26:05.422927 systemd[1]: session-17.scope: Deactivated successfully.
Jan 13 21:26:05.423685 systemd-logind[1442]: Session 17 logged out. Waiting for processes to exit.
Jan 13 21:26:05.424499 systemd-logind[1442]: Removed session 17.
Jan 13 21:26:10.430493 systemd[1]: Started sshd@17-10.0.0.97:22-10.0.0.1:47324.service - OpenSSH per-connection server daemon (10.0.0.1:47324).
Jan 13 21:26:10.462989 sshd[4110]: Accepted publickey for core from 10.0.0.1 port 47324 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:26:10.465011 sshd[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:26:10.469455 systemd-logind[1442]: New session 18 of user core.
Jan 13 21:26:10.478496 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 13 21:26:10.593651 sshd[4110]: pam_unix(sshd:session): session closed for user core
Jan 13 21:26:10.609868 systemd[1]: sshd@17-10.0.0.97:22-10.0.0.1:47324.service: Deactivated successfully.
Jan 13 21:26:10.611798 systemd[1]: session-18.scope: Deactivated successfully.
Jan 13 21:26:10.613508 systemd-logind[1442]: Session 18 logged out. Waiting for processes to exit.
Jan 13 21:26:10.614876 systemd[1]: Started sshd@18-10.0.0.97:22-10.0.0.1:47332.service - OpenSSH per-connection server daemon (10.0.0.1:47332).
Jan 13 21:26:10.615902 systemd-logind[1442]: Removed session 18.
Jan 13 21:26:10.649077 sshd[4124]: Accepted publickey for core from 10.0.0.1 port 47332 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:26:10.650542 sshd[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:26:10.654375 systemd-logind[1442]: New session 19 of user core.
Jan 13 21:26:10.665429 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 21:26:10.955424 sshd[4124]: pam_unix(sshd:session): session closed for user core
Jan 13 21:26:10.972648 systemd[1]: sshd@18-10.0.0.97:22-10.0.0.1:47332.service: Deactivated successfully.
Jan 13 21:26:10.974893 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 21:26:10.977110 systemd-logind[1442]: Session 19 logged out. Waiting for processes to exit.
Jan 13 21:26:10.986765 systemd[1]: Started sshd@19-10.0.0.97:22-10.0.0.1:47346.service - OpenSSH per-connection server daemon (10.0.0.1:47346).
Jan 13 21:26:10.987917 systemd-logind[1442]: Removed session 19.
Jan 13 21:26:11.019115 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 47346 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:26:11.020938 sshd[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:26:11.025607 systemd-logind[1442]: New session 20 of user core.
Jan 13 21:26:11.035460 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 13 21:26:12.554801 sshd[4138]: pam_unix(sshd:session): session closed for user core
Jan 13 21:26:12.568009 systemd[1]: sshd@19-10.0.0.97:22-10.0.0.1:47346.service: Deactivated successfully.
Jan 13 21:26:12.570749 systemd[1]: session-20.scope: Deactivated successfully.
Jan 13 21:26:12.572617 systemd-logind[1442]: Session 20 logged out. Waiting for processes to exit.
Jan 13 21:26:12.579698 systemd[1]: Started sshd@20-10.0.0.97:22-10.0.0.1:47350.service - OpenSSH per-connection server daemon (10.0.0.1:47350).
Jan 13 21:26:12.581533 systemd-logind[1442]: Removed session 20.
Jan 13 21:26:12.612969 sshd[4159]: Accepted publickey for core from 10.0.0.1 port 47350 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:26:12.614688 sshd[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:26:12.618801 systemd-logind[1442]: New session 21 of user core.
Jan 13 21:26:12.625447 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 13 21:26:12.852886 sshd[4159]: pam_unix(sshd:session): session closed for user core
Jan 13 21:26:12.861965 systemd[1]: sshd@20-10.0.0.97:22-10.0.0.1:47350.service: Deactivated successfully.
Jan 13 21:26:12.863848 systemd[1]: session-21.scope: Deactivated successfully.
Jan 13 21:26:12.865612 systemd-logind[1442]: Session 21 logged out. Waiting for processes to exit.
Jan 13 21:26:12.873570 systemd[1]: Started sshd@21-10.0.0.97:22-10.0.0.1:47366.service - OpenSSH per-connection server daemon (10.0.0.1:47366).
Jan 13 21:26:12.874579 systemd-logind[1442]: Removed session 21.
Jan 13 21:26:12.901481 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 47366 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:26:12.903113 sshd[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:26:12.907752 systemd-logind[1442]: New session 22 of user core.
Jan 13 21:26:12.919585 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 13 21:26:13.033393 sshd[4171]: pam_unix(sshd:session): session closed for user core
Jan 13 21:26:13.037441 systemd[1]: sshd@21-10.0.0.97:22-10.0.0.1:47366.service: Deactivated successfully.
Jan 13 21:26:13.039366 systemd[1]: session-22.scope: Deactivated successfully.
Jan 13 21:26:13.040159 systemd-logind[1442]: Session 22 logged out. Waiting for processes to exit.
Jan 13 21:26:13.041081 systemd-logind[1442]: Removed session 22.
Jan 13 21:26:18.045634 systemd[1]: Started sshd@22-10.0.0.97:22-10.0.0.1:38690.service - OpenSSH per-connection server daemon (10.0.0.1:38690).
Jan 13 21:26:18.078460 sshd[4185]: Accepted publickey for core from 10.0.0.1 port 38690 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:26:18.080043 sshd[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:26:18.084708 systemd-logind[1442]: New session 23 of user core.
Jan 13 21:26:18.095395 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 13 21:26:18.196819 sshd[4185]: pam_unix(sshd:session): session closed for user core
Jan 13 21:26:18.200917 systemd[1]: sshd@22-10.0.0.97:22-10.0.0.1:38690.service: Deactivated successfully.
Jan 13 21:26:18.202964 systemd[1]: session-23.scope: Deactivated successfully.
Jan 13 21:26:18.203766 systemd-logind[1442]: Session 23 logged out. Waiting for processes to exit.
Jan 13 21:26:18.204587 systemd-logind[1442]: Removed session 23.
Jan 13 21:26:21.907696 kubelet[2592]: E0113 21:26:21.907630 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:23.210582 systemd[1]: Started sshd@23-10.0.0.97:22-10.0.0.1:38700.service - OpenSSH per-connection server daemon (10.0.0.1:38700). Jan 13 21:26:23.243965 sshd[4202]: Accepted publickey for core from 10.0.0.1 port 38700 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:26:23.245553 sshd[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:23.249623 systemd-logind[1442]: New session 24 of user core. Jan 13 21:26:23.259430 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 21:26:23.366701 sshd[4202]: pam_unix(sshd:session): session closed for user core Jan 13 21:26:23.371095 systemd[1]: sshd@23-10.0.0.97:22-10.0.0.1:38700.service: Deactivated successfully. Jan 13 21:26:23.373192 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 21:26:23.374123 systemd-logind[1442]: Session 24 logged out. Waiting for processes to exit. Jan 13 21:26:23.375170 systemd-logind[1442]: Removed session 24. Jan 13 21:26:23.907358 kubelet[2592]: E0113 21:26:23.907316 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:28.380694 systemd[1]: Started sshd@24-10.0.0.97:22-10.0.0.1:48834.service - OpenSSH per-connection server daemon (10.0.0.1:48834). Jan 13 21:26:28.413346 sshd[4218]: Accepted publickey for core from 10.0.0.1 port 48834 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:26:28.415373 sshd[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:28.420175 systemd-logind[1442]: New session 25 of user core. Jan 13 21:26:28.427451 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 13 21:26:28.544156 sshd[4218]: pam_unix(sshd:session): session closed for user core Jan 13 21:26:28.548635 systemd[1]: sshd@24-10.0.0.97:22-10.0.0.1:48834.service: Deactivated successfully. Jan 13 21:26:28.551398 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 21:26:28.552258 systemd-logind[1442]: Session 25 logged out. Waiting for processes to exit. Jan 13 21:26:28.553305 systemd-logind[1442]: Removed session 25. Jan 13 21:26:33.557037 systemd[1]: Started sshd@25-10.0.0.97:22-10.0.0.1:48844.service - OpenSSH per-connection server daemon (10.0.0.1:48844). Jan 13 21:26:33.589809 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 48844 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:26:33.591414 sshd[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:33.595391 systemd-logind[1442]: New session 26 of user core. Jan 13 21:26:33.609420 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 13 21:26:33.711297 sshd[4232]: pam_unix(sshd:session): session closed for user core Jan 13 21:26:33.719257 systemd[1]: sshd@25-10.0.0.97:22-10.0.0.1:48844.service: Deactivated successfully. Jan 13 21:26:33.721073 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 21:26:33.722556 systemd-logind[1442]: Session 26 logged out. Waiting for processes to exit. 
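The kubelet `dns.go:153` lines in this stretch are warnings rather than failures: glibc's resolver honors at most three `nameserver` entries (MAXNS), so when the node's resolv.conf lists more, kubelet trims the list it hands to pods and logs the line it actually applied (`1.1.1.1 1.0.0.1 8.8.8.8`). A rough sketch of that trimming, assuming a readable /etc/resolv.conf; the limit constant mirrors glibc, not any kubelet API:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS: entries beyond this are ignored

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded, applying: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		return
	}
	fmt.Printf("applying: %s\n", strings.Join(servers, " "))
}
```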
Jan 13 21:26:33.723961 systemd[1]: Started sshd@26-10.0.0.97:22-10.0.0.1:48860.service - OpenSSH per-connection server daemon (10.0.0.1:48860). Jan 13 21:26:33.724851 systemd-logind[1442]: Removed session 26. Jan 13 21:26:33.771158 sshd[4247]: Accepted publickey for core from 10.0.0.1 port 48860 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:26:33.772838 sshd[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:33.777121 systemd-logind[1442]: New session 27 of user core. Jan 13 21:26:33.785414 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 13 21:26:35.230247 containerd[1456]: time="2025-01-13T21:26:35.230073693Z" level=info msg="StopContainer for \"06a69d079013e21911132db013cdcb77fcd660ad2d6fe926d59f33d715812c82\" with timeout 30 (s)" Jan 13 21:26:35.231852 containerd[1456]: time="2025-01-13T21:26:35.231812057Z" level=info msg="Stop container \"06a69d079013e21911132db013cdcb77fcd660ad2d6fe926d59f33d715812c82\" with signal terminated" Jan 13 21:26:35.263604 systemd[1]: cri-containerd-06a69d079013e21911132db013cdcb77fcd660ad2d6fe926d59f33d715812c82.scope: Deactivated successfully. Jan 13 21:26:35.292137 containerd[1456]: time="2025-01-13T21:26:35.292074149Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:26:35.292537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06a69d079013e21911132db013cdcb77fcd660ad2d6fe926d59f33d715812c82-rootfs.mount: Deactivated successfully. Jan 13 21:26:35.292828 containerd[1456]: time="2025-01-13T21:26:35.292533534Z" level=info msg="StopContainer for \"c9ae9162b3ce5689ceff1ac3f9a43a414c90a9dfe5ddaba444d92631edc612b5\" with timeout 2 (s)" Jan 13 21:26:35.293361 containerd[1456]: time="2025-01-13T21:26:35.292855498Z" level=info msg="Stop container \"c9ae9162b3ce5689ceff1ac3f9a43a414c90a9dfe5ddaba444d92631edc612b5\" with signal terminated" Jan 13 21:26:35.299491 containerd[1456]: time="2025-01-13T21:26:35.299400646Z" level=info msg="shim disconnected" id=06a69d079013e21911132db013cdcb77fcd660ad2d6fe926d59f33d715812c82 namespace=k8s.io Jan 13 21:26:35.299491 containerd[1456]: time="2025-01-13T21:26:35.299466712Z" level=warning msg="cleaning up after shim disconnected" id=06a69d079013e21911132db013cdcb77fcd660ad2d6fe926d59f33d715812c82 namespace=k8s.io Jan 13 21:26:35.299491 containerd[1456]: time="2025-01-13T21:26:35.299474939Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:26:35.300037 systemd-networkd[1391]: lxc_health: Link DOWN Jan 13 21:26:35.300148 systemd-networkd[1391]: lxc_health: Lost carrier Jan 13 21:26:35.321095 containerd[1456]: time="2025-01-13T21:26:35.321029944Z" level=info msg="StopContainer for \"06a69d079013e21911132db013cdcb77fcd660ad2d6fe926d59f33d715812c82\" returns successfully" Jan 13 21:26:35.322138 containerd[1456]: time="2025-01-13T21:26:35.321895734Z" level=info msg="StopPodSandbox for \"b6fb409d4b59cead6fefb4864cdef2b9f6daea01db1ed067cd2f0bddc4bed6cc\"" Jan 13 21:26:35.322138 containerd[1456]: time="2025-01-13T21:26:35.321950719Z" level=info msg="Container to stop \"06a69d079013e21911132db013cdcb77fcd660ad2d6fe926d59f33d715812c82\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:26:35.324172 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-b6fb409d4b59cead6fefb4864cdef2b9f6daea01db1ed067cd2f0bddc4bed6cc-shm.mount: Deactivated successfully. Jan 13 21:26:35.326749 systemd[1]: cri-containerd-c9ae9162b3ce5689ceff1ac3f9a43a414c90a9dfe5ddaba444d92631edc612b5.scope: Deactivated successfully. Jan 13 21:26:35.327069 systemd[1]: cri-containerd-c9ae9162b3ce5689ceff1ac3f9a43a414c90a9dfe5ddaba444d92631edc612b5.scope: Consumed 7.586s CPU time. Jan 13 21:26:35.340534 systemd[1]: cri-containerd-b6fb409d4b59cead6fefb4864cdef2b9f6daea01db1ed067cd2f0bddc4bed6cc.scope: Deactivated successfully. Jan 13 21:26:35.349368 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9ae9162b3ce5689ceff1ac3f9a43a414c90a9dfe5ddaba444d92631edc612b5-rootfs.mount: Deactivated successfully. Jan 13 21:26:35.359057 containerd[1456]: time="2025-01-13T21:26:35.358811385Z" level=info msg="shim disconnected" id=c9ae9162b3ce5689ceff1ac3f9a43a414c90a9dfe5ddaba444d92631edc612b5 namespace=k8s.io Jan 13 21:26:35.359057 containerd[1456]: time="2025-01-13T21:26:35.358884835Z" level=warning msg="cleaning up after shim disconnected" id=c9ae9162b3ce5689ceff1ac3f9a43a414c90a9dfe5ddaba444d92631edc612b5 namespace=k8s.io Jan 13 21:26:35.359057 containerd[1456]: time="2025-01-13T21:26:35.358896017Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:26:35.362600 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6fb409d4b59cead6fefb4864cdef2b9f6daea01db1ed067cd2f0bddc4bed6cc-rootfs.mount: Deactivated successfully. Jan 13 21:26:35.367844 containerd[1456]: time="2025-01-13T21:26:35.367766399Z" level=info msg="shim disconnected" id=b6fb409d4b59cead6fefb4864cdef2b9f6daea01db1ed067cd2f0bddc4bed6cc namespace=k8s.io Jan 13 21:26:35.367844 containerd[1456]: time="2025-01-13T21:26:35.367839337Z" level=warning msg="cleaning up after shim disconnected" id=b6fb409d4b59cead6fefb4864cdef2b9f6daea01db1ed067cd2f0bddc4bed6cc namespace=k8s.io Jan 13 21:26:35.368130 containerd[1456]: time="2025-01-13T21:26:35.367852322Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:26:35.379669 containerd[1456]: time="2025-01-13T21:26:35.379615790Z" level=info msg="StopContainer for \"c9ae9162b3ce5689ceff1ac3f9a43a414c90a9dfe5ddaba444d92631edc612b5\" returns successfully" Jan 13 21:26:35.380309 containerd[1456]: time="2025-01-13T21:26:35.380159206Z" level=info msg="StopPodSandbox for \"2bcfce5dc124f1fd7015dc7a12bd6731a281de6fc0099fb4faf3a9065671051a\"" Jan 13 21:26:35.380309 containerd[1456]: time="2025-01-13T21:26:35.380198901Z" level=info msg="Container to stop \"fd13fce245661f88958be8e5905a2fd902fd0b228f6208fd8e0d483406e88a32\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:26:35.380309 containerd[1456]: time="2025-01-13T21:26:35.380214181Z" level=info msg="Container to stop \"b4a123534772af74e25bece5a7cfafe1d25dc98064b1838e4f1517d5b011c2b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:26:35.380309 containerd[1456]: time="2025-01-13T21:26:35.380227105Z" level=info msg="Container to stop \"ddaec9fcdfdb985017994f3e57fbceaca9657d6da85ae94e4fb88e8bf59028a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:26:35.380309 containerd[1456]: time="2025-01-13T21:26:35.380240150Z" level=info msg="Container to stop \"0ab753ebec55fbd41864a91339588ca8fa796415ff8046a07a984d247b1f9c4d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:26:35.380473 containerd[1456]: 
time="2025-01-13T21:26:35.380338277Z" level=info msg="Container to stop \"c9ae9162b3ce5689ceff1ac3f9a43a414c90a9dfe5ddaba444d92631edc612b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:26:35.388867 systemd[1]: cri-containerd-2bcfce5dc124f1fd7015dc7a12bd6731a281de6fc0099fb4faf3a9065671051a.scope: Deactivated successfully. Jan 13 21:26:35.394866 containerd[1456]: time="2025-01-13T21:26:35.394811501Z" level=info msg="TearDown network for sandbox \"b6fb409d4b59cead6fefb4864cdef2b9f6daea01db1ed067cd2f0bddc4bed6cc\" successfully" Jan 13 21:26:35.394866 containerd[1456]: time="2025-01-13T21:26:35.394854653Z" level=info msg="StopPodSandbox for \"b6fb409d4b59cead6fefb4864cdef2b9f6daea01db1ed067cd2f0bddc4bed6cc\" returns successfully" Jan 13 21:26:35.413982 containerd[1456]: time="2025-01-13T21:26:35.413891657Z" level=info msg="shim disconnected" id=2bcfce5dc124f1fd7015dc7a12bd6731a281de6fc0099fb4faf3a9065671051a namespace=k8s.io Jan 13 21:26:35.413982 containerd[1456]: time="2025-01-13T21:26:35.413974886Z" level=warning msg="cleaning up after shim disconnected" id=2bcfce5dc124f1fd7015dc7a12bd6731a281de6fc0099fb4faf3a9065671051a namespace=k8s.io Jan 13 21:26:35.414227 containerd[1456]: time="2025-01-13T21:26:35.413994304Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:26:35.443968 containerd[1456]: time="2025-01-13T21:26:35.443905019Z" level=info msg="TearDown network for sandbox \"2bcfce5dc124f1fd7015dc7a12bd6731a281de6fc0099fb4faf3a9065671051a\" successfully" Jan 13 21:26:35.443968 containerd[1456]: time="2025-01-13T21:26:35.443953301Z" level=info msg="StopPodSandbox for \"2bcfce5dc124f1fd7015dc7a12bd6731a281de6fc0099fb4faf3a9065671051a\" returns successfully" Jan 13 21:26:35.571165 kubelet[2592]: I0113 21:26:35.571125 2592 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-bpf-maps\") pod \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\" (UID: \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\") " Jan 13 21:26:35.571165 kubelet[2592]: I0113 21:26:35.571164 2592 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt9pf\" (UniqueName: \"kubernetes.io/projected/ddd16cbf-ae90-4f59-8bc9-9d985298a540-kube-api-access-vt9pf\") pod \"ddd16cbf-ae90-4f59-8bc9-9d985298a540\" (UID: \"ddd16cbf-ae90-4f59-8bc9-9d985298a540\") " Jan 13 21:26:35.571701 kubelet[2592]: I0113 21:26:35.571185 2592 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-clustermesh-secrets\") pod \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\" (UID: \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\") " Jan 13 21:26:35.571701 kubelet[2592]: I0113 21:26:35.571199 2592 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-host-proc-sys-kernel\") pod \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\" (UID: \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\") " Jan 13 21:26:35.571701 kubelet[2592]: I0113 21:26:35.571214 2592 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-cilium-config-path\") pod \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\" (UID: \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\") " Jan 13 21:26:35.571701 
kubelet[2592]: I0113 21:26:35.571230 2592 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ddd16cbf-ae90-4f59-8bc9-9d985298a540-cilium-config-path\") pod \"ddd16cbf-ae90-4f59-8bc9-9d985298a540\" (UID: \"ddd16cbf-ae90-4f59-8bc9-9d985298a540\") " Jan 13 21:26:35.571701 kubelet[2592]: I0113 21:26:35.571244 2592 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-lib-modules\") pod \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\" (UID: \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\") " Jan 13 21:26:35.571701 kubelet[2592]: I0113 21:26:35.571258 2592 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-host-proc-sys-net\") pod \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\" (UID: \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\") " Jan 13 21:26:35.571853 kubelet[2592]: I0113 21:26:35.571286 2592 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-cilium-run\") pod \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\" (UID: \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\") " Jan 13 21:26:35.571853 kubelet[2592]: I0113 21:26:35.571302 2592 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-hubble-tls\") pod \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\" (UID: \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\") " Jan 13 21:26:35.571853 kubelet[2592]: I0113 21:26:35.571314 2592 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-etc-cni-netd\") pod \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\" (UID: \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\") " Jan 13 21:26:35.571853 kubelet[2592]: I0113 21:26:35.571299 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "70c4a7ea-eb2f-4033-9c80-57a860d03e2f" (UID: "70c4a7ea-eb2f-4033-9c80-57a860d03e2f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:35.571853 kubelet[2592]: I0113 21:26:35.571358 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-cni-path" (OuterVolumeSpecName: "cni-path") pod "70c4a7ea-eb2f-4033-9c80-57a860d03e2f" (UID: "70c4a7ea-eb2f-4033-9c80-57a860d03e2f"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:35.571853 kubelet[2592]: I0113 21:26:35.571330 2592 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-cni-path\") pod \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\" (UID: \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\") " Jan 13 21:26:35.571999 kubelet[2592]: I0113 21:26:35.571433 2592 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-cilium-cgroup\") pod \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\" (UID: \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\") " Jan 13 21:26:35.571999 kubelet[2592]: I0113 21:26:35.571459 2592 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-hostproc\") pod \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\" (UID: \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\") " Jan 13 21:26:35.571999 kubelet[2592]: I0113 21:26:35.571488 2592 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk6r5\" (UniqueName: \"kubernetes.io/projected/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-kube-api-access-nk6r5\") pod \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\" (UID: \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\") " Jan 13 21:26:35.571999 kubelet[2592]: I0113 21:26:35.571511 2592 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-xtables-lock\") pod \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\" (UID: \"70c4a7ea-eb2f-4033-9c80-57a860d03e2f\") " Jan 13 21:26:35.571999 kubelet[2592]: I0113 21:26:35.571565 2592 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 13 21:26:35.571999 kubelet[2592]: I0113 21:26:35.571579 2592 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 13 21:26:35.572136 kubelet[2592]: I0113 21:26:35.571606 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "70c4a7ea-eb2f-4033-9c80-57a860d03e2f" (UID: "70c4a7ea-eb2f-4033-9c80-57a860d03e2f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:35.572136 kubelet[2592]: I0113 21:26:35.571615 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "70c4a7ea-eb2f-4033-9c80-57a860d03e2f" (UID: "70c4a7ea-eb2f-4033-9c80-57a860d03e2f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:35.572136 kubelet[2592]: I0113 21:26:35.571633 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "70c4a7ea-eb2f-4033-9c80-57a860d03e2f" (UID: "70c4a7ea-eb2f-4033-9c80-57a860d03e2f"). 
InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:35.572136 kubelet[2592]: I0113 21:26:35.571640 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "70c4a7ea-eb2f-4033-9c80-57a860d03e2f" (UID: "70c4a7ea-eb2f-4033-9c80-57a860d03e2f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:35.572136 kubelet[2592]: I0113 21:26:35.571656 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-hostproc" (OuterVolumeSpecName: "hostproc") pod "70c4a7ea-eb2f-4033-9c80-57a860d03e2f" (UID: "70c4a7ea-eb2f-4033-9c80-57a860d03e2f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:35.574633 kubelet[2592]: I0113 21:26:35.574605 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "70c4a7ea-eb2f-4033-9c80-57a860d03e2f" (UID: "70c4a7ea-eb2f-4033-9c80-57a860d03e2f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:35.574795 kubelet[2592]: I0113 21:26:35.574639 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "70c4a7ea-eb2f-4033-9c80-57a860d03e2f" (UID: "70c4a7ea-eb2f-4033-9c80-57a860d03e2f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:35.575088 kubelet[2592]: I0113 21:26:35.574991 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "70c4a7ea-eb2f-4033-9c80-57a860d03e2f" (UID: "70c4a7ea-eb2f-4033-9c80-57a860d03e2f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 21:26:35.575088 kubelet[2592]: I0113 21:26:35.575066 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "70c4a7ea-eb2f-4033-9c80-57a860d03e2f" (UID: "70c4a7ea-eb2f-4033-9c80-57a860d03e2f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:26:35.575569 kubelet[2592]: I0113 21:26:35.575550 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-kube-api-access-nk6r5" (OuterVolumeSpecName: "kube-api-access-nk6r5") pod "70c4a7ea-eb2f-4033-9c80-57a860d03e2f" (UID: "70c4a7ea-eb2f-4033-9c80-57a860d03e2f"). InnerVolumeSpecName "kube-api-access-nk6r5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:26:35.575846 kubelet[2592]: I0113 21:26:35.575772 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddd16cbf-ae90-4f59-8bc9-9d985298a540-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ddd16cbf-ae90-4f59-8bc9-9d985298a540" (UID: "ddd16cbf-ae90-4f59-8bc9-9d985298a540"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:26:35.576870 kubelet[2592]: I0113 21:26:35.576827 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddd16cbf-ae90-4f59-8bc9-9d985298a540-kube-api-access-vt9pf" (OuterVolumeSpecName: "kube-api-access-vt9pf") pod "ddd16cbf-ae90-4f59-8bc9-9d985298a540" (UID: "ddd16cbf-ae90-4f59-8bc9-9d985298a540"). InnerVolumeSpecName "kube-api-access-vt9pf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:26:35.577368 kubelet[2592]: I0113 21:26:35.577222 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "70c4a7ea-eb2f-4033-9c80-57a860d03e2f" (UID: "70c4a7ea-eb2f-4033-9c80-57a860d03e2f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:26:35.578832 kubelet[2592]: I0113 21:26:35.578803 2592 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "70c4a7ea-eb2f-4033-9c80-57a860d03e2f" (UID: "70c4a7ea-eb2f-4033-9c80-57a860d03e2f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:26:35.672323 kubelet[2592]: I0113 21:26:35.672260 2592 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ddd16cbf-ae90-4f59-8bc9-9d985298a540-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 13 21:26:35.672323 kubelet[2592]: I0113 21:26:35.672308 2592 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 13 21:26:35.672323 kubelet[2592]: I0113 21:26:35.672319 2592 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 13 21:26:35.672323 kubelet[2592]: I0113 21:26:35.672327 2592 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 13 21:26:35.672323 kubelet[2592]: I0113 21:26:35.672335 2592 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 13 21:26:35.672323 kubelet[2592]: I0113 21:26:35.672343 2592 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 13 21:26:35.672597 kubelet[2592]: I0113 21:26:35.672354 2592 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 13 21:26:35.672597 kubelet[2592]: I0113 21:26:35.672362 2592 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-xtables-lock\") on node 
\"localhost\" DevicePath \"\"" Jan 13 21:26:35.672597 kubelet[2592]: I0113 21:26:35.672370 2592 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 13 21:26:35.672597 kubelet[2592]: I0113 21:26:35.672379 2592 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-nk6r5\" (UniqueName: \"kubernetes.io/projected/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-kube-api-access-nk6r5\") on node \"localhost\" DevicePath \"\"" Jan 13 21:26:35.672597 kubelet[2592]: I0113 21:26:35.672387 2592 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 13 21:26:35.672597 kubelet[2592]: I0113 21:26:35.672395 2592 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 13 21:26:35.672597 kubelet[2592]: I0113 21:26:35.672403 2592 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70c4a7ea-eb2f-4033-9c80-57a860d03e2f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 13 21:26:35.672597 kubelet[2592]: I0113 21:26:35.672411 2592 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vt9pf\" (UniqueName: \"kubernetes.io/projected/ddd16cbf-ae90-4f59-8bc9-9d985298a540-kube-api-access-vt9pf\") on node \"localhost\" DevicePath \"\"" Jan 13 21:26:35.958672 kubelet[2592]: E0113 21:26:35.958537 2592 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 21:26:36.190989 kubelet[2592]: I0113 21:26:36.190958 2592 scope.go:117] "RemoveContainer" containerID="c9ae9162b3ce5689ceff1ac3f9a43a414c90a9dfe5ddaba444d92631edc612b5" Jan 13 21:26:36.191945 containerd[1456]: time="2025-01-13T21:26:36.191898829Z" level=info msg="RemoveContainer for \"c9ae9162b3ce5689ceff1ac3f9a43a414c90a9dfe5ddaba444d92631edc612b5\"" Jan 13 21:26:36.197330 systemd[1]: Removed slice kubepods-burstable-pod70c4a7ea_eb2f_4033_9c80_57a860d03e2f.slice - libcontainer container kubepods-burstable-pod70c4a7ea_eb2f_4033_9c80_57a860d03e2f.slice. Jan 13 21:26:36.197448 systemd[1]: kubepods-burstable-pod70c4a7ea_eb2f_4033_9c80_57a860d03e2f.slice: Consumed 7.688s CPU time. Jan 13 21:26:36.201719 systemd[1]: Removed slice kubepods-besteffort-podddd16cbf_ae90_4f59_8bc9_9d985298a540.slice - libcontainer container kubepods-besteffort-podddd16cbf_ae90_4f59_8bc9_9d985298a540.slice. Jan 13 21:26:36.267129 systemd[1]: var-lib-kubelet-pods-ddd16cbf\x2dae90\x2d4f59\x2d8bc9\x2d9d985298a540-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvt9pf.mount: Deactivated successfully. Jan 13 21:26:36.267291 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2bcfce5dc124f1fd7015dc7a12bd6731a281de6fc0099fb4faf3a9065671051a-rootfs.mount: Deactivated successfully. Jan 13 21:26:36.267390 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2bcfce5dc124f1fd7015dc7a12bd6731a281de6fc0099fb4faf3a9065671051a-shm.mount: Deactivated successfully. 
Jan 13 21:26:36.267496 systemd[1]: var-lib-kubelet-pods-70c4a7ea\x2deb2f\x2d4033\x2d9c80\x2d57a860d03e2f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnk6r5.mount: Deactivated successfully. Jan 13 21:26:36.267597 systemd[1]: var-lib-kubelet-pods-70c4a7ea\x2deb2f\x2d4033\x2d9c80\x2d57a860d03e2f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 21:26:36.267695 systemd[1]: var-lib-kubelet-pods-70c4a7ea\x2deb2f\x2d4033\x2d9c80\x2d57a860d03e2f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 21:26:36.269517 containerd[1456]: time="2025-01-13T21:26:36.269471684Z" level=info msg="RemoveContainer for \"c9ae9162b3ce5689ceff1ac3f9a43a414c90a9dfe5ddaba444d92631edc612b5\" returns successfully" Jan 13 21:26:36.269915 kubelet[2592]: I0113 21:26:36.269855 2592 scope.go:117] "RemoveContainer" containerID="0ab753ebec55fbd41864a91339588ca8fa796415ff8046a07a984d247b1f9c4d" Jan 13 21:26:36.270961 containerd[1456]: time="2025-01-13T21:26:36.270934803Z" level=info msg="RemoveContainer for \"0ab753ebec55fbd41864a91339588ca8fa796415ff8046a07a984d247b1f9c4d\"" Jan 13 21:26:36.336629 containerd[1456]: time="2025-01-13T21:26:36.336566542Z" level=info msg="RemoveContainer for \"0ab753ebec55fbd41864a91339588ca8fa796415ff8046a07a984d247b1f9c4d\" returns successfully" Jan 13 21:26:36.336870 kubelet[2592]: I0113 21:26:36.336835 2592 scope.go:117] "RemoveContainer" containerID="b4a123534772af74e25bece5a7cfafe1d25dc98064b1838e4f1517d5b011c2b6" Jan 13 21:26:36.337993 containerd[1456]: time="2025-01-13T21:26:36.337957262Z" level=info msg="RemoveContainer for \"b4a123534772af74e25bece5a7cfafe1d25dc98064b1838e4f1517d5b011c2b6\"" Jan 13 21:26:36.419466 containerd[1456]: time="2025-01-13T21:26:36.419398481Z" level=info msg="RemoveContainer for \"b4a123534772af74e25bece5a7cfafe1d25dc98064b1838e4f1517d5b011c2b6\" returns successfully" Jan 13 21:26:36.419761 kubelet[2592]: I0113 21:26:36.419696 2592 scope.go:117] "RemoveContainer" containerID="ddaec9fcdfdb985017994f3e57fbceaca9657d6da85ae94e4fb88e8bf59028a4" Jan 13 21:26:36.421732 containerd[1456]: time="2025-01-13T21:26:36.421358857Z" level=info msg="RemoveContainer for \"ddaec9fcdfdb985017994f3e57fbceaca9657d6da85ae94e4fb88e8bf59028a4\"" Jan 13 21:26:36.479668 containerd[1456]: time="2025-01-13T21:26:36.479617745Z" level=info msg="RemoveContainer for \"ddaec9fcdfdb985017994f3e57fbceaca9657d6da85ae94e4fb88e8bf59028a4\" returns successfully" Jan 13 21:26:36.479950 kubelet[2592]: I0113 21:26:36.479907 2592 scope.go:117] "RemoveContainer" containerID="fd13fce245661f88958be8e5905a2fd902fd0b228f6208fd8e0d483406e88a32" Jan 13 21:26:36.480955 containerd[1456]: time="2025-01-13T21:26:36.480910790Z" level=info msg="RemoveContainer for \"fd13fce245661f88958be8e5905a2fd902fd0b228f6208fd8e0d483406e88a32\"" Jan 13 21:26:36.500679 containerd[1456]: time="2025-01-13T21:26:36.500633457Z" level=info msg="RemoveContainer for \"fd13fce245661f88958be8e5905a2fd902fd0b228f6208fd8e0d483406e88a32\" returns successfully" Jan 13 21:26:36.500833 kubelet[2592]: I0113 21:26:36.500804 2592 scope.go:117] "RemoveContainer" containerID="c9ae9162b3ce5689ceff1ac3f9a43a414c90a9dfe5ddaba444d92631edc612b5" Jan 13 21:26:36.503702 containerd[1456]: time="2025-01-13T21:26:36.503635007Z" level=error msg="ContainerStatus for \"c9ae9162b3ce5689ceff1ac3f9a43a414c90a9dfe5ddaba444d92631edc612b5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"c9ae9162b3ce5689ceff1ac3f9a43a414c90a9dfe5ddaba444d92631edc612b5\": not found" Jan 13 21:26:36.503856 kubelet[2592]: E0113 21:26:36.503823 2592 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c9ae9162b3ce5689ceff1ac3f9a43a414c90a9dfe5ddaba444d92631edc612b5\": not found" containerID="c9ae9162b3ce5689ceff1ac3f9a43a414c90a9dfe5ddaba444d92631edc612b5" Jan 13 21:26:36.503937 kubelet[2592]: I0113 21:26:36.503862 2592 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c9ae9162b3ce5689ceff1ac3f9a43a414c90a9dfe5ddaba444d92631edc612b5"} err="failed to get container status \"c9ae9162b3ce5689ceff1ac3f9a43a414c90a9dfe5ddaba444d92631edc612b5\": rpc error: code = NotFound desc = an error occurred when try to find container \"c9ae9162b3ce5689ceff1ac3f9a43a414c90a9dfe5ddaba444d92631edc612b5\": not found" Jan 13 21:26:36.503976 kubelet[2592]: I0113 21:26:36.503936 2592 scope.go:117] "RemoveContainer" containerID="0ab753ebec55fbd41864a91339588ca8fa796415ff8046a07a984d247b1f9c4d" Jan 13 21:26:36.504143 containerd[1456]: time="2025-01-13T21:26:36.504106194Z" level=error msg="ContainerStatus for \"0ab753ebec55fbd41864a91339588ca8fa796415ff8046a07a984d247b1f9c4d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ab753ebec55fbd41864a91339588ca8fa796415ff8046a07a984d247b1f9c4d\": not found" Jan 13 21:26:36.504244 kubelet[2592]: E0113 21:26:36.504221 2592 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ab753ebec55fbd41864a91339588ca8fa796415ff8046a07a984d247b1f9c4d\": not found" containerID="0ab753ebec55fbd41864a91339588ca8fa796415ff8046a07a984d247b1f9c4d" Jan 13 21:26:36.504244 kubelet[2592]: I0113 21:26:36.504238 2592 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0ab753ebec55fbd41864a91339588ca8fa796415ff8046a07a984d247b1f9c4d"} err="failed to get container status \"0ab753ebec55fbd41864a91339588ca8fa796415ff8046a07a984d247b1f9c4d\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ab753ebec55fbd41864a91339588ca8fa796415ff8046a07a984d247b1f9c4d\": not found" Jan 13 21:26:36.504352 kubelet[2592]: I0113 21:26:36.504259 2592 scope.go:117] "RemoveContainer" containerID="b4a123534772af74e25bece5a7cfafe1d25dc98064b1838e4f1517d5b011c2b6" Jan 13 21:26:36.504529 containerd[1456]: time="2025-01-13T21:26:36.504495627Z" level=error msg="ContainerStatus for \"b4a123534772af74e25bece5a7cfafe1d25dc98064b1838e4f1517d5b011c2b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b4a123534772af74e25bece5a7cfafe1d25dc98064b1838e4f1517d5b011c2b6\": not found" Jan 13 21:26:36.504635 kubelet[2592]: E0113 21:26:36.504593 2592 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b4a123534772af74e25bece5a7cfafe1d25dc98064b1838e4f1517d5b011c2b6\": not found" containerID="b4a123534772af74e25bece5a7cfafe1d25dc98064b1838e4f1517d5b011c2b6" Jan 13 21:26:36.504674 kubelet[2592]: I0113 21:26:36.504637 2592 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b4a123534772af74e25bece5a7cfafe1d25dc98064b1838e4f1517d5b011c2b6"} err="failed to get container status 
\"b4a123534772af74e25bece5a7cfafe1d25dc98064b1838e4f1517d5b011c2b6\": rpc error: code = NotFound desc = an error occurred when try to find container \"b4a123534772af74e25bece5a7cfafe1d25dc98064b1838e4f1517d5b011c2b6\": not found" Jan 13 21:26:36.504674 kubelet[2592]: I0113 21:26:36.504648 2592 scope.go:117] "RemoveContainer" containerID="ddaec9fcdfdb985017994f3e57fbceaca9657d6da85ae94e4fb88e8bf59028a4" Jan 13 21:26:36.504827 containerd[1456]: time="2025-01-13T21:26:36.504800948Z" level=error msg="ContainerStatus for \"ddaec9fcdfdb985017994f3e57fbceaca9657d6da85ae94e4fb88e8bf59028a4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ddaec9fcdfdb985017994f3e57fbceaca9657d6da85ae94e4fb88e8bf59028a4\": not found" Jan 13 21:26:36.504914 kubelet[2592]: E0113 21:26:36.504888 2592 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ddaec9fcdfdb985017994f3e57fbceaca9657d6da85ae94e4fb88e8bf59028a4\": not found" containerID="ddaec9fcdfdb985017994f3e57fbceaca9657d6da85ae94e4fb88e8bf59028a4" Jan 13 21:26:36.504947 kubelet[2592]: I0113 21:26:36.504917 2592 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ddaec9fcdfdb985017994f3e57fbceaca9657d6da85ae94e4fb88e8bf59028a4"} err="failed to get container status \"ddaec9fcdfdb985017994f3e57fbceaca9657d6da85ae94e4fb88e8bf59028a4\": rpc error: code = NotFound desc = an error occurred when try to find container \"ddaec9fcdfdb985017994f3e57fbceaca9657d6da85ae94e4fb88e8bf59028a4\": not found" Jan 13 21:26:36.504947 kubelet[2592]: I0113 21:26:36.504931 2592 scope.go:117] "RemoveContainer" containerID="fd13fce245661f88958be8e5905a2fd902fd0b228f6208fd8e0d483406e88a32" Jan 13 21:26:36.505103 containerd[1456]: time="2025-01-13T21:26:36.505078247Z" level=error msg="ContainerStatus for \"fd13fce245661f88958be8e5905a2fd902fd0b228f6208fd8e0d483406e88a32\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fd13fce245661f88958be8e5905a2fd902fd0b228f6208fd8e0d483406e88a32\": not found" Jan 13 21:26:36.505215 kubelet[2592]: E0113 21:26:36.505191 2592 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fd13fce245661f88958be8e5905a2fd902fd0b228f6208fd8e0d483406e88a32\": not found" containerID="fd13fce245661f88958be8e5905a2fd902fd0b228f6208fd8e0d483406e88a32" Jan 13 21:26:36.505215 kubelet[2592]: I0113 21:26:36.505210 2592 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fd13fce245661f88958be8e5905a2fd902fd0b228f6208fd8e0d483406e88a32"} err="failed to get container status \"fd13fce245661f88958be8e5905a2fd902fd0b228f6208fd8e0d483406e88a32\": rpc error: code = NotFound desc = an error occurred when try to find container \"fd13fce245661f88958be8e5905a2fd902fd0b228f6208fd8e0d483406e88a32\": not found" Jan 13 21:26:36.505320 kubelet[2592]: I0113 21:26:36.505222 2592 scope.go:117] "RemoveContainer" containerID="06a69d079013e21911132db013cdcb77fcd660ad2d6fe926d59f33d715812c82" Jan 13 21:26:36.506022 containerd[1456]: time="2025-01-13T21:26:36.506000886Z" level=info msg="RemoveContainer for \"06a69d079013e21911132db013cdcb77fcd660ad2d6fe926d59f33d715812c82\"" Jan 13 21:26:36.509871 containerd[1456]: time="2025-01-13T21:26:36.509843348Z" level=info msg="RemoveContainer for 
\"06a69d079013e21911132db013cdcb77fcd660ad2d6fe926d59f33d715812c82\" returns successfully" Jan 13 21:26:36.510028 kubelet[2592]: I0113 21:26:36.509974 2592 scope.go:117] "RemoveContainer" containerID="06a69d079013e21911132db013cdcb77fcd660ad2d6fe926d59f33d715812c82" Jan 13 21:26:36.516831 containerd[1456]: time="2025-01-13T21:26:36.516797702Z" level=error msg="ContainerStatus for \"06a69d079013e21911132db013cdcb77fcd660ad2d6fe926d59f33d715812c82\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"06a69d079013e21911132db013cdcb77fcd660ad2d6fe926d59f33d715812c82\": not found" Jan 13 21:26:36.516935 kubelet[2592]: E0113 21:26:36.516912 2592 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"06a69d079013e21911132db013cdcb77fcd660ad2d6fe926d59f33d715812c82\": not found" containerID="06a69d079013e21911132db013cdcb77fcd660ad2d6fe926d59f33d715812c82" Jan 13 21:26:36.516980 kubelet[2592]: I0113 21:26:36.516937 2592 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"06a69d079013e21911132db013cdcb77fcd660ad2d6fe926d59f33d715812c82"} err="failed to get container status \"06a69d079013e21911132db013cdcb77fcd660ad2d6fe926d59f33d715812c82\": rpc error: code = NotFound desc = an error occurred when try to find container \"06a69d079013e21911132db013cdcb77fcd660ad2d6fe926d59f33d715812c82\": not found" Jan 13 21:26:36.909251 kubelet[2592]: I0113 21:26:36.909208 2592 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70c4a7ea-eb2f-4033-9c80-57a860d03e2f" path="/var/lib/kubelet/pods/70c4a7ea-eb2f-4033-9c80-57a860d03e2f/volumes" Jan 13 21:26:36.910148 kubelet[2592]: I0113 21:26:36.910125 2592 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddd16cbf-ae90-4f59-8bc9-9d985298a540" path="/var/lib/kubelet/pods/ddd16cbf-ae90-4f59-8bc9-9d985298a540/volumes" Jan 13 21:26:37.196226 sshd[4247]: pam_unix(sshd:session): session closed for user core Jan 13 21:26:37.203219 systemd[1]: sshd@26-10.0.0.97:22-10.0.0.1:48860.service: Deactivated successfully. Jan 13 21:26:37.205145 systemd[1]: session-27.scope: Deactivated successfully. Jan 13 21:26:37.206618 systemd-logind[1442]: Session 27 logged out. Waiting for processes to exit. Jan 13 21:26:37.220566 systemd[1]: Started sshd@27-10.0.0.97:22-10.0.0.1:48862.service - OpenSSH per-connection server daemon (10.0.0.1:48862). Jan 13 21:26:37.221520 systemd-logind[1442]: Removed session 27. Jan 13 21:26:37.252732 sshd[4409]: Accepted publickey for core from 10.0.0.1 port 48862 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:26:37.254456 sshd[4409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:37.258665 systemd-logind[1442]: New session 28 of user core. Jan 13 21:26:37.270520 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 13 21:26:37.765467 sshd[4409]: pam_unix(sshd:session): session closed for user core Jan 13 21:26:37.776846 kubelet[2592]: I0113 21:26:37.776012 2592 topology_manager.go:215] "Topology Admit Handler" podUID="f688e3a1-1758-447a-b733-1591824eee6e" podNamespace="kube-system" podName="cilium-r6m4v" Jan 13 21:26:37.776846 kubelet[2592]: E0113 21:26:37.776083 2592 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="70c4a7ea-eb2f-4033-9c80-57a860d03e2f" containerName="mount-cgroup" Jan 13 21:26:37.776846 kubelet[2592]: E0113 21:26:37.776096 2592 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="70c4a7ea-eb2f-4033-9c80-57a860d03e2f" containerName="apply-sysctl-overwrites" Jan 13 21:26:37.776846 kubelet[2592]: E0113 21:26:37.776104 2592 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="70c4a7ea-eb2f-4033-9c80-57a860d03e2f" containerName="clean-cilium-state" Jan 13 21:26:37.776846 kubelet[2592]: E0113 21:26:37.776111 2592 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ddd16cbf-ae90-4f59-8bc9-9d985298a540" containerName="cilium-operator" Jan 13 21:26:37.776846 kubelet[2592]: E0113 21:26:37.776119 2592 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="70c4a7ea-eb2f-4033-9c80-57a860d03e2f" containerName="cilium-agent" Jan 13 21:26:37.776846 kubelet[2592]: E0113 21:26:37.776128 2592 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="70c4a7ea-eb2f-4033-9c80-57a860d03e2f" containerName="mount-bpf-fs" Jan 13 21:26:37.776846 kubelet[2592]: I0113 21:26:37.776154 2592 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddd16cbf-ae90-4f59-8bc9-9d985298a540" containerName="cilium-operator" Jan 13 21:26:37.776846 kubelet[2592]: I0113 21:26:37.776164 2592 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c4a7ea-eb2f-4033-9c80-57a860d03e2f" containerName="cilium-agent" Jan 13 21:26:37.776562 systemd[1]: sshd@27-10.0.0.97:22-10.0.0.1:48862.service: Deactivated successfully. Jan 13 21:26:37.780856 systemd[1]: session-28.scope: Deactivated successfully. Jan 13 21:26:37.783620 systemd-logind[1442]: Session 28 logged out. Waiting for processes to exit. Jan 13 21:26:37.796020 systemd[1]: Started sshd@28-10.0.0.97:22-10.0.0.1:57268.service - OpenSSH per-connection server daemon (10.0.0.1:57268). Jan 13 21:26:37.800700 systemd-logind[1442]: Removed session 28. Jan 13 21:26:37.807688 systemd[1]: Created slice kubepods-burstable-podf688e3a1_1758_447a_b733_1591824eee6e.slice - libcontainer container kubepods-burstable-podf688e3a1_1758_447a_b733_1591824eee6e.slice. Jan 13 21:26:37.825358 sshd[4423]: Accepted publickey for core from 10.0.0.1 port 57268 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:26:37.826918 sshd[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:37.830799 systemd-logind[1442]: New session 29 of user core. Jan 13 21:26:37.840433 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jan 13 21:26:37.885333 kubelet[2592]: I0113 21:26:37.885293 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f688e3a1-1758-447a-b733-1591824eee6e-bpf-maps\") pod \"cilium-r6m4v\" (UID: \"f688e3a1-1758-447a-b733-1591824eee6e\") " pod="kube-system/cilium-r6m4v" Jan 13 21:26:37.885333 kubelet[2592]: I0113 21:26:37.885332 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f688e3a1-1758-447a-b733-1591824eee6e-cilium-cgroup\") pod \"cilium-r6m4v\" (UID: \"f688e3a1-1758-447a-b733-1591824eee6e\") " pod="kube-system/cilium-r6m4v" Jan 13 21:26:37.885486 kubelet[2592]: I0113 21:26:37.885353 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f688e3a1-1758-447a-b733-1591824eee6e-host-proc-sys-net\") pod \"cilium-r6m4v\" (UID: \"f688e3a1-1758-447a-b733-1591824eee6e\") " pod="kube-system/cilium-r6m4v" Jan 13 21:26:37.885486 kubelet[2592]: I0113 21:26:37.885368 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79fck\" (UniqueName: \"kubernetes.io/projected/f688e3a1-1758-447a-b733-1591824eee6e-kube-api-access-79fck\") pod \"cilium-r6m4v\" (UID: \"f688e3a1-1758-447a-b733-1591824eee6e\") " pod="kube-system/cilium-r6m4v" Jan 13 21:26:37.885486 kubelet[2592]: I0113 21:26:37.885382 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f688e3a1-1758-447a-b733-1591824eee6e-hostproc\") pod \"cilium-r6m4v\" (UID: \"f688e3a1-1758-447a-b733-1591824eee6e\") " pod="kube-system/cilium-r6m4v" Jan 13 21:26:37.885486 kubelet[2592]: I0113 21:26:37.885396 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f688e3a1-1758-447a-b733-1591824eee6e-clustermesh-secrets\") pod \"cilium-r6m4v\" (UID: \"f688e3a1-1758-447a-b733-1591824eee6e\") " pod="kube-system/cilium-r6m4v" Jan 13 21:26:37.885486 kubelet[2592]: I0113 21:26:37.885409 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f688e3a1-1758-447a-b733-1591824eee6e-hubble-tls\") pod \"cilium-r6m4v\" (UID: \"f688e3a1-1758-447a-b733-1591824eee6e\") " pod="kube-system/cilium-r6m4v" Jan 13 21:26:37.885486 kubelet[2592]: I0113 21:26:37.885423 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f688e3a1-1758-447a-b733-1591824eee6e-cilium-run\") pod \"cilium-r6m4v\" (UID: \"f688e3a1-1758-447a-b733-1591824eee6e\") " pod="kube-system/cilium-r6m4v" Jan 13 21:26:37.885676 kubelet[2592]: I0113 21:26:37.885449 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f688e3a1-1758-447a-b733-1591824eee6e-host-proc-sys-kernel\") pod \"cilium-r6m4v\" (UID: \"f688e3a1-1758-447a-b733-1591824eee6e\") " pod="kube-system/cilium-r6m4v" Jan 13 21:26:37.885676 kubelet[2592]: I0113 21:26:37.885463 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f688e3a1-1758-447a-b733-1591824eee6e-cilium-ipsec-secrets\") pod \"cilium-r6m4v\" (UID: \"f688e3a1-1758-447a-b733-1591824eee6e\") " pod="kube-system/cilium-r6m4v" Jan 13 21:26:37.885676 kubelet[2592]: I0113 21:26:37.885477 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f688e3a1-1758-447a-b733-1591824eee6e-lib-modules\") pod \"cilium-r6m4v\" (UID: \"f688e3a1-1758-447a-b733-1591824eee6e\") " pod="kube-system/cilium-r6m4v" Jan 13 21:26:37.885676 kubelet[2592]: I0113 21:26:37.885491 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f688e3a1-1758-447a-b733-1591824eee6e-cilium-config-path\") pod \"cilium-r6m4v\" (UID: \"f688e3a1-1758-447a-b733-1591824eee6e\") " pod="kube-system/cilium-r6m4v" Jan 13 21:26:37.885676 kubelet[2592]: I0113 21:26:37.885503 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f688e3a1-1758-447a-b733-1591824eee6e-cni-path\") pod \"cilium-r6m4v\" (UID: \"f688e3a1-1758-447a-b733-1591824eee6e\") " pod="kube-system/cilium-r6m4v" Jan 13 21:26:37.885676 kubelet[2592]: I0113 21:26:37.885514 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f688e3a1-1758-447a-b733-1591824eee6e-etc-cni-netd\") pod \"cilium-r6m4v\" (UID: \"f688e3a1-1758-447a-b733-1591824eee6e\") " pod="kube-system/cilium-r6m4v" Jan 13 21:26:37.885861 kubelet[2592]: I0113 21:26:37.885529 2592 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f688e3a1-1758-447a-b733-1591824eee6e-xtables-lock\") pod \"cilium-r6m4v\" (UID: \"f688e3a1-1758-447a-b733-1591824eee6e\") " pod="kube-system/cilium-r6m4v" Jan 13 21:26:37.891540 sshd[4423]: pam_unix(sshd:session): session closed for user core Jan 13 21:26:37.901927 systemd[1]: sshd@28-10.0.0.97:22-10.0.0.1:57268.service: Deactivated successfully. Jan 13 21:26:37.903512 systemd[1]: session-29.scope: Deactivated successfully. Jan 13 21:26:37.904903 systemd-logind[1442]: Session 29 logged out. Waiting for processes to exit. Jan 13 21:26:37.910497 systemd[1]: Started sshd@29-10.0.0.97:22-10.0.0.1:57276.service - OpenSSH per-connection server daemon (10.0.0.1:57276). Jan 13 21:26:37.911517 systemd-logind[1442]: Removed session 29. Jan 13 21:26:37.937440 sshd[4433]: Accepted publickey for core from 10.0.0.1 port 57276 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:26:37.938808 sshd[4433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:26:37.942672 systemd-logind[1442]: New session 30 of user core. Jan 13 21:26:37.953388 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jan 13 21:26:38.112117 kubelet[2592]: E0113 21:26:38.112071 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:38.113265 containerd[1456]: time="2025-01-13T21:26:38.112754676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r6m4v,Uid:f688e3a1-1758-447a-b733-1591824eee6e,Namespace:kube-system,Attempt:0,}" Jan 13 21:26:38.360253 containerd[1456]: time="2025-01-13T21:26:38.360117343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:26:38.360253 containerd[1456]: time="2025-01-13T21:26:38.360185232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:26:38.360253 containerd[1456]: time="2025-01-13T21:26:38.360199270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:38.360881 containerd[1456]: time="2025-01-13T21:26:38.360797929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:26:38.386451 systemd[1]: Started cri-containerd-e536b5e3791c9100ce54c407c3fe127a05a6c542c4fe6729c0de745d7fe6cdba.scope - libcontainer container e536b5e3791c9100ce54c407c3fe127a05a6c542c4fe6729c0de745d7fe6cdba. Jan 13 21:26:38.407555 containerd[1456]: time="2025-01-13T21:26:38.407506657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r6m4v,Uid:f688e3a1-1758-447a-b733-1591824eee6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e536b5e3791c9100ce54c407c3fe127a05a6c542c4fe6729c0de745d7fe6cdba\"" Jan 13 21:26:38.408267 kubelet[2592]: E0113 21:26:38.408231 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:26:38.410466 containerd[1456]: time="2025-01-13T21:26:38.410412260Z" level=info msg="CreateContainer within sandbox \"e536b5e3791c9100ce54c407c3fe127a05a6c542c4fe6729c0de745d7fe6cdba\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 21:26:38.468607 containerd[1456]: time="2025-01-13T21:26:38.468551366Z" level=info msg="CreateContainer within sandbox \"e536b5e3791c9100ce54c407c3fe127a05a6c542c4fe6729c0de745d7fe6cdba\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e1165fb1b17db2121d0c02263608747d8b2b7666aa142f1727bff655e9950acc\"" Jan 13 21:26:38.469237 containerd[1456]: time="2025-01-13T21:26:38.469190914Z" level=info msg="StartContainer for \"e1165fb1b17db2121d0c02263608747d8b2b7666aa142f1727bff655e9950acc\"" Jan 13 21:26:38.495439 systemd[1]: Started cri-containerd-e1165fb1b17db2121d0c02263608747d8b2b7666aa142f1727bff655e9950acc.scope - libcontainer container e1165fb1b17db2121d0c02263608747d8b2b7666aa142f1727bff655e9950acc. Jan 13 21:26:38.521382 containerd[1456]: time="2025-01-13T21:26:38.521328009Z" level=info msg="StartContainer for \"e1165fb1b17db2121d0c02263608747d8b2b7666aa142f1727bff655e9950acc\" returns successfully" Jan 13 21:26:38.532074 systemd[1]: cri-containerd-e1165fb1b17db2121d0c02263608747d8b2b7666aa142f1727bff655e9950acc.scope: Deactivated successfully. 
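Note the pairing above: containerd creates the sandbox and container, while systemd wraps each task in a transient `cri-containerd-<id>.scope` (named after the 64-hex container ID) so its cgroup is tracked like any other unit; the scope's later "Deactivated successfully" is simply the one-shot `mount-cgroup` init container exiting. A small sketch of composing and checking those unit names (helper names are illustrative):

```go
package main

import (
	"fmt"
	"regexp"
)

// scopeRe matches the transient scopes systemd creates for containerd
// tasks, e.g. "cri-containerd-<64 hex chars>.scope".
var scopeRe = regexp.MustCompile(`^cri-containerd-([0-9a-f]{64})\.scope$`)

func scopeName(containerID string) string {
	return "cri-containerd-" + containerID + ".scope"
}

func main() {
	id := "e1165fb1b17db2121d0c02263608747d8b2b7666aa142f1727bff655e9950acc"
	unit := scopeName(id)
	if m := scopeRe.FindStringSubmatch(unit); m != nil {
		fmt.Printf("container %s... runs in %s\n", m[1][:12], unit)
	}
}
```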
Jan 13 21:26:38.571048 containerd[1456]: time="2025-01-13T21:26:38.570982036Z" level=info msg="shim disconnected" id=e1165fb1b17db2121d0c02263608747d8b2b7666aa142f1727bff655e9950acc namespace=k8s.io
Jan 13 21:26:38.571048 containerd[1456]: time="2025-01-13T21:26:38.571048262Z" level=warning msg="cleaning up after shim disconnected" id=e1165fb1b17db2121d0c02263608747d8b2b7666aa142f1727bff655e9950acc namespace=k8s.io
Jan 13 21:26:38.571048 containerd[1456]: time="2025-01-13T21:26:38.571058181Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:26:38.991054 systemd[1]: run-containerd-runc-k8s.io-e536b5e3791c9100ce54c407c3fe127a05a6c542c4fe6729c0de745d7fe6cdba-runc.kObRkM.mount: Deactivated successfully.
Jan 13 21:26:39.200904 kubelet[2592]: E0113 21:26:39.200871 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:39.202718 containerd[1456]: time="2025-01-13T21:26:39.202660923Z" level=info msg="CreateContainer within sandbox \"e536b5e3791c9100ce54c407c3fe127a05a6c542c4fe6729c0de745d7fe6cdba\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 21:26:39.223000 containerd[1456]: time="2025-01-13T21:26:39.222940988Z" level=info msg="CreateContainer within sandbox \"e536b5e3791c9100ce54c407c3fe127a05a6c542c4fe6729c0de745d7fe6cdba\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f1138c55aa0e47d5b547835c5fd5c58896b3be11f0006a600cf87c28cfa328a1\""
Jan 13 21:26:39.223609 containerd[1456]: time="2025-01-13T21:26:39.223545408Z" level=info msg="StartContainer for \"f1138c55aa0e47d5b547835c5fd5c58896b3be11f0006a600cf87c28cfa328a1\""
Jan 13 21:26:39.260414 systemd[1]: Started cri-containerd-f1138c55aa0e47d5b547835c5fd5c58896b3be11f0006a600cf87c28cfa328a1.scope - libcontainer container f1138c55aa0e47d5b547835c5fd5c58896b3be11f0006a600cf87c28cfa328a1.
Jan 13 21:26:39.285319 containerd[1456]: time="2025-01-13T21:26:39.285221406Z" level=info msg="StartContainer for \"f1138c55aa0e47d5b547835c5fd5c58896b3be11f0006a600cf87c28cfa328a1\" returns successfully"
Jan 13 21:26:39.292959 systemd[1]: cri-containerd-f1138c55aa0e47d5b547835c5fd5c58896b3be11f0006a600cf87c28cfa328a1.scope: Deactivated successfully.
Jan 13 21:26:39.316563 containerd[1456]: time="2025-01-13T21:26:39.316471053Z" level=info msg="shim disconnected" id=f1138c55aa0e47d5b547835c5fd5c58896b3be11f0006a600cf87c28cfa328a1 namespace=k8s.io
Jan 13 21:26:39.316563 containerd[1456]: time="2025-01-13T21:26:39.316532631Z" level=warning msg="cleaning up after shim disconnected" id=f1138c55aa0e47d5b547835c5fd5c58896b3be11f0006a600cf87c28cfa328a1 namespace=k8s.io
Jan 13 21:26:39.316563 containerd[1456]: time="2025-01-13T21:26:39.316541417Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:26:39.991194 systemd[1]: run-containerd-runc-k8s.io-f1138c55aa0e47d5b547835c5fd5c58896b3be11f0006a600cf87c28cfa328a1-runc.2bkQNg.mount: Deactivated successfully.
Jan 13 21:26:39.991322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1138c55aa0e47d5b547835c5fd5c58896b3be11f0006a600cf87c28cfa328a1-rootfs.mount: Deactivated successfully.
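The apply-sysctl-overwrites step that runs next pins kernel settings the datapath depends on by writing under /proc/sys. A sketch of that pattern; the rp_filter key is illustrative only, since the exact set of overwrites varies by cilium version:

    package main

    import (
        "os"
        "path/filepath"
        "strings"
    )

    // writeSysctl writes one value under /proc/sys, e.g.
    // "net.ipv4.conf.all.rp_filter" -> /proc/sys/net/ipv4/conf/all/rp_filter.
    func writeSysctl(key, value string) error {
        p := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
        return os.WriteFile(p, []byte(value), 0o644)
    }

    func main() {
        // Disabling reverse-path filtering is one setting cilium is known
        // to apply; treat this as an example, not the full list.
        if err := writeSysctl("net.ipv4.conf.all.rp_filter", "0"); err != nil {
            panic(err)
        }
    }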
Jan 13 21:26:40.204680 kubelet[2592]: E0113 21:26:40.204640 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:40.206175 containerd[1456]: time="2025-01-13T21:26:40.206094562Z" level=info msg="CreateContainer within sandbox \"e536b5e3791c9100ce54c407c3fe127a05a6c542c4fe6729c0de745d7fe6cdba\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 21:26:40.224143 containerd[1456]: time="2025-01-13T21:26:40.224091769Z" level=info msg="CreateContainer within sandbox \"e536b5e3791c9100ce54c407c3fe127a05a6c542c4fe6729c0de745d7fe6cdba\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6e3371b04e5e808e237566c0a8e68ecc9ee1955e40521630b243eae3456a82fc\""
Jan 13 21:26:40.224724 containerd[1456]: time="2025-01-13T21:26:40.224694737Z" level=info msg="StartContainer for \"6e3371b04e5e808e237566c0a8e68ecc9ee1955e40521630b243eae3456a82fc\""
Jan 13 21:26:40.255426 systemd[1]: Started cri-containerd-6e3371b04e5e808e237566c0a8e68ecc9ee1955e40521630b243eae3456a82fc.scope - libcontainer container 6e3371b04e5e808e237566c0a8e68ecc9ee1955e40521630b243eae3456a82fc.
Jan 13 21:26:40.283010 containerd[1456]: time="2025-01-13T21:26:40.282965674Z" level=info msg="StartContainer for \"6e3371b04e5e808e237566c0a8e68ecc9ee1955e40521630b243eae3456a82fc\" returns successfully"
Jan 13 21:26:40.284769 systemd[1]: cri-containerd-6e3371b04e5e808e237566c0a8e68ecc9ee1955e40521630b243eae3456a82fc.scope: Deactivated successfully.
Jan 13 21:26:40.311011 containerd[1456]: time="2025-01-13T21:26:40.310929016Z" level=info msg="shim disconnected" id=6e3371b04e5e808e237566c0a8e68ecc9ee1955e40521630b243eae3456a82fc namespace=k8s.io
Jan 13 21:26:40.311011 containerd[1456]: time="2025-01-13T21:26:40.311014078Z" level=warning msg="cleaning up after shim disconnected" id=6e3371b04e5e808e237566c0a8e68ecc9ee1955e40521630b243eae3456a82fc namespace=k8s.io
Jan 13 21:26:40.311239 containerd[1456]: time="2025-01-13T21:26:40.311026702Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:26:40.959372 kubelet[2592]: E0113 21:26:40.959324 2592 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 21:26:40.991292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e3371b04e5e808e237566c0a8e68ecc9ee1955e40521630b243eae3456a82fc-rootfs.mount: Deactivated successfully.
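mount-bpf-fs, the third init step, makes sure a BPF filesystem is mounted so pinned maps survive agent restarts. In Go, assuming the conventional /sys/fs/bpf mount point:

    package main

    import "golang.org/x/sys/unix"

    func main() {
        // Mount bpffs at the conventional location; EBUSY indicates it
        // is already mounted, which this step treats as success.
        if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil && err != unix.EBUSY {
            panic(err)
        }
    }

Note that the "Container runtime network not ready ... cni plugin not initialized" message above is expected at this point: the CNI plugin only becomes ready once the cilium-agent container, still a few init steps away, comes up.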
Jan 13 21:26:41.209065 kubelet[2592]: E0113 21:26:41.209032 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:41.210869 containerd[1456]: time="2025-01-13T21:26:41.210754842Z" level=info msg="CreateContainer within sandbox \"e536b5e3791c9100ce54c407c3fe127a05a6c542c4fe6729c0de745d7fe6cdba\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 21:26:41.224455 containerd[1456]: time="2025-01-13T21:26:41.224393718Z" level=info msg="CreateContainer within sandbox \"e536b5e3791c9100ce54c407c3fe127a05a6c542c4fe6729c0de745d7fe6cdba\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"352d125440ea9ed300d596697836124f64a39ddfbed79839b56fad01d5e206e2\""
Jan 13 21:26:41.225633 containerd[1456]: time="2025-01-13T21:26:41.225025499Z" level=info msg="StartContainer for \"352d125440ea9ed300d596697836124f64a39ddfbed79839b56fad01d5e206e2\""
Jan 13 21:26:41.258417 systemd[1]: Started cri-containerd-352d125440ea9ed300d596697836124f64a39ddfbed79839b56fad01d5e206e2.scope - libcontainer container 352d125440ea9ed300d596697836124f64a39ddfbed79839b56fad01d5e206e2.
Jan 13 21:26:41.282719 systemd[1]: cri-containerd-352d125440ea9ed300d596697836124f64a39ddfbed79839b56fad01d5e206e2.scope: Deactivated successfully.
Jan 13 21:26:41.285294 containerd[1456]: time="2025-01-13T21:26:41.285252278Z" level=info msg="StartContainer for \"352d125440ea9ed300d596697836124f64a39ddfbed79839b56fad01d5e206e2\" returns successfully"
Jan 13 21:26:41.311809 containerd[1456]: time="2025-01-13T21:26:41.311720459Z" level=info msg="shim disconnected" id=352d125440ea9ed300d596697836124f64a39ddfbed79839b56fad01d5e206e2 namespace=k8s.io
Jan 13 21:26:41.311809 containerd[1456]: time="2025-01-13T21:26:41.311803898Z" level=warning msg="cleaning up after shim disconnected" id=352d125440ea9ed300d596697836124f64a39ddfbed79839b56fad01d5e206e2 namespace=k8s.io
Jan 13 21:26:41.311809 containerd[1456]: time="2025-01-13T21:26:41.311815229Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:26:41.991287 systemd[1]: run-containerd-runc-k8s.io-352d125440ea9ed300d596697836124f64a39ddfbed79839b56fad01d5e206e2-runc.P2lBQX.mount: Deactivated successfully.
Jan 13 21:26:41.991407 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-352d125440ea9ed300d596697836124f64a39ddfbed79839b56fad01d5e206e2-rootfs.mount: Deactivated successfully.
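clean-cilium-state wipes stale agent and datapath state left over from any previous run before the long-lived agent starts. Exactly what it removes is configuration-dependent; the directories in this sketch are illustrative stand-ins, not paths taken from this log:

    package main

    import "os"

    func main() {
        // Hypothetical state locations; cilium's real cleanup is gated
        // by flags such as clean-cilium-state / clean-cilium-bpf-state.
        for _, dir := range []string{"/var/run/cilium/state", "/sys/fs/bpf/tc/globals"} {
            if err := os.RemoveAll(dir); err != nil {
                panic(err)
            }
        }
    }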
Jan 13 21:26:42.213448 kubelet[2592]: E0113 21:26:42.213412 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:42.215996 containerd[1456]: time="2025-01-13T21:26:42.215952061Z" level=info msg="CreateContainer within sandbox \"e536b5e3791c9100ce54c407c3fe127a05a6c542c4fe6729c0de745d7fe6cdba\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 21:26:42.233229 containerd[1456]: time="2025-01-13T21:26:42.233163441Z" level=info msg="CreateContainer within sandbox \"e536b5e3791c9100ce54c407c3fe127a05a6c542c4fe6729c0de745d7fe6cdba\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9253fb25414a4bd9cb3116310490f49401350d6cf56183c919943ddc937b2539\""
Jan 13 21:26:42.233765 containerd[1456]: time="2025-01-13T21:26:42.233715190Z" level=info msg="StartContainer for \"9253fb25414a4bd9cb3116310490f49401350d6cf56183c919943ddc937b2539\""
Jan 13 21:26:42.265409 systemd[1]: Started cri-containerd-9253fb25414a4bd9cb3116310490f49401350d6cf56183c919943ddc937b2539.scope - libcontainer container 9253fb25414a4bd9cb3116310490f49401350d6cf56183c919943ddc937b2539.
Jan 13 21:26:42.293677 containerd[1456]: time="2025-01-13T21:26:42.293617775Z" level=info msg="StartContainer for \"9253fb25414a4bd9cb3116310490f49401350d6cf56183c919943ddc937b2539\" returns successfully"
Jan 13 21:26:42.709302 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 13 21:26:42.740297 kernel: jitterentropy: Initialization failed with host not compliant with requirements: 9
Jan 13 21:26:42.765303 kernel: DRBG: Continuing without Jitter RNG
Jan 13 21:26:43.217599 kubelet[2592]: E0113 21:26:43.217561 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:43.593010 kubelet[2592]: I0113 21:26:43.592969 2592 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T21:26:43Z","lastTransitionTime":"2025-01-13T21:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 21:26:43.907131 kubelet[2592]: E0113 21:26:43.907023 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:44.220582 kubelet[2592]: E0113 21:26:44.219660 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:45.833046 systemd-networkd[1391]: lxc_health: Link UP
Jan 13 21:26:45.838557 systemd-networkd[1391]: lxc_health: Gained carrier
Jan 13 21:26:45.907530 kubelet[2592]: E0113 21:26:45.907481 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:46.118748 kubelet[2592]: E0113 21:26:46.116773 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:46.223405 kubelet[2592]: E0113 21:26:46.223335 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:46.240462 kubelet[2592]: I0113 21:26:46.240029 2592 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r6m4v" podStartSLOduration=9.240011483 podStartE2EDuration="9.240011483s" podCreationTimestamp="2025-01-13 21:26:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:26:43.23132411 +0000 UTC m=+92.394661583" watchObservedRunningTime="2025-01-13 21:26:46.240011483 +0000 UTC m=+95.403348936"
Jan 13 21:26:47.224775 kubelet[2592]: E0113 21:26:47.224734 2592 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:26:47.658435 systemd-networkd[1391]: lxc_health: Gained IPv6LL
Jan 13 21:26:50.545052 systemd[1]: run-containerd-runc-k8s.io-9253fb25414a4bd9cb3116310490f49401350d6cf56183c919943ddc937b2539-runc.ccACLM.mount: Deactivated successfully.
Jan 13 21:26:52.690019 sshd[4433]: pam_unix(sshd:session): session closed for user core
Jan 13 21:26:52.693829 systemd[1]: sshd@29-10.0.0.97:22-10.0.0.1:57276.service: Deactivated successfully.
Jan 13 21:26:52.695633 systemd[1]: session-30.scope: Deactivated successfully.
Jan 13 21:26:52.696245 systemd-logind[1442]: Session 30 logged out. Waiting for processes to exit.
Jan 13 21:26:52.697160 systemd-logind[1442]: Removed session 30.
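The pod_startup_latency_tracker entry records podStartSLOduration=9.240011483, which is exactly watchObservedRunningTime minus podCreationTimestamp; the zeroed firstStartedPulling/lastFinishedPulling timestamps indicate the images were already on disk, so image pulls contributed nothing. A quick check of that arithmetic:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matches the kubelet's "2025-01-13 21:26:37 +0000 UTC" style;
        // the fractional-seconds field is optional when parsing.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2025-01-13 21:26:37 +0000 UTC")
        if err != nil {
            panic(err)
        }
        observed, err := time.Parse(layout, "2025-01-13 21:26:46.240011483 +0000 UTC")
        if err != nil {
            panic(err)
        }
        // Prints 9.240011483s, matching the logged podStartSLOduration.
        fmt.Println(observed.Sub(created))
    }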