Jan 17 12:09:16.907330 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025
Jan 17 12:09:16.907353 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:09:16.907364 kernel: BIOS-provided physical RAM map:
Jan 17 12:09:16.907371 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 17 12:09:16.907376 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 17 12:09:16.907382 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 17 12:09:16.907390 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 17 12:09:16.907396 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 17 12:09:16.907402 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jan 17 12:09:16.907408 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jan 17 12:09:16.907417 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jan 17 12:09:16.907423 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jan 17 12:09:16.907429 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jan 17 12:09:16.907435 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jan 17 12:09:16.907458 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jan 17 12:09:16.907465 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 17 12:09:16.907475 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jan 17 12:09:16.907481 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jan 17 12:09:16.907488 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 17 12:09:16.907494 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 17 12:09:16.907501 kernel: NX (Execute Disable) protection: active
Jan 17 12:09:16.907508 kernel: APIC: Static calls initialized
Jan 17 12:09:16.907514 kernel: efi: EFI v2.7 by EDK II
Jan 17 12:09:16.907521 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Jan 17 12:09:16.907527 kernel: SMBIOS 2.8 present.
Jan 17 12:09:16.907534 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jan 17 12:09:16.907541 kernel: Hypervisor detected: KVM
Jan 17 12:09:16.907549 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 12:09:16.907556 kernel: kvm-clock: using sched offset of 4339929451 cycles
Jan 17 12:09:16.907563 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 12:09:16.907570 kernel: tsc: Detected 2794.748 MHz processor
Jan 17 12:09:16.907577 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 12:09:16.907584 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 12:09:16.907591 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jan 17 12:09:16.907598 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 17 12:09:16.907605 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 12:09:16.907614 kernel: Using GB pages for direct mapping
Jan 17 12:09:16.907620 kernel: Secure boot disabled
Jan 17 12:09:16.907627 kernel: ACPI: Early table checksum verification disabled
Jan 17 12:09:16.907634 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 17 12:09:16.907644 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 17 12:09:16.907652 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:09:16.907659 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:09:16.907668 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 17 12:09:16.907675 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:09:16.907683 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:09:16.907690 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:09:16.907697 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:09:16.907704 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 17 12:09:16.907711 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 17 12:09:16.907721 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Jan 17 12:09:16.907728 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 17 12:09:16.907735 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 17 12:09:16.907742 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 17 12:09:16.907749 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 17 12:09:16.907756 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 17 12:09:16.907763 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 17 12:09:16.907770 kernel: No NUMA configuration found
Jan 17 12:09:16.907777 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jan 17 12:09:16.907787 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jan 17 12:09:16.907794 kernel: Zone ranges:
Jan 17 12:09:16.907801 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 12:09:16.907809 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jan 17 12:09:16.907816 kernel: Normal empty
Jan 17 12:09:16.907823 kernel: Movable zone start for each node
Jan 17 12:09:16.907830 kernel: Early memory node ranges
Jan 17 12:09:16.907837 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 17 12:09:16.907844 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 17 12:09:16.907851 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 17 12:09:16.907860 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jan 17 12:09:16.907867 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jan 17 12:09:16.907874 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jan 17 12:09:16.907882 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jan 17 12:09:16.907889 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 12:09:16.907896 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 17 12:09:16.907903 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 17 12:09:16.907910 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 12:09:16.907917 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jan 17 12:09:16.907927 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 17 12:09:16.907934 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jan 17 12:09:16.907941 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 17 12:09:16.907948 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 12:09:16.907955 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 12:09:16.907962 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 12:09:16.907969 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 12:09:16.907976 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 12:09:16.907983 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 12:09:16.907993 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 12:09:16.908000 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 12:09:16.908007 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 12:09:16.908015 kernel: TSC deadline timer available
Jan 17 12:09:16.908022 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 17 12:09:16.908029 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 12:09:16.908036 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 17 12:09:16.908043 kernel: kvm-guest: setup PV sched yield
Jan 17 12:09:16.908050 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 17 12:09:16.908059 kernel: Booting paravirtualized kernel on KVM
Jan 17 12:09:16.908066 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 12:09:16.908074 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 17 12:09:16.908081 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 17 12:09:16.908088 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 17 12:09:16.908095 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 17 12:09:16.908102 kernel: kvm-guest: PV spinlocks enabled
Jan 17 12:09:16.908109 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 17 12:09:16.908117 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:09:16.908138 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 17 12:09:16.908146 kernel: random: crng init done
Jan 17 12:09:16.908153 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 12:09:16.908160 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 12:09:16.908167 kernel: Fallback order for Node 0: 0
Jan 17 12:09:16.908175 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jan 17 12:09:16.908182 kernel: Policy zone: DMA32
Jan 17 12:09:16.908189 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 12:09:16.908196 kernel: Memory: 2395620K/2567000K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 171120K reserved, 0K cma-reserved)
Jan 17 12:09:16.908206 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 17 12:09:16.908213 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 17 12:09:16.908220 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 12:09:16.908228 kernel: Dynamic Preempt: voluntary
Jan 17 12:09:16.908243 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 12:09:16.908258 kernel: rcu: RCU event tracing is enabled.
Jan 17 12:09:16.908266 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 17 12:09:16.908273 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 12:09:16.908281 kernel: Rude variant of Tasks RCU enabled.
Jan 17 12:09:16.908288 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 12:09:16.908296 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 12:09:16.908305 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 17 12:09:16.908313 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 17 12:09:16.908320 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 12:09:16.908328 kernel: Console: colour dummy device 80x25
Jan 17 12:09:16.908335 kernel: printk: console [ttyS0] enabled
Jan 17 12:09:16.908344 kernel: ACPI: Core revision 20230628
Jan 17 12:09:16.908352 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 17 12:09:16.908360 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 12:09:16.908367 kernel: x2apic enabled
Jan 17 12:09:16.908374 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 12:09:16.908382 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 17 12:09:16.908390 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 17 12:09:16.908397 kernel: kvm-guest: setup PV IPIs
Jan 17 12:09:16.908404 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 17 12:09:16.908414 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 17 12:09:16.908421 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 17 12:09:16.908429 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 17 12:09:16.908436 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 17 12:09:16.908484 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 17 12:09:16.908492 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 12:09:16.908500 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 12:09:16.908508 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 17 12:09:16.908516 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 17 12:09:16.908526 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 17 12:09:16.908534 kernel: RETBleed: Mitigation: untrained return thunk
Jan 17 12:09:16.908542 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 17 12:09:16.908550 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 17 12:09:16.908558 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 17 12:09:16.908566 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 17 12:09:16.908574 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 17 12:09:16.908582 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 12:09:16.908592 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 12:09:16.908600 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 12:09:16.908608 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 12:09:16.908616 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 17 12:09:16.908624 kernel: Freeing SMP alternatives memory: 32K
Jan 17 12:09:16.908632 kernel: pid_max: default: 32768 minimum: 301
Jan 17 12:09:16.908639 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 12:09:16.908647 kernel: landlock: Up and running.
Jan 17 12:09:16.908655 kernel: SELinux: Initializing.
Jan 17 12:09:16.908665 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 12:09:16.908673 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 12:09:16.908681 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 17 12:09:16.908689 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 12:09:16.908697 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 12:09:16.908705 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 12:09:16.908713 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 17 12:09:16.908720 kernel: ... version: 0
Jan 17 12:09:16.908728 kernel: ... bit width: 48
Jan 17 12:09:16.908738 kernel: ... generic registers: 6
Jan 17 12:09:16.908746 kernel: ... value mask: 0000ffffffffffff
Jan 17 12:09:16.908754 kernel: ... max period: 00007fffffffffff
Jan 17 12:09:16.908762 kernel: ... fixed-purpose events: 0
Jan 17 12:09:16.908770 kernel: ... event mask: 000000000000003f
Jan 17 12:09:16.908777 kernel: signal: max sigframe size: 1776
Jan 17 12:09:16.908785 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 12:09:16.908793 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 12:09:16.908801 kernel: smp: Bringing up secondary CPUs ...
Jan 17 12:09:16.908811 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 12:09:16.908818 kernel: .... node #0, CPUs: #1 #2 #3
Jan 17 12:09:16.908826 kernel: smp: Brought up 1 node, 4 CPUs
Jan 17 12:09:16.908834 kernel: smpboot: Max logical packages: 1
Jan 17 12:09:16.908841 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 17 12:09:16.908849 kernel: devtmpfs: initialized
Jan 17 12:09:16.908857 kernel: x86/mm: Memory block size: 128MB
Jan 17 12:09:16.908865 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 17 12:09:16.908873 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 17 12:09:16.908881 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jan 17 12:09:16.908891 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 17 12:09:16.908899 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 17 12:09:16.908907 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 12:09:16.908914 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 17 12:09:16.908923 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 12:09:16.908933 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 12:09:16.908942 kernel: audit: initializing netlink subsys (disabled)
Jan 17 12:09:16.908951 kernel: audit: type=2000 audit(1737115755.577:1): state=initialized audit_enabled=0 res=1
Jan 17 12:09:16.908964 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 12:09:16.908972 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 12:09:16.908981 kernel: cpuidle: using governor menu
Jan 17 12:09:16.908990 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 12:09:16.908997 kernel: dca service started, version 1.12.1
Jan 17 12:09:16.909008 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 17 12:09:16.909017 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 17 12:09:16.909025 kernel: PCI: Using configuration type 1 for base access
Jan 17 12:09:16.909033 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 12:09:16.909043 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 12:09:16.909050 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 12:09:16.909058 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 12:09:16.909066 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 12:09:16.909073 kernel: ACPI: Added _OSI(Module Device)
Jan 17 12:09:16.909080 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 12:09:16.909088 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 17 12:09:16.909095 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 12:09:16.909103 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 12:09:16.909113 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 12:09:16.909120 kernel: ACPI: Interpreter enabled
Jan 17 12:09:16.909136 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 17 12:09:16.909144 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 12:09:16.909152 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 12:09:16.909159 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 12:09:16.909167 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 17 12:09:16.909174 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 12:09:16.909373 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 12:09:16.909539 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 17 12:09:16.909662 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 17 12:09:16.909672 kernel: PCI host bridge to bus 0000:00
Jan 17 12:09:16.909803 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 12:09:16.909913 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 12:09:16.910021 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 12:09:16.910145 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 17 12:09:16.910254 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 17 12:09:16.910363 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jan 17 12:09:16.910490 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 12:09:16.910630 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 17 12:09:16.910760 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 17 12:09:16.910880 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 17 12:09:16.911006 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 17 12:09:16.911137 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 17 12:09:16.911271 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 17 12:09:16.911512 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 12:09:16.911654 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 17 12:09:16.911776 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 17 12:09:16.911922 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 17 12:09:16.912043 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jan 17 12:09:16.912188 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 17 12:09:16.912311 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 17 12:09:16.912431 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 17 12:09:16.912605 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jan 17 12:09:16.912767 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 17 12:09:16.912898 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 17 12:09:16.913039 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 17 12:09:16.913201 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jan 17 12:09:16.913432 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 17 12:09:16.913617 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 17 12:09:16.913773 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 17 12:09:16.913931 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 17 12:09:16.914060 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 17 12:09:16.914190 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 17 12:09:16.914323 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 17 12:09:16.914456 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 17 12:09:16.914466 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 12:09:16.914474 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 12:09:16.914482 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 12:09:16.914490 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 12:09:16.914502 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 17 12:09:16.914509 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 17 12:09:16.914517 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 17 12:09:16.914525 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 17 12:09:16.914532 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 17 12:09:16.914540 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 17 12:09:16.914547 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 17 12:09:16.914555 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 17 12:09:16.914563 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 17 12:09:16.914573 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 17 12:09:16.914581 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 17 12:09:16.914588 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 17 12:09:16.914596 kernel: iommu: Default domain type: Translated
Jan 17 12:09:16.914604 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 12:09:16.914611 kernel: efivars: Registered efivars operations
Jan 17 12:09:16.914619 kernel: PCI: Using ACPI for IRQ routing
Jan 17 12:09:16.914627 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 12:09:16.914634 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 17 12:09:16.914644 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jan 17 12:09:16.914655 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jan 17 12:09:16.914665 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jan 17 12:09:16.914821 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 17 12:09:16.914982 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 17 12:09:16.915114 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 12:09:16.915125 kernel: vgaarb: loaded
Jan 17 12:09:16.915143 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 17 12:09:16.915155 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 17 12:09:16.915163 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 12:09:16.915171 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 12:09:16.915180 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 12:09:16.915188 kernel: pnp: PnP ACPI init
Jan 17 12:09:16.915320 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 17 12:09:16.915331 kernel: pnp: PnP ACPI: found 6 devices
Jan 17 12:09:16.915339 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 12:09:16.915350 kernel: NET: Registered PF_INET protocol family
Jan 17 12:09:16.915358 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 12:09:16.915366 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 12:09:16.915373 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 12:09:16.915381 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 12:09:16.915389 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 17 12:09:16.915396 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 17 12:09:16.915404 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 12:09:16.915412 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 12:09:16.915422 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 12:09:16.915430 kernel: NET: Registered PF_XDP protocol family
Jan 17 12:09:16.915570 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 17 12:09:16.915692 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 17 12:09:16.915805 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 12:09:16.915915 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 12:09:16.916025 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 12:09:16.916143 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 17 12:09:16.916259 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 17 12:09:16.916422 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jan 17 12:09:16.916434 kernel: PCI: CLS 0 bytes, default 64
Jan 17 12:09:16.916530 kernel: Initialise system trusted keyrings
Jan 17 12:09:16.916538 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 17 12:09:16.916546 kernel: Key type asymmetric registered
Jan 17 12:09:16.916554 kernel: Asymmetric key parser 'x509' registered
Jan 17 12:09:16.916561 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 12:09:16.916569 kernel: io scheduler mq-deadline registered
Jan 17 12:09:16.916581 kernel: io scheduler kyber registered
Jan 17 12:09:16.916589 kernel: io scheduler bfq registered
Jan 17 12:09:16.916596 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 12:09:16.916605 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 17 12:09:16.916613 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 17 12:09:16.916621 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 17 12:09:16.916628 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 12:09:16.916636 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 12:09:16.916644 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 12:09:16.916655 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 12:09:16.916663 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 12:09:16.916789 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 17 12:09:16.916800 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 12:09:16.916910 kernel: rtc_cmos 00:04: registered as rtc0
Jan 17 12:09:16.917021 kernel: rtc_cmos 00:04: setting system clock to 2025-01-17T12:09:16 UTC (1737115756)
Jan 17 12:09:16.917141 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 17 12:09:16.917151 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 17 12:09:16.917163 kernel: efifb: probing for efifb
Jan 17 12:09:16.917171 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jan 17 12:09:16.917179 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jan 17 12:09:16.917195 kernel: efifb: scrolling: redraw
Jan 17 12:09:16.917215 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jan 17 12:09:16.917225 kernel: Console: switching to colour frame buffer device 100x37
Jan 17 12:09:16.917249 kernel: fb0: EFI VGA frame buffer device
Jan 17 12:09:16.917260 kernel: pstore: Using crash dump compression: deflate
Jan 17 12:09:16.917268 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 17 12:09:16.917281 kernel: NET: Registered PF_INET6 protocol family
Jan 17 12:09:16.917289 kernel: Segment Routing with IPv6
Jan 17 12:09:16.917297 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 12:09:16.917304 kernel: NET: Registered PF_PACKET protocol family
Jan 17 12:09:16.917312 kernel: Key type dns_resolver registered
Jan 17 12:09:16.917320 kernel: IPI shorthand broadcast: enabled
Jan 17 12:09:16.917328 kernel: sched_clock: Marking stable (664003491, 220222428)->(967707787, -83481868)
Jan 17 12:09:16.917338 kernel: registered taskstats version 1
Jan 17 12:09:16.917353 kernel: Loading compiled-in X.509 certificates
Jan 17 12:09:16.917371 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80'
Jan 17 12:09:16.917382 kernel: Key type .fscrypt registered
Jan 17 12:09:16.917392 kernel: Key type fscrypt-provisioning registered
Jan 17 12:09:16.917402 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 12:09:16.917413 kernel: ima: Allocated hash algorithm: sha1
Jan 17 12:09:16.917421 kernel: ima: No architecture policies found
Jan 17 12:09:16.917429 kernel: clk: Disabling unused clocks
Jan 17 12:09:16.917437 kernel: Freeing unused kernel image (initmem) memory: 42848K
Jan 17 12:09:16.917461 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 12:09:16.917469 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 17 12:09:16.917477 kernel: Run /init as init process
Jan 17 12:09:16.917484 kernel: with arguments:
Jan 17 12:09:16.917492 kernel: /init
Jan 17 12:09:16.917500 kernel: with environment:
Jan 17 12:09:16.917508 kernel: HOME=/
Jan 17 12:09:16.917515 kernel: TERM=linux
Jan 17 12:09:16.917523 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 17 12:09:16.917536 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:09:16.917547 systemd[1]: Detected virtualization kvm.
Jan 17 12:09:16.917555 systemd[1]: Detected architecture x86-64.
Jan 17 12:09:16.917563 systemd[1]: Running in initrd.
Jan 17 12:09:16.917576 systemd[1]: No hostname configured, using default hostname.
Jan 17 12:09:16.917584 systemd[1]: Hostname set to <localhost>.
Jan 17 12:09:16.917593 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 12:09:16.917601 systemd[1]: Queued start job for default target initrd.target.
Jan 17 12:09:16.917610 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:09:16.917618 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:09:16.917627 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 12:09:16.917635 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:09:16.917646 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 12:09:16.917655 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 12:09:16.917665 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 12:09:16.917674 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 12:09:16.917682 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:09:16.917690 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:09:16.917699 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:09:16.917710 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:09:16.917718 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:09:16.917726 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:09:16.917735 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:09:16.917743 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:09:16.917751 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 12:09:16.917760 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 12:09:16.917769 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:09:16.917777 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:09:16.917788 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:09:16.917796 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:09:16.917805 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 12:09:16.917813 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:09:16.917822 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 12:09:16.917830 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 12:09:16.917838 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:09:16.917846 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:09:16.917857 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:09:16.917866 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 12:09:16.917874 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:09:16.917882 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 12:09:16.917916 systemd-journald[192]: Collecting audit messages is disabled.
Jan 17 12:09:16.917938 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:09:16.917947 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:09:16.917955 systemd-journald[192]: Journal started
Jan 17 12:09:16.917975 systemd-journald[192]: Runtime Journal (/run/log/journal/dcc8c56f5b5b4e3aae7dd2a2bb68ae93) is 6.0M, max 48.3M, 42.2M free.
Jan 17 12:09:16.915037 systemd-modules-load[193]: Inserted module 'overlay'
Jan 17 12:09:16.920536 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:09:16.923639 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:09:16.923969 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:09:16.932431 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:09:16.935577 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:09:16.948465 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 12:09:16.953065 kernel: Bridge firewalling registered
Jan 17 12:09:16.950989 systemd-modules-load[193]: Inserted module 'br_netfilter'
Jan 17 12:09:16.952259 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:09:16.952594 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:09:16.953288 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:09:16.955937 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:09:16.972640 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 12:09:16.974600 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:09:16.986772 dracut-cmdline[223]: dracut-dracut-053
Jan 17 12:09:16.988302 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:09:16.991093 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:09:16.994744 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:09:17.031011 systemd-resolved[238]: Positive Trust Anchors:
Jan 17 12:09:17.031035 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:09:17.031078 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:09:17.034295 systemd-resolved[238]: Defaulting to hostname 'linux'.
Jan 17 12:09:17.035564 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:09:17.041778 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:09:17.102483 kernel: SCSI subsystem initialized
Jan 17 12:09:17.112483 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 12:09:17.123496 kernel: iscsi: registered transport (tcp)
Jan 17 12:09:17.145760 kernel: iscsi: registered transport (qla4xxx)
Jan 17 12:09:17.145841 kernel: QLogic iSCSI HBA Driver
Jan 17 12:09:17.195051 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:09:17.210732 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 12:09:17.234539 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 12:09:17.234583 kernel: device-mapper: uevent: version 1.0.3
Jan 17 12:09:17.235573 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 12:09:17.276487 kernel: raid6: avx2x4 gen() 27093 MB/s
Jan 17 12:09:17.293485 kernel: raid6: avx2x2 gen() 26091 MB/s
Jan 17 12:09:17.310659 kernel: raid6: avx2x1 gen() 20633 MB/s
Jan 17 12:09:17.310717 kernel: raid6: using algorithm avx2x4 gen() 27093 MB/s
Jan 17 12:09:17.328779 kernel: raid6: .... xor() 6135 MB/s, rmw enabled
Jan 17 12:09:17.328871 kernel: raid6: using avx2x2 recovery algorithm
Jan 17 12:09:17.351485 kernel: xor: automatically using best checksumming function avx
Jan 17 12:09:17.525493 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 12:09:17.539583 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:09:17.550725 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:09:17.567184 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Jan 17 12:09:17.573313 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:09:17.589638 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 12:09:17.603145 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Jan 17 12:09:17.637059 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:09:17.648597 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:09:17.716016 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:09:17.730603 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 12:09:17.750031 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:09:17.758678 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 12:09:17.759737 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:09:17.763740 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 17 12:09:17.796017 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 12:09:17.796050 kernel: AES CTR mode by8 optimization enabled
Jan 17 12:09:17.796065 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 17 12:09:17.796262 kernel: libata version 3.00 loaded.
Jan 17 12:09:17.796279 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 12:09:17.796304 kernel: GPT:9289727 != 19775487
Jan 17 12:09:17.796329 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 12:09:17.796361 kernel: GPT:9289727 != 19775487
Jan 17 12:09:17.796394 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 12:09:17.796412 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:09:17.763744 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:09:17.799420 kernel: ahci 0000:00:1f.2: version 3.0
Jan 17 12:09:17.820490 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 17 12:09:17.820512 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 17 12:09:17.820710 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 17 12:09:17.820872 kernel: scsi host0: ahci
Jan 17 12:09:17.821160 kernel: scsi host1: ahci
Jan 17 12:09:17.821349 kernel: scsi host2: ahci
Jan 17 12:09:17.821518 kernel: scsi host3: ahci
Jan 17 12:09:17.822274 kernel: scsi host4: ahci
Jan 17 12:09:17.822481 kernel: scsi host5: ahci
Jan 17 12:09:17.822988 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jan 17 12:09:17.823009 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jan 17 12:09:17.823022 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jan 17 12:09:17.823034 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jan 17 12:09:17.823046 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jan 17 12:09:17.823057 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jan 17 12:09:17.823069 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (462)
Jan 17 12:09:17.765137 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:09:17.826488 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (460)
Jan 17 12:09:17.786644 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 12:09:17.798162 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:09:17.798383 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:09:17.802583 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:09:17.804557 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:09:17.804796 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:09:17.807673 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:09:17.823526 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:09:17.830058 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:09:17.853065 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 17 12:09:17.863215 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 17 12:09:17.870073 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 12:09:17.886658 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 17 12:09:17.888075 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 17 12:09:17.898594 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 12:09:17.899769 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:09:17.899840 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:09:17.905866 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:09:17.908992 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:09:17.924056 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:09:17.926356 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:09:17.945963 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:09:17.994330 disk-uuid[559]: Primary Header is updated.
Jan 17 12:09:17.994330 disk-uuid[559]: Secondary Entries is updated.
Jan 17 12:09:17.994330 disk-uuid[559]: Secondary Header is updated.
Jan 17 12:09:17.998483 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:09:18.002473 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:09:18.132491 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 17 12:09:18.132586 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 17 12:09:18.137252 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 17 12:09:18.137346 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 17 12:09:18.138472 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 17 12:09:18.139483 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 17 12:09:18.140775 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 17 12:09:18.140790 kernel: ata3.00: applying bridge limits
Jan 17 12:09:18.141470 kernel: ata3.00: configured for UDMA/100
Jan 17 12:09:18.143472 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 17 12:09:18.187476 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 17 12:09:18.201285 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 17 12:09:18.201305 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 17 12:09:19.007489 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:09:19.007808 disk-uuid[573]: The operation has completed successfully.
Jan 17 12:09:19.038920 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 12:09:19.039080 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 12:09:19.067652 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 12:09:19.073483 sh[596]: Success
Jan 17 12:09:19.087480 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 17 12:09:19.121037 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 12:09:19.141138 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 12:09:19.144377 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 12:09:19.159042 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85
Jan 17 12:09:19.159144 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:09:19.159163 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 12:09:19.160015 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 12:09:19.160769 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 12:09:19.165634 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 12:09:19.168292 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 12:09:19.175635 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 12:09:19.177558 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 12:09:19.191278 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:09:19.191335 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:09:19.191350 kernel: BTRFS info (device vda6): using free space tree
Jan 17 12:09:19.194523 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 12:09:19.205496 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 12:09:19.207581 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:09:19.217836 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 12:09:19.227756 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 12:09:19.283401 ignition[697]: Ignition 2.19.0
Jan 17 12:09:19.283415 ignition[697]: Stage: fetch-offline
Jan 17 12:09:19.283465 ignition[697]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:09:19.283476 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 12:09:19.283567 ignition[697]: parsed url from cmdline: ""
Jan 17 12:09:19.283571 ignition[697]: no config URL provided
Jan 17 12:09:19.283577 ignition[697]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:09:19.283586 ignition[697]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:09:19.283615 ignition[697]: op(1): [started] loading QEMU firmware config module
Jan 17 12:09:19.283620 ignition[697]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 17 12:09:19.291645 ignition[697]: op(1): [finished] loading QEMU firmware config module
Jan 17 12:09:19.309674 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:09:19.316647 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:09:19.335510 ignition[697]: parsing config with SHA512: a41aa5fff477fe3591205b3efd2adc62eb963b56d660bc3ffb7711ed39f91f936da757ea51ed31615ecbd528974e43112cfbce5b2238323f990cdaa0ae98ab33
Jan 17 12:09:19.338491 systemd-networkd[785]: lo: Link UP
Jan 17 12:09:19.338500 systemd-networkd[785]: lo: Gained carrier
Jan 17 12:09:19.340175 systemd-networkd[785]: Enumeration completed
Jan 17 12:09:19.340272 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:09:19.340663 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:09:19.340667 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 12:09:19.341427 systemd-networkd[785]: eth0: Link UP
Jan 17 12:09:19.341431 systemd-networkd[785]: eth0: Gained carrier
Jan 17 12:09:19.341437 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:09:19.342179 systemd[1]: Reached target network.target - Network.
Jan 17 12:09:19.356211 unknown[697]: fetched base config from "system"
Jan 17 12:09:19.356227 unknown[697]: fetched user config from "qemu"
Jan 17 12:09:19.357699 ignition[697]: fetch-offline: fetch-offline passed
Jan 17 12:09:19.357817 ignition[697]: Ignition finished successfully
Jan 17 12:09:19.360245 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:09:19.362517 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 17 12:09:19.364580 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.51/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 17 12:09:19.368729 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 12:09:19.385353 ignition[788]: Ignition 2.19.0
Jan 17 12:09:19.385367 ignition[788]: Stage: kargs
Jan 17 12:09:19.385580 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:09:19.385594 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 12:09:19.386561 ignition[788]: kargs: kargs passed
Jan 17 12:09:19.386611 ignition[788]: Ignition finished successfully
Jan 17 12:09:19.390923 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 12:09:19.402752 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 12:09:19.416868 ignition[797]: Ignition 2.19.0
Jan 17 12:09:19.416880 ignition[797]: Stage: disks
Jan 17 12:09:19.417048 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:09:19.417068 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 12:09:19.420125 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 12:09:19.417835 ignition[797]: disks: disks passed
Jan 17 12:09:19.422353 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 12:09:19.417886 ignition[797]: Ignition finished successfully
Jan 17 12:09:19.424658 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 12:09:19.426702 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:09:19.426786 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:09:19.427174 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:09:19.436645 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 12:09:19.449015 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 12:09:19.455434 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 12:09:19.463642 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 12:09:19.556489 kernel: EXT4-fs (vda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none.
Jan 17 12:09:19.557308 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 12:09:19.559610 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 12:09:19.574604 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:09:19.577785 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 12:09:19.580418 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 12:09:19.580493 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 12:09:19.591428 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (815)
Jan 17 12:09:19.591482 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:09:19.591498 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:09:19.591511 kernel: BTRFS info (device vda6): using free space tree
Jan 17 12:09:19.591525 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 12:09:19.582540 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:09:19.594506 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:09:19.596418 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 12:09:19.600222 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 12:09:19.638747 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:09:19.644990 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:09:19.648743 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:09:19.652066 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:09:19.734835 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:09:19.747583 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:09:19.749418 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:09:19.755487 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:09:19.772930 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:09:19.779482 ignition[928]: INFO : Ignition 2.19.0 Jan 17 12:09:19.779482 ignition[928]: INFO : Stage: mount Jan 17 12:09:19.781405 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:09:19.781405 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:09:19.781405 ignition[928]: INFO : mount: mount passed Jan 17 12:09:19.781405 ignition[928]: INFO : Ignition finished successfully Jan 17 12:09:19.785626 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:09:19.799675 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:09:20.159262 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:09:20.167781 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:09:20.178988 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (942) Jan 17 12:09:20.179063 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:09:20.179079 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:09:20.181354 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:09:20.184481 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:09:20.185731 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 12:09:20.213103 ignition[959]: INFO : Ignition 2.19.0 Jan 17 12:09:20.213103 ignition[959]: INFO : Stage: files Jan 17 12:09:20.215577 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:09:20.215577 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:09:20.215577 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:09:20.215577 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:09:20.215577 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:09:20.222983 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:09:20.224721 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:09:20.226730 unknown[959]: wrote ssh authorized keys file for user: core Jan 17 12:09:20.228104 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:09:20.230969 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:09:20.233004 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 17 12:09:20.277599 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 12:09:20.372465 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:09:20.374855 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 12:09:20.374855 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 17 12:09:20.413637 systemd-networkd[785]: eth0: Gained IPv6LL Jan 17 12:09:20.848514 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 12:09:20.937760 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 12:09:20.939887 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:09:20.939887 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:09:20.939887 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:09:20.939887 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:09:20.939887 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:09:20.939887 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:09:20.939887 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:09:20.939887 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:09:20.939887 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:09:20.939887 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:09:20.939887 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:09:20.939887 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:09:20.939887 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:09:20.939887 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 17 12:09:21.249227 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 17 12:09:21.957073 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 17 12:09:21.957073 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 17 12:09:21.960723 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:09:21.960723 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:09:21.960723 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 17 12:09:21.960723 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 17 12:09:21.960723 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 12:09:21.960723 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 12:09:21.960723 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 17 12:09:21.960723 ignition[959]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 17 12:09:22.057509 ignition[959]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 12:09:22.063497 ignition[959]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 12:09:22.079778 ignition[959]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 17 12:09:22.079778 ignition[959]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:09:22.079778 ignition[959]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:09:22.079778 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:09:22.079778 
ignition[959]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:09:22.079778 ignition[959]: INFO : files: files passed Jan 17 12:09:22.079778 ignition[959]: INFO : Ignition finished successfully Jan 17 12:09:22.106092 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:09:22.121684 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:09:22.124755 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:09:22.126749 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:09:22.126913 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:09:22.153453 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Jan 17 12:09:22.157945 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:09:22.157945 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:09:22.169968 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:09:22.173795 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:09:22.177073 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:09:22.184649 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:09:22.213497 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:09:22.213666 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:09:22.216379 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:09:22.218486 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:09:22.220594 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:09:22.230648 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:09:22.247785 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:09:22.250795 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:09:22.266229 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:09:22.269125 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:09:22.272010 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:09:22.274283 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:09:22.275570 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:09:22.278662 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:09:22.281207 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:09:22.283470 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:09:22.285900 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:09:22.288781 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:09:22.291530 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Jan 17 12:09:22.293919 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:09:22.296464 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:09:22.298529 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:09:22.300763 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:09:22.302757 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:09:22.304038 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:09:22.306993 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:09:22.309690 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:09:22.312521 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:09:22.313804 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:09:22.316965 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:09:22.318173 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:09:22.320965 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:09:22.334567 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:09:22.337567 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:09:22.339741 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:09:22.344495 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:09:22.347863 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:09:22.350148 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:09:22.352108 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:09:22.353175 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:09:22.355571 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:09:22.356657 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:09:22.359170 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:09:22.360605 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:09:22.363696 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:09:22.364882 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:09:22.395708 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:09:22.397714 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:09:22.398824 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:09:22.402526 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:09:22.404360 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:09:22.405480 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:09:22.408137 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:09:22.409480 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 17 12:09:22.414340 ignition[1013]: INFO : Ignition 2.19.0 Jan 17 12:09:22.414340 ignition[1013]: INFO : Stage: umount Jan 17 12:09:22.414340 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:09:22.414340 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:09:22.414340 ignition[1013]: INFO : umount: umount passed Jan 17 12:09:22.414340 ignition[1013]: INFO : Ignition finished successfully Jan 17 12:09:22.416331 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:09:22.416474 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:09:22.435629 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:09:22.435761 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:09:22.441785 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:09:22.443148 systemd[1]: Stopped target network.target - Network. Jan 17 12:09:22.444920 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:09:22.444994 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:09:22.447834 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:09:22.447887 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:09:22.450806 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:09:22.450860 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:09:22.454011 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:09:22.454080 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:09:22.465306 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:09:22.468830 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:09:22.474513 systemd-networkd[785]: eth0: DHCPv6 lease lost Jan 17 12:09:22.490017 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:09:22.493125 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:09:22.496748 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:09:22.498016 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:09:22.501340 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:09:22.502462 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:09:22.521579 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:09:22.522642 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:09:22.523781 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:09:22.526365 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:09:22.526417 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:09:22.530167 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:09:22.531239 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:09:22.534334 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:09:22.534394 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:09:22.537939 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:09:22.585966 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jan 17 12:09:22.587316 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:09:22.591136 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:09:22.592326 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:09:22.596597 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:09:22.597897 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:09:22.600472 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:09:22.601669 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:09:22.604413 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:09:22.604570 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:09:22.607860 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:09:22.607928 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:09:22.610989 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:09:22.611059 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:09:22.626723 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:09:22.626851 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:09:22.626926 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:09:22.652738 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:09:22.652840 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:09:22.656635 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:09:22.656766 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:09:22.936819 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:09:22.936983 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:09:22.939148 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:09:22.940807 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:09:22.940866 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:09:22.951764 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:09:22.962907 systemd[1]: Switching root. Jan 17 12:09:22.994735 systemd-journald[192]: Journal stopped Jan 17 12:09:24.294120 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Jan 17 12:09:24.294204 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:09:24.294218 kernel: SELinux: policy capability open_perms=1 Jan 17 12:09:24.294230 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:09:24.294241 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:09:24.294251 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:09:24.294263 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:09:24.294277 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:09:24.294294 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:09:24.294310 kernel: audit: type=1403 audit(1737115763.487:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:09:24.294322 systemd[1]: Successfully loaded SELinux policy in 43.370ms. 
Jan 17 12:09:24.294342 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.286ms. Jan 17 12:09:24.294355 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:09:24.294367 systemd[1]: Detected virtualization kvm. Jan 17 12:09:24.294379 systemd[1]: Detected architecture x86-64. Jan 17 12:09:24.294390 systemd[1]: Detected first boot. Jan 17 12:09:24.294406 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:09:24.294418 zram_generator::config[1057]: No configuration found. Jan 17 12:09:24.294431 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:09:24.294456 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 12:09:24.294468 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 12:09:24.294480 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 12:09:24.294492 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:09:24.294504 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:09:24.294518 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:09:24.294530 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:09:24.294542 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:09:24.294554 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:09:24.294565 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:09:24.294577 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:09:24.294589 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:09:24.294606 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:09:24.294618 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:09:24.294632 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:09:24.294644 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 12:09:24.294656 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:09:24.294668 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 12:09:24.294682 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:09:24.294694 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 12:09:24.294706 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 12:09:24.294722 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 12:09:24.294736 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:09:24.294748 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:09:24.294760 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Jan 17 12:09:24.294772 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:09:24.294784 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:09:24.294795 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:09:24.294807 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:09:24.294818 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:09:24.294831 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:09:24.294845 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:09:24.294856 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:09:24.294868 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:09:24.294880 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:09:24.294892 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:09:24.294904 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:09:24.294916 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:09:24.294935 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:09:24.294949 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:09:24.294964 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:09:24.294982 systemd[1]: Reached target machines.target - Containers. Jan 17 12:09:24.294996 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:09:24.295008 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:09:24.295022 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:09:24.295035 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:09:24.295049 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:09:24.295061 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:09:24.295075 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:09:24.295087 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:09:24.295099 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:09:24.295111 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:09:24.295123 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 12:09:24.295135 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 12:09:24.295147 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 12:09:24.295158 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 12:09:24.295173 kernel: loop: module loaded Jan 17 12:09:24.295184 kernel: fuse: init (API version 7.39) Jan 17 12:09:24.295195 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:09:24.295207 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 17 12:09:24.295220 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:09:24.295232 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:09:24.295261 systemd-journald[1124]: Collecting audit messages is disabled. Jan 17 12:09:24.295285 systemd-journald[1124]: Journal started Jan 17 12:09:24.295308 systemd-journald[1124]: Runtime Journal (/run/log/journal/dcc8c56f5b5b4e3aae7dd2a2bb68ae93) is 6.0M, max 48.3M, 42.2M free. Jan 17 12:09:24.029502 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:09:24.049656 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 12:09:24.050302 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 12:09:24.298660 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:09:24.303059 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 12:09:24.304259 kernel: ACPI: bus type drm_connector registered Jan 17 12:09:24.304287 systemd[1]: Stopped verity-setup.service. Jan 17 12:09:24.304308 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:09:24.309481 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:09:24.311150 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:09:24.312774 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:09:24.314516 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:09:24.316003 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:09:24.317684 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:09:24.319265 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:09:24.320967 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:09:24.323014 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:09:24.325007 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:09:24.325246 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:09:24.327417 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:09:24.327658 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:09:24.329684 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:09:24.329934 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:09:24.331993 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:09:24.332219 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:09:24.334390 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:09:24.334621 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:09:24.336583 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:09:24.336804 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:09:24.338814 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:09:24.340837 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jan 17 12:09:24.343000 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:09:24.359766 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:09:24.372626 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:09:24.375306 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:09:24.376683 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:09:24.376714 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:09:24.378968 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:09:24.381473 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:09:24.385603 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:09:24.387076 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:09:24.388816 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:09:24.392504 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:09:24.394589 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:09:24.396194 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:09:24.397968 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:09:24.399370 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:09:24.403687 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:09:24.407673 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:09:24.418949 systemd-journald[1124]: Time spent on flushing to /var/log/journal/dcc8c56f5b5b4e3aae7dd2a2bb68ae93 is 16.800ms for 998 entries. Jan 17 12:09:24.418949 systemd-journald[1124]: System Journal (/var/log/journal/dcc8c56f5b5b4e3aae7dd2a2bb68ae93) is 8.0M, max 195.6M, 187.6M free. Jan 17 12:09:24.451822 systemd-journald[1124]: Received client request to flush runtime journal. Jan 17 12:09:24.451936 kernel: loop0: detected capacity change from 0 to 140768 Jan 17 12:09:24.413564 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:09:24.416182 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:09:24.420324 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:09:24.422196 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:09:24.441721 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:09:24.445204 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:09:24.447976 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:09:24.452659 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:09:24.459720 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Jan 17 12:09:24.462008 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:09:24.465488 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:09:24.466580 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 12:09:24.475106 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:09:24.482700 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:09:24.488314 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:09:24.490204 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:09:24.495623 kernel: loop1: detected capacity change from 0 to 142488 Jan 17 12:09:24.511624 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Jan 17 12:09:24.511649 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Jan 17 12:09:24.522572 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:09:24.529464 kernel: loop2: detected capacity change from 0 to 210664 Jan 17 12:09:24.569574 kernel: loop3: detected capacity change from 0 to 140768 Jan 17 12:09:24.583483 kernel: loop4: detected capacity change from 0 to 142488 Jan 17 12:09:24.594475 kernel: loop5: detected capacity change from 0 to 210664 Jan 17 12:09:24.600752 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 17 12:09:24.602014 (sd-merge)[1197]: Merged extensions into '/usr'. Jan 17 12:09:24.606413 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:09:24.606430 systemd[1]: Reloading... Jan 17 12:09:24.671555 zram_generator::config[1226]: No configuration found. Jan 17 12:09:24.763122 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:09:24.797025 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:09:24.849769 systemd[1]: Reloading finished in 242 ms. Jan 17 12:09:24.881906 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:09:24.883588 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:09:24.896659 systemd[1]: Starting ensure-sysext.service... Jan 17 12:09:24.916356 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:09:24.924854 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:09:24.924872 systemd[1]: Reloading... Jan 17 12:09:24.941241 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:09:24.941623 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:09:24.942637 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:09:24.943065 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jan 17 12:09:24.943172 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. 
Jan 17 12:09:24.947714 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:09:24.947731 systemd-tmpfiles[1261]: Skipping /boot Jan 17 12:09:24.962490 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:09:24.962508 systemd-tmpfiles[1261]: Skipping /boot Jan 17 12:09:25.006472 zram_generator::config[1288]: No configuration found. Jan 17 12:09:25.110375 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:09:25.162078 systemd[1]: Reloading finished in 236 ms. Jan 17 12:09:25.181397 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:09:25.198952 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:09:25.273699 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:09:25.298658 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:09:25.302527 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:09:25.306594 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:09:25.339726 augenrules[1346]: No rules Jan 17 12:09:25.340988 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:09:25.345508 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:09:25.345693 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:09:25.358767 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:09:25.361246 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:09:25.364443 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:09:25.366325 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:09:25.403883 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:09:25.405358 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:09:25.406919 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:09:25.407165 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:09:25.409199 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:09:25.409486 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:09:25.413712 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:09:25.415661 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:09:25.416115 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:09:25.428726 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:09:25.435382 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 17 12:09:25.435937 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:09:25.441753 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:09:25.444972 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:09:25.448721 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:09:25.453057 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:09:25.454343 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:09:25.454578 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:09:25.456537 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:09:25.458691 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:09:25.458935 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:09:25.460670 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:09:25.463661 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:09:25.464179 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:09:25.466250 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:09:25.468911 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:09:25.469091 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:09:25.471076 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:09:25.471263 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:09:25.476264 systemd[1]: Finished ensure-sysext.service. Jan 17 12:09:25.482145 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:09:25.482225 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:09:25.500685 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 12:09:25.503753 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:09:25.506565 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:09:25.507909 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:09:25.524673 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:09:25.536859 systemd-resolved[1343]: Positive Trust Anchors: Jan 17 12:09:25.536875 systemd-resolved[1343]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:09:25.536915 systemd-resolved[1343]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:09:25.540791 systemd-resolved[1343]: Defaulting to hostname 'linux'. Jan 17 12:09:25.542612 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:09:25.544025 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:09:25.545467 systemd-udevd[1378]: Using default interface naming scheme 'v255'. Jan 17 12:09:25.565288 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:09:25.576358 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:09:25.587685 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 12:09:25.589301 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:09:25.645518 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 12:09:25.648495 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1389) Jan 17 12:09:25.649941 systemd-networkd[1385]: lo: Link UP Jan 17 12:09:25.650275 systemd-networkd[1385]: lo: Gained carrier Jan 17 12:09:25.652515 systemd-networkd[1385]: Enumeration completed Jan 17 12:09:25.652976 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:09:25.653255 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:09:25.653330 systemd-networkd[1385]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:09:25.654306 systemd-networkd[1385]: eth0: Link UP Jan 17 12:09:25.654372 systemd-networkd[1385]: eth0: Gained carrier Jan 17 12:09:25.654435 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:09:25.655206 systemd[1]: Reached target network.target - Network. Jan 17 12:09:25.662633 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:09:25.668541 systemd-networkd[1385]: eth0: DHCPv4 address 10.0.0.51/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 12:09:25.672838 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. Jan 17 12:09:25.700016 systemd-timesyncd[1377]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 17 12:09:25.700231 systemd-timesyncd[1377]: Initial clock synchronization to Fri 2025-01-17 12:09:25.593893 UTC. Jan 17 12:09:25.707332 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 17 12:09:25.720475 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 17 12:09:25.724527 kernel: ACPI: button: Power Button [PWRF] Jan 17 12:09:25.728356 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 17 12:09:25.734407 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 17 12:09:25.734640 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 17 12:09:25.736310 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 17 12:09:25.747468 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 17 12:09:25.778809 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:09:25.790863 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:09:25.796760 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:09:25.800511 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 12:09:25.827255 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:09:25.829649 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:09:25.883132 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:09:25.885411 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:09:25.912612 kernel: kvm_amd: TSC scaling supported Jan 17 12:09:25.912700 kernel: kvm_amd: Nested Virtualization enabled Jan 17 12:09:25.912714 kernel: kvm_amd: Nested Paging enabled Jan 17 12:09:25.913621 kernel: kvm_amd: LBR virtualization supported Jan 17 12:09:25.913639 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 17 12:09:25.914830 kernel: kvm_amd: Virtual GIF supported Jan 17 12:09:25.937487 kernel: EDAC MC: Ver: 3.0.0 Jan 17 12:09:25.962500 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:09:25.983768 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:09:25.994758 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:09:26.005116 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:09:26.040922 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:09:26.042588 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:09:26.043871 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:09:26.045194 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:09:26.046624 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:09:26.048327 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:09:26.049627 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:09:26.051056 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:09:26.052483 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:09:26.052516 systemd[1]: Reached target paths.target - Path Units. 
Jan 17 12:09:26.053611 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:09:26.055467 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:09:26.058115 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:09:26.064945 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:09:26.067391 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:09:26.069095 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:09:26.070389 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:09:26.071514 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:09:26.072628 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:09:26.072656 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:09:26.073594 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:09:26.075995 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:09:26.080480 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:09:26.080876 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:09:26.086620 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:09:26.087924 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:09:26.090847 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:09:26.091777 jq[1436]: false Jan 17 12:09:26.093751 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:09:26.098636 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:09:26.106649 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:09:26.111825 extend-filesystems[1437]: Found loop3 Jan 17 12:09:26.111825 extend-filesystems[1437]: Found loop4 Jan 17 12:09:26.111825 extend-filesystems[1437]: Found loop5 Jan 17 12:09:26.111825 extend-filesystems[1437]: Found sr0 Jan 17 12:09:26.116390 extend-filesystems[1437]: Found vda Jan 17 12:09:26.116390 extend-filesystems[1437]: Found vda1 Jan 17 12:09:26.116390 extend-filesystems[1437]: Found vda2 Jan 17 12:09:26.116390 extend-filesystems[1437]: Found vda3 Jan 17 12:09:26.116390 extend-filesystems[1437]: Found usr Jan 17 12:09:26.116390 extend-filesystems[1437]: Found vda4 Jan 17 12:09:26.116390 extend-filesystems[1437]: Found vda6 Jan 17 12:09:26.116390 extend-filesystems[1437]: Found vda7 Jan 17 12:09:26.116390 extend-filesystems[1437]: Found vda9 Jan 17 12:09:26.116390 extend-filesystems[1437]: Checking size of /dev/vda9 Jan 17 12:09:26.113912 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:09:26.127010 dbus-daemon[1435]: [system] SELinux support is enabled Jan 17 12:09:26.128098 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:09:26.128816 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:09:26.130636 systemd[1]: Starting update-engine.service - Update Engine... 
Jan 17 12:09:26.133105 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:09:26.136866 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:09:26.144647 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:09:26.145029 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:09:26.145466 jq[1451]: true Jan 17 12:09:26.145556 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:09:26.145788 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:09:26.181751 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:09:26.183823 update_engine[1450]: I20250117 12:09:26.182343 1450 main.cc:92] Flatcar Update Engine starting Jan 17 12:09:26.183823 update_engine[1450]: I20250117 12:09:26.183672 1450 update_check_scheduler.cc:74] Next update check in 9m51s Jan 17 12:09:26.185103 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:09:26.185588 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:09:26.186176 extend-filesystems[1437]: Resized partition /dev/vda9 Jan 17 12:09:26.191131 extend-filesystems[1460]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:09:26.198468 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1399) Jan 17 12:09:26.198510 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 17 12:09:26.210688 (ntainerd)[1463]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:09:26.237797 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 17 12:09:26.263014 jq[1461]: true Jan 17 12:09:26.234481 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:09:26.263211 tar[1454]: linux-amd64/helm Jan 17 12:09:26.263424 extend-filesystems[1460]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 12:09:26.263424 extend-filesystems[1460]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 12:09:26.263424 extend-filesystems[1460]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 17 12:09:26.244881 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:09:26.298563 extend-filesystems[1437]: Resized filesystem in /dev/vda9 Jan 17 12:09:26.244914 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:09:26.248673 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:09:26.248693 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:09:26.257800 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:09:26.263158 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:09:26.263478 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
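
The resize2fs figures above are easy to sanity-check: 553472 and 1864699 blocks of 4 KiB come to roughly 2.1 GiB and 7.1 GiB, i.e. the root filesystem on /dev/vda9 was grown online to fill the partition on first boot. A quick check of the arithmetic:

    package main

    import "fmt"

    func main() {
        const blockSize = 4096 // "(4k) blocks" in the resize2fs output
        gib := func(blocks int64) float64 {
            return float64(blocks) * blockSize / (1 << 30)
        }
        fmt.Printf("/dev/vda9: %.2f GiB -> %.2f GiB\n", gib(553472), gib(1864699))
        // /dev/vda9: 2.11 GiB -> 7.11 GiB
    }
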
Jan 17 12:09:26.300297 systemd-logind[1445]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 12:09:26.300317 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 12:09:26.304616 systemd-logind[1445]: New seat seat0. Jan 17 12:09:26.309969 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:09:26.349280 locksmithd[1475]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:09:26.375733 bash[1494]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:09:26.382542 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:09:26.385350 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 12:09:26.389433 sshd_keygen[1459]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:09:26.464179 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:09:26.472826 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:09:26.481570 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:09:26.481875 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:09:26.486509 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:09:26.529425 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:09:26.548790 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:09:26.552491 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:09:26.553992 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:09:26.727179 containerd[1463]: time="2025-01-17T12:09:26.726955832Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:09:26.753923 containerd[1463]: time="2025-01-17T12:09:26.753847018Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:09:26.755959 containerd[1463]: time="2025-01-17T12:09:26.755899120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:09:26.755959 containerd[1463]: time="2025-01-17T12:09:26.755938133Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:09:26.755959 containerd[1463]: time="2025-01-17T12:09:26.755955351Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:09:26.756201 containerd[1463]: time="2025-01-17T12:09:26.756177808Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:09:26.756239 containerd[1463]: time="2025-01-17T12:09:26.756204102Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:09:26.756306 containerd[1463]: time="2025-01-17T12:09:26.756280109Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:09:26.756306 containerd[1463]: time="2025-01-17T12:09:26.756302751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jan 17 12:09:26.756558 containerd[1463]: time="2025-01-17T12:09:26.756524631Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:09:26.756558 containerd[1463]: time="2025-01-17T12:09:26.756546157Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:09:26.756617 containerd[1463]: time="2025-01-17T12:09:26.756562051Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:09:26.756617 containerd[1463]: time="2025-01-17T12:09:26.756572333Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:09:26.756683 containerd[1463]: time="2025-01-17T12:09:26.756663517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:09:26.756967 containerd[1463]: time="2025-01-17T12:09:26.756931127Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:09:26.757130 containerd[1463]: time="2025-01-17T12:09:26.757093820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:09:26.757130 containerd[1463]: time="2025-01-17T12:09:26.757120114Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:09:26.757291 containerd[1463]: time="2025-01-17T12:09:26.757262772Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 12:09:26.757347 containerd[1463]: time="2025-01-17T12:09:26.757330588Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:09:26.764156 containerd[1463]: time="2025-01-17T12:09:26.764100303Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:09:26.764226 containerd[1463]: time="2025-01-17T12:09:26.764175265Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:09:26.764226 containerd[1463]: time="2025-01-17T12:09:26.764200394Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:09:26.764226 containerd[1463]: time="2025-01-17T12:09:26.764222201Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:09:26.764319 containerd[1463]: time="2025-01-17T12:09:26.764244623Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:09:26.764518 containerd[1463]: time="2025-01-17T12:09:26.764489225Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:09:26.765276 containerd[1463]: time="2025-01-17T12:09:26.764970982Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jan 17 12:09:26.765661 containerd[1463]: time="2025-01-17T12:09:26.765628907Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:09:26.765767 containerd[1463]: time="2025-01-17T12:09:26.765733518Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:09:26.765849 containerd[1463]: time="2025-01-17T12:09:26.765829100Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:09:26.765922 containerd[1463]: time="2025-01-17T12:09:26.765907466Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:09:26.765994 containerd[1463]: time="2025-01-17T12:09:26.765976884Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:09:26.766298 containerd[1463]: time="2025-01-17T12:09:26.766259633Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:09:26.766548 containerd[1463]: time="2025-01-17T12:09:26.766524866Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:09:26.766582 containerd[1463]: time="2025-01-17T12:09:26.766561778Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:09:26.766838 containerd[1463]: time="2025-01-17T12:09:26.766581813Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:09:26.766838 containerd[1463]: time="2025-01-17T12:09:26.766600156Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:09:26.766838 containerd[1463]: time="2025-01-17T12:09:26.766617403Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:09:26.766838 containerd[1463]: time="2025-01-17T12:09:26.766646374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:09:26.766838 containerd[1463]: time="2025-01-17T12:09:26.766665384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:09:26.766838 containerd[1463]: time="2025-01-17T12:09:26.766682283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:09:26.766838 containerd[1463]: time="2025-01-17T12:09:26.766722849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:09:26.766838 containerd[1463]: time="2025-01-17T12:09:26.766741490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:09:26.766838 containerd[1463]: time="2025-01-17T12:09:26.766769735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:09:26.766838 containerd[1463]: time="2025-01-17T12:09:26.766787749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:09:26.766838 containerd[1463]: time="2025-01-17T12:09:26.766804319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jan 17 12:09:26.766838 containerd[1463]: time="2025-01-17T12:09:26.766821647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:09:26.766838 containerd[1463]: time="2025-01-17T12:09:26.766843712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:09:26.767161 containerd[1463]: time="2025-01-17T12:09:26.766858789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:09:26.767161 containerd[1463]: time="2025-01-17T12:09:26.766875181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:09:26.767161 containerd[1463]: time="2025-01-17T12:09:26.766891622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:09:26.767161 containerd[1463]: time="2025-01-17T12:09:26.766911368Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:09:26.767161 containerd[1463]: time="2025-01-17T12:09:26.766940141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:09:26.767161 containerd[1463]: time="2025-01-17T12:09:26.766956174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:09:26.767161 containerd[1463]: time="2025-01-17T12:09:26.766969849Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:09:26.767161 containerd[1463]: time="2025-01-17T12:09:26.767034559Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:09:26.767161 containerd[1463]: time="2025-01-17T12:09:26.767055399Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:09:26.767161 containerd[1463]: time="2025-01-17T12:09:26.767068755Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:09:26.767161 containerd[1463]: time="2025-01-17T12:09:26.767082739Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:09:26.767161 containerd[1463]: time="2025-01-17T12:09:26.767094134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:09:26.767161 containerd[1463]: time="2025-01-17T12:09:26.767110208Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:09:26.767161 containerd[1463]: time="2025-01-17T12:09:26.767144554Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:09:26.767526 containerd[1463]: time="2025-01-17T12:09:26.767157999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 12:09:26.767624 containerd[1463]: time="2025-01-17T12:09:26.767556614Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:09:26.767624 containerd[1463]: time="2025-01-17T12:09:26.767630650Z" level=info msg="Connect containerd service" Jan 17 12:09:26.767924 containerd[1463]: time="2025-01-17T12:09:26.767675675Z" level=info msg="using legacy CRI server" Jan 17 12:09:26.767924 containerd[1463]: time="2025-01-17T12:09:26.767684582Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:09:26.767924 containerd[1463]: time="2025-01-17T12:09:26.767798200Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:09:26.768753 containerd[1463]: time="2025-01-17T12:09:26.768652765Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:09:26.768910 
containerd[1463]: time="2025-01-17T12:09:26.768857496Z" level=info msg="Start subscribing containerd event" Jan 17 12:09:26.769040 containerd[1463]: time="2025-01-17T12:09:26.768927840Z" level=info msg="Start recovering state" Jan 17 12:09:26.769040 containerd[1463]: time="2025-01-17T12:09:26.769017959Z" level=info msg="Start event monitor" Jan 17 12:09:26.769104 containerd[1463]: time="2025-01-17T12:09:26.769082053Z" level=info msg="Start snapshots syncer" Jan 17 12:09:26.769512 containerd[1463]: time="2025-01-17T12:09:26.769107124Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:09:26.769512 containerd[1463]: time="2025-01-17T12:09:26.769188375Z" level=info msg="Start streaming server" Jan 17 12:09:26.769512 containerd[1463]: time="2025-01-17T12:09:26.769418695Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:09:26.769512 containerd[1463]: time="2025-01-17T12:09:26.769503927Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:09:26.769684 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:09:26.769992 containerd[1463]: time="2025-01-17T12:09:26.769842897Z" level=info msg="containerd successfully booted in 0.044151s" Jan 17 12:09:26.936974 tar[1454]: linux-amd64/LICENSE Jan 17 12:09:26.937128 tar[1454]: linux-amd64/README.md Jan 17 12:09:26.954265 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:09:27.197888 systemd-networkd[1385]: eth0: Gained IPv6LL Jan 17 12:09:27.202518 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:09:27.204529 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:09:27.213713 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 17 12:09:27.216699 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:09:27.219185 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:09:27.237572 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 12:09:27.237840 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 17 12:09:27.240294 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:09:27.246172 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:09:28.508405 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:09:28.510326 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:09:28.511789 systemd[1]: Startup finished in 800ms (kernel) + 6.774s (initrd) + 5.065s (userspace) = 12.640s. Jan 17 12:09:28.514361 (kubelet)[1548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:09:29.386952 kubelet[1548]: E0117 12:09:29.386868 1548 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:09:29.392176 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:09:29.392368 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:09:29.392764 systemd[1]: kubelet.service: Consumed 2.035s CPU time. 
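
The kubelet failure above is expected at this stage: the unit starts before /var/lib/kubelet/config.yaml has been written (that normally happens later, e.g. during kubeadm init/join), so the config load fails and systemd restarts the unit on a timer. The failure mode reduces to a missing-file read; a minimal reproduction:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        const path = "/var/lib/kubelet/config.yaml"
        if _, err := os.ReadFile(path); err != nil {
            // same shape as the run.go:74 error in the journal
            fmt.Printf("failed to load kubelet config file, path: %s, error: %v\n", path, err)
            os.Exit(1) // systemd records this as status=1/FAILURE
        }
        fmt.Println("config present")
    }
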
Jan 17 12:09:31.167314 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:09:31.168804 systemd[1]: Started sshd@0-10.0.0.51:22-10.0.0.1:57898.service - OpenSSH per-connection server daemon (10.0.0.1:57898). Jan 17 12:09:31.211294 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 57898 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:09:31.213217 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:09:31.221664 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:09:31.231703 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:09:31.233613 systemd-logind[1445]: New session 1 of user core. Jan 17 12:09:31.246485 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:09:31.249649 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:09:31.260015 (systemd)[1566]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:09:31.364464 systemd[1566]: Queued start job for default target default.target. Jan 17 12:09:31.379849 systemd[1566]: Created slice app.slice - User Application Slice. Jan 17 12:09:31.379885 systemd[1566]: Reached target paths.target - Paths. Jan 17 12:09:31.379904 systemd[1566]: Reached target timers.target - Timers. Jan 17 12:09:31.381607 systemd[1566]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:09:31.392923 systemd[1566]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:09:31.393084 systemd[1566]: Reached target sockets.target - Sockets. Jan 17 12:09:31.393108 systemd[1566]: Reached target basic.target - Basic System. Jan 17 12:09:31.393154 systemd[1566]: Reached target default.target - Main User Target. Jan 17 12:09:31.393196 systemd[1566]: Startup finished in 124ms. Jan 17 12:09:31.393774 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:09:31.395769 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:09:31.454394 systemd[1]: Started sshd@1-10.0.0.51:22-10.0.0.1:57910.service - OpenSSH per-connection server daemon (10.0.0.1:57910). Jan 17 12:09:31.486797 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 57910 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:09:31.488358 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:09:31.492205 systemd-logind[1445]: New session 2 of user core. Jan 17 12:09:31.505586 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:09:31.559708 sshd[1577]: pam_unix(sshd:session): session closed for user core Jan 17 12:09:31.569296 systemd[1]: sshd@1-10.0.0.51:22-10.0.0.1:57910.service: Deactivated successfully. Jan 17 12:09:31.571123 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:09:31.572877 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:09:31.584695 systemd[1]: Started sshd@2-10.0.0.51:22-10.0.0.1:57922.service - OpenSSH per-connection server daemon (10.0.0.1:57922). Jan 17 12:09:31.585745 systemd-logind[1445]: Removed session 2. 
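
The "Accepted publickey ... SHA256:+651..." lines use OpenSSH's base64-encoded SHA-256 fingerprint of the client key. The same value can be computed from any OpenSSH public-key file; a sketch using golang.org/x/crypto/ssh (the input path is an assumption, not something the log names):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        raw, err := os.ReadFile("/etc/ssh/ssh_host_rsa_key.pub") // assumed path
        if err != nil {
            panic(err)
        }
        pub, _, _, _, err := ssh.ParseAuthorizedKey(raw)
        if err != nil {
            panic(err)
        }
        // prints the same "SHA256:..." form sshd logs for accepted keys
        fmt.Println(ssh.FingerprintSHA256(pub))
    }
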
Jan 17 12:09:31.612756 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 57922 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:09:31.614617 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:09:31.618911 systemd-logind[1445]: New session 3 of user core. Jan 17 12:09:31.635712 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:09:31.685563 sshd[1584]: pam_unix(sshd:session): session closed for user core Jan 17 12:09:31.700312 systemd[1]: sshd@2-10.0.0.51:22-10.0.0.1:57922.service: Deactivated successfully. Jan 17 12:09:31.701892 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:09:31.703612 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:09:31.710906 systemd[1]: Started sshd@3-10.0.0.51:22-10.0.0.1:57924.service - OpenSSH per-connection server daemon (10.0.0.1:57924). Jan 17 12:09:31.711917 systemd-logind[1445]: Removed session 3. Jan 17 12:09:31.740654 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 57924 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:09:31.742190 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:09:31.746552 systemd-logind[1445]: New session 4 of user core. Jan 17 12:09:31.760763 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:09:31.815320 sshd[1591]: pam_unix(sshd:session): session closed for user core Jan 17 12:09:31.827149 systemd[1]: sshd@3-10.0.0.51:22-10.0.0.1:57924.service: Deactivated successfully. Jan 17 12:09:31.828876 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:09:31.830641 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:09:31.840676 systemd[1]: Started sshd@4-10.0.0.51:22-10.0.0.1:57934.service - OpenSSH per-connection server daemon (10.0.0.1:57934). Jan 17 12:09:31.841499 systemd-logind[1445]: Removed session 4. Jan 17 12:09:31.869194 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 57934 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:09:31.870690 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:09:31.874366 systemd-logind[1445]: New session 5 of user core. Jan 17 12:09:31.889559 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:09:31.948580 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:09:31.948930 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:09:31.961588 sudo[1601]: pam_unix(sudo:session): session closed for user root Jan 17 12:09:31.963337 sshd[1598]: pam_unix(sshd:session): session closed for user core Jan 17 12:09:31.973412 systemd[1]: sshd@4-10.0.0.51:22-10.0.0.1:57934.service: Deactivated successfully. Jan 17 12:09:31.975188 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:09:31.976997 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:09:31.987712 systemd[1]: Started sshd@5-10.0.0.51:22-10.0.0.1:57946.service - OpenSSH per-connection server daemon (10.0.0.1:57946). Jan 17 12:09:31.988634 systemd-logind[1445]: Removed session 5. 
Jan 17 12:09:32.015848 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 57946 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:09:32.017198 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:09:32.021112 systemd-logind[1445]: New session 6 of user core. Jan 17 12:09:32.030572 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:09:32.084119 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:09:32.084487 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:09:32.088409 sudo[1610]: pam_unix(sudo:session): session closed for user root Jan 17 12:09:32.094553 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:09:32.094878 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:09:32.113645 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:09:32.115286 auditctl[1613]: No rules Jan 17 12:09:32.115743 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:09:32.115946 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:09:32.118632 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:09:32.152054 augenrules[1631]: No rules Jan 17 12:09:32.154135 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:09:32.155509 sudo[1609]: pam_unix(sudo:session): session closed for user root Jan 17 12:09:32.157692 sshd[1606]: pam_unix(sshd:session): session closed for user core Jan 17 12:09:32.169989 systemd[1]: sshd@5-10.0.0.51:22-10.0.0.1:57946.service: Deactivated successfully. Jan 17 12:09:32.172581 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:09:32.174874 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:09:32.183902 systemd[1]: Started sshd@6-10.0.0.51:22-10.0.0.1:57948.service - OpenSSH per-connection server daemon (10.0.0.1:57948). Jan 17 12:09:32.185171 systemd-logind[1445]: Removed session 6. Jan 17 12:09:32.212085 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 57948 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:09:32.213900 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:09:32.218336 systemd-logind[1445]: New session 7 of user core. Jan 17 12:09:32.227750 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:09:32.282035 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:09:32.282381 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:09:32.608733 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 12:09:32.608948 (dockerd)[1660]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:09:32.891237 dockerd[1660]: time="2025-01-17T12:09:32.891089596Z" level=info msg="Starting up" Jan 17 12:09:33.254869 dockerd[1660]: time="2025-01-17T12:09:33.254705968Z" level=info msg="Loading containers: start." 
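
dockerd is now coming up; once initialization finishes it announces "API listen on /run/docker.sock" a few lines below. That API is plain HTTP over a unix socket, so a health probe needs only the standard library; a sketch (the /_ping endpoint returns the literal body "OK"):

    package main

    import (
        "context"
        "fmt"
        "io"
        "net"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            // route every request to the daemon's unix socket from the log
            DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
            },
        }}
        resp, err := client.Get("http://docker/_ping") // host part is ignored
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, string(body)) // expect: 200 OK OK
    }
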
Jan 17 12:09:33.377475 kernel: Initializing XFRM netlink socket Jan 17 12:09:33.461383 systemd-networkd[1385]: docker0: Link UP Jan 17 12:09:33.486055 dockerd[1660]: time="2025-01-17T12:09:33.486008095Z" level=info msg="Loading containers: done." Jan 17 12:09:33.500594 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3942822179-merged.mount: Deactivated successfully. Jan 17 12:09:33.503417 dockerd[1660]: time="2025-01-17T12:09:33.503352213Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:09:33.503557 dockerd[1660]: time="2025-01-17T12:09:33.503509493Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:09:33.503706 dockerd[1660]: time="2025-01-17T12:09:33.503676371Z" level=info msg="Daemon has completed initialization" Jan 17 12:09:33.777125 dockerd[1660]: time="2025-01-17T12:09:33.776978989Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:09:33.777227 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:09:34.583158 containerd[1463]: time="2025-01-17T12:09:34.583106669Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 17 12:09:35.904242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1214828981.mount: Deactivated successfully. Jan 17 12:09:37.726508 containerd[1463]: time="2025-01-17T12:09:37.726457762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:37.727420 containerd[1463]: time="2025-01-17T12:09:37.727380534Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 17 12:09:37.728520 containerd[1463]: time="2025-01-17T12:09:37.728495429Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:37.731580 containerd[1463]: time="2025-01-17T12:09:37.731552133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:37.732798 containerd[1463]: time="2025-01-17T12:09:37.732768219Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 3.149613263s" Jan 17 12:09:37.732843 containerd[1463]: time="2025-01-17T12:09:37.732809172Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 17 12:09:37.756581 containerd[1463]: time="2025-01-17T12:09:37.756523663Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 17 12:09:39.462656 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:09:39.476640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 17 12:09:39.706903 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:09:39.710945 (kubelet)[1888]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:09:39.940199 kubelet[1888]: E0117 12:09:39.939766 1888 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:09:39.948178 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:09:39.948434 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:09:40.605969 containerd[1463]: time="2025-01-17T12:09:40.605877429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:40.630525 containerd[1463]: time="2025-01-17T12:09:40.630432318Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 17 12:09:40.691361 containerd[1463]: time="2025-01-17T12:09:40.691324985Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:40.727989 containerd[1463]: time="2025-01-17T12:09:40.727939533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:40.729095 containerd[1463]: time="2025-01-17T12:09:40.729065401Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.972490704s" Jan 17 12:09:40.729159 containerd[1463]: time="2025-01-17T12:09:40.729094877Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 17 12:09:40.777924 containerd[1463]: time="2025-01-17T12:09:40.777856916Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 17 12:09:41.779892 containerd[1463]: time="2025-01-17T12:09:41.779837267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:41.780739 containerd[1463]: time="2025-01-17T12:09:41.780702302Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 17 12:09:41.782390 containerd[1463]: time="2025-01-17T12:09:41.782333435Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:41.785382 containerd[1463]: time="2025-01-17T12:09:41.785338816Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:41.786435 containerd[1463]: time="2025-01-17T12:09:41.786372063Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.008470181s" Jan 17 12:09:41.786494 containerd[1463]: time="2025-01-17T12:09:41.786432105Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 17 12:09:41.810051 containerd[1463]: time="2025-01-17T12:09:41.809979881Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 17 12:09:42.710884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1101685158.mount: Deactivated successfully. Jan 17 12:09:43.467093 containerd[1463]: time="2025-01-17T12:09:43.467030957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:43.468216 containerd[1463]: time="2025-01-17T12:09:43.468170035Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 17 12:09:43.469363 containerd[1463]: time="2025-01-17T12:09:43.469315069Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:43.471569 containerd[1463]: time="2025-01-17T12:09:43.471396370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:43.472120 containerd[1463]: time="2025-01-17T12:09:43.472077240Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.662048007s" Jan 17 12:09:43.472120 containerd[1463]: time="2025-01-17T12:09:43.472110659Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 17 12:09:43.495258 containerd[1463]: time="2025-01-17T12:09:43.495206621Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:09:44.046895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2542454215.mount: Deactivated successfully. 
Jan 17 12:09:44.798458 containerd[1463]: time="2025-01-17T12:09:44.798368130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:44.799248 containerd[1463]: time="2025-01-17T12:09:44.799187058Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 17 12:09:44.804332 containerd[1463]: time="2025-01-17T12:09:44.804269233Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:44.808206 containerd[1463]: time="2025-01-17T12:09:44.808143667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:44.809248 containerd[1463]: time="2025-01-17T12:09:44.809210325Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.313961346s" Jan 17 12:09:44.809321 containerd[1463]: time="2025-01-17T12:09:44.809246928Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 17 12:09:44.833547 containerd[1463]: time="2025-01-17T12:09:44.833500101Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 17 12:09:45.358198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1043092042.mount: Deactivated successfully. 
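
The "repo digest" fields in these pull records are content addresses: the sha256 of the image manifest. Verification is just re-hashing the fetched bytes and comparing; a sketch using the coredns digest from the log (the local file name is hypothetical, so the comparison here only illustrates the mechanism):

    package main

    import (
        "crypto/sha256"
        "fmt"
        "os"
    )

    func main() {
        // the coredns repo digest recorded in the journal
        const want = "sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"
        blob, err := os.ReadFile("manifest.json") // hypothetical fetched manifest
        if err != nil {
            panic(err)
        }
        got := fmt.Sprintf("sha256:%x", sha256.Sum256(blob))
        fmt.Println("match:", got == want, got)
    }
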
Jan 17 12:09:45.364607 containerd[1463]: time="2025-01-17T12:09:45.364561285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:45.365611 containerd[1463]: time="2025-01-17T12:09:45.365568956Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 17 12:09:45.366947 containerd[1463]: time="2025-01-17T12:09:45.366890636Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:45.371490 containerd[1463]: time="2025-01-17T12:09:45.371428018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:45.372522 containerd[1463]: time="2025-01-17T12:09:45.372488306Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 538.946542ms" Jan 17 12:09:45.372522 containerd[1463]: time="2025-01-17T12:09:45.372524073Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 17 12:09:45.394912 containerd[1463]: time="2025-01-17T12:09:45.394870375Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 17 12:09:46.589033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2170367227.mount: Deactivated successfully. Jan 17 12:09:48.762047 containerd[1463]: time="2025-01-17T12:09:48.761970168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:48.763088 containerd[1463]: time="2025-01-17T12:09:48.763007173Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 17 12:09:48.764411 containerd[1463]: time="2025-01-17T12:09:48.764337438Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:48.768611 containerd[1463]: time="2025-01-17T12:09:48.768559797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:48.770027 containerd[1463]: time="2025-01-17T12:09:48.769975807Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.375057609s" Jan 17 12:09:48.770027 containerd[1463]: time="2025-01-17T12:09:48.770023070Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 17 12:09:49.962942 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
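
This is the second "Scheduled restart job" for kubelet; the journal timestamps (failures at 12:09:29, :39, :50) suggest a fixed ~10 s restart delay, though the unit's actual RestartSec= is not shown here. The supervise-and-retry pattern, sketched with that inferred delay and a stand-in failing command:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        const restartSec = 10 * time.Second // inferred from the timestamps
        for counter := 1; counter <= 3; counter++ {
            err := exec.Command("false").Run() // stand-in for the failing service
            fmt.Printf("attempt %d exited: %v; scheduling restart, counter at %d\n",
                counter, err, counter)
            time.Sleep(restartSec)
        }
    }
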
Jan 17 12:09:49.972755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:09:50.149108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:09:50.153659 (kubelet)[2116]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:09:50.279640 kubelet[2116]: E0117 12:09:50.279472 2116 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:09:50.284091 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:09:50.284313 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:09:51.114844 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:09:51.124651 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:09:51.142515 systemd[1]: Reloading requested from client PID 2132 ('systemctl') (unit session-7.scope)... Jan 17 12:09:51.142531 systemd[1]: Reloading... Jan 17 12:09:51.225371 zram_generator::config[2171]: No configuration found. Jan 17 12:09:51.650984 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:09:51.757817 systemd[1]: Reloading finished in 614 ms. Jan 17 12:09:51.816015 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:09:51.816139 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:09:51.816701 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:09:51.820634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:09:51.987502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:09:51.993172 (kubelet)[2220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:09:52.043259 kubelet[2220]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:09:52.043259 kubelet[2220]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:09:52.043259 kubelet[2220]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
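
The deprecation warnings above and the earlier config.yaml failures point at the same file: flags like --container-runtime-endpoint are meant to live in the file passed via --config. Only the two header fields below are asserted; a real node needs far more, so this is a placeholder sketch of a hypothetical bootstrap step, not a working configuration:

    package main

    import "os"

    // Only kind and apiVersion are asserted here; everything else a real
    // kubelet needs is deliberately left out of this placeholder.
    const minimal = `kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    `

    func main() {
        if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(minimal), 0o644); err != nil {
            panic(err)
        }
    }
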
Jan 17 12:09:52.043769 kubelet[2220]: I0117 12:09:52.043330 2220 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:09:52.211286 kubelet[2220]: I0117 12:09:52.211239 2220 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 17 12:09:52.211286 kubelet[2220]: I0117 12:09:52.211271 2220 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:09:52.211565 kubelet[2220]: I0117 12:09:52.211541 2220 server.go:927] "Client rotation is on, will bootstrap in background" Jan 17 12:09:52.227060 kubelet[2220]: I0117 12:09:52.226980 2220 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:09:52.227620 kubelet[2220]: E0117 12:09:52.227512 2220 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.51:6443: connect: connection refused Jan 17 12:09:52.240607 kubelet[2220]: I0117 12:09:52.240456 2220 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:09:52.240766 kubelet[2220]: I0117 12:09:52.240721 2220 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:09:52.242454 kubelet[2220]: I0117 12:09:52.240757 2220 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:09:52.242454 kubelet[2220]: I0117 12:09:52.242076 2220 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:09:52.242454 kubelet[2220]: I0117 12:09:52.242088 2220 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:09:52.242454 kubelet[2220]: I0117 12:09:52.242245 2220 state_mem.go:36] "Initialized new in-memory state store" Jan 17 
12:09:52.243338 kubelet[2220]: I0117 12:09:52.243192 2220 kubelet.go:400] "Attempting to sync node with API server" Jan 17 12:09:52.243338 kubelet[2220]: I0117 12:09:52.243223 2220 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:09:52.243338 kubelet[2220]: I0117 12:09:52.243255 2220 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:09:52.243338 kubelet[2220]: I0117 12:09:52.243324 2220 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:09:52.247748 kubelet[2220]: W0117 12:09:52.247630 2220 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Jan 17 12:09:52.247748 kubelet[2220]: E0117 12:09:52.247706 2220 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Jan 17 12:09:52.247852 kubelet[2220]: W0117 12:09:52.247768 2220 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Jan 17 12:09:52.247852 kubelet[2220]: E0117 12:09:52.247834 2220 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Jan 17 12:09:52.248051 kubelet[2220]: I0117 12:09:52.248024 2220 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:09:52.249714 kubelet[2220]: I0117 12:09:52.249686 2220 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:09:52.249765 kubelet[2220]: W0117 12:09:52.249751 2220 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 12:09:52.250559 kubelet[2220]: I0117 12:09:52.250395 2220 server.go:1264] "Started kubelet" Jan 17 12:09:52.250845 kubelet[2220]: I0117 12:09:52.250785 2220 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:09:52.252142 kubelet[2220]: I0117 12:09:52.251227 2220 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:09:52.252142 kubelet[2220]: I0117 12:09:52.251578 2220 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:09:52.253525 kubelet[2220]: I0117 12:09:52.252899 2220 server.go:455] "Adding debug handlers to kubelet server" Jan 17 12:09:52.254539 kubelet[2220]: E0117 12:09:52.254506 2220 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:09:52.255731 kubelet[2220]: I0117 12:09:52.254811 2220 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:09:52.255731 kubelet[2220]: I0117 12:09:52.254903 2220 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:09:52.255731 kubelet[2220]: E0117 12:09:52.254913 2220 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:09:52.255731 kubelet[2220]: I0117 12:09:52.255040 2220 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 17 12:09:52.255731 kubelet[2220]: I0117 12:09:52.255108 2220 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:09:52.255731 kubelet[2220]: W0117 12:09:52.255483 2220 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Jan 17 12:09:52.255731 kubelet[2220]: E0117 12:09:52.255524 2220 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Jan 17 12:09:52.255731 kubelet[2220]: E0117 12:09:52.255699 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="200ms" Jan 17 12:09:52.257758 kubelet[2220]: I0117 12:09:52.257722 2220 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:09:52.257758 kubelet[2220]: I0117 12:09:52.257739 2220 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:09:52.257882 kubelet[2220]: I0117 12:09:52.257810 2220 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:09:52.258879 kubelet[2220]: E0117 12:09:52.258716 2220 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.51:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.51:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181b799c02ce120a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-17 12:09:52.250376714 +0000 UTC m=+0.252329509,LastTimestamp:2025-01-17 12:09:52.250376714 +0000 UTC m=+0.252329509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 12:09:52.271666 kubelet[2220]: I0117 12:09:52.271643 2220 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:09:52.271666 kubelet[2220]: I0117 12:09:52.271658 2220 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:09:52.271766 kubelet[2220]: I0117 12:09:52.271679 2220 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:09:52.273650 kubelet[2220]: I0117 
12:09:52.273613 2220 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:09:52.275494 kubelet[2220]: I0117 12:09:52.275425 2220 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:09:52.275593 kubelet[2220]: I0117 12:09:52.275577 2220 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:09:52.275628 kubelet[2220]: I0117 12:09:52.275605 2220 kubelet.go:2337] "Starting kubelet main sync loop" Jan 17 12:09:52.275666 kubelet[2220]: E0117 12:09:52.275653 2220 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:09:52.276619 kubelet[2220]: W0117 12:09:52.276529 2220 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Jan 17 12:09:52.276665 kubelet[2220]: E0117 12:09:52.276634 2220 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Jan 17 12:09:52.357705 kubelet[2220]: I0117 12:09:52.357668 2220 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:09:52.358233 kubelet[2220]: E0117 12:09:52.358178 2220 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Jan 17 12:09:52.376343 kubelet[2220]: E0117 12:09:52.376243 2220 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:09:52.457306 kubelet[2220]: E0117 12:09:52.457249 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="400ms" Jan 17 12:09:52.560694 kubelet[2220]: I0117 12:09:52.560563 2220 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:09:52.560978 kubelet[2220]: E0117 12:09:52.560944 2220 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Jan 17 12:09:52.577225 kubelet[2220]: E0117 12:09:52.577133 2220 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:09:52.666678 kubelet[2220]: I0117 12:09:52.666605 2220 policy_none.go:49] "None policy: Start" Jan 17 12:09:52.667704 kubelet[2220]: I0117 12:09:52.667678 2220 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:09:52.667800 kubelet[2220]: I0117 12:09:52.667709 2220 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:09:52.679226 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 12:09:52.692654 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 12:09:52.697095 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
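
Annotation: the repeated "Failed to ensure lease exists, will retry" records show the kubelet doubling its retry interval (200ms, then 400ms here; 800ms, 1.6s and 3.2s appear further down) while the API server at 10.0.0.51:6443 refuses connections. A minimal Go sketch of that doubling backoff, with a stand-in ensureLease call and an assumed cap (the kubelet's real limit is not visible in this log):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // ensureLease stands in for the GET/CREATE of the node Lease object;
    // while the apiserver is down it keeps failing with "connection refused".
    func ensureLease() error {
        return errors.New("dial tcp 10.0.0.51:6443: connect: connection refused")
    }

    func main() {
        interval := 200 * time.Millisecond
        maxInterval := 7 * time.Second // assumed cap, for the sketch only
        for attempt := 0; attempt < 5; attempt++ {
            if err := ensureLease(); err == nil {
                return // lease exists, stop retrying
            }
            fmt.Printf("Failed to ensure lease exists, will retry interval=%v\n", interval)
            time.Sleep(interval)
            interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s -> 3.2s, as in the log
            if interval > maxInterval {
                interval = maxInterval
            }
        }
    }
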
Jan 17 12:09:52.709038 kubelet[2220]: I0117 12:09:52.708853 2220 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:09:52.709485 kubelet[2220]: I0117 12:09:52.709203 2220 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:09:52.709485 kubelet[2220]: I0117 12:09:52.709410 2220 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:09:52.710936 kubelet[2220]: E0117 12:09:52.710902 2220 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 12:09:52.858758 kubelet[2220]: E0117 12:09:52.858565 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="800ms" Jan 17 12:09:52.962931 kubelet[2220]: I0117 12:09:52.962893 2220 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:09:52.963481 kubelet[2220]: E0117 12:09:52.963393 2220 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Jan 17 12:09:52.977524 kubelet[2220]: I0117 12:09:52.977463 2220 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 17 12:09:52.978610 kubelet[2220]: I0117 12:09:52.978579 2220 topology_manager.go:215] "Topology Admit Handler" podUID="ecfd06a7fa73befa11631a1b212755a1" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 17 12:09:52.979538 kubelet[2220]: I0117 12:09:52.979517 2220 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 17 12:09:52.989794 systemd[1]: Created slice kubepods-burstable-podecfd06a7fa73befa11631a1b212755a1.slice - libcontainer container kubepods-burstable-podecfd06a7fa73befa11631a1b212755a1.slice. Jan 17 12:09:53.001729 systemd[1]: Created slice kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice - libcontainer container kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice. Jan 17 12:09:53.005731 systemd[1]: Created slice kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice - libcontainer container kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice. 
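
Annotation: the "Created slice" records above show the systemd cgroup driver (CgroupDriver":"systemd" in the node config earlier) at work: each QoS class gets a kubepods-<qos>.slice, and each pod a kubepods-<qos>-pod<uid>.slice beneath it. A small illustrative helper reproducing that naming, derived from the log lines rather than from kubelet source; systemd unit escaping also turns any dashes in a pod UID into underscores, which becomes visible later with the cilium pod:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice builds the systemd slice name for a pod, mirroring the
    // "Created slice kubepods-burstable-pod...slice" records above.
    func podSlice(qosClass, podUID string) string {
        uid := strings.ReplaceAll(podUID, "-", "_") // systemd escaping of dashes
        if qosClass == "" {
            return fmt.Sprintf("kubepods-pod%s.slice", uid) // guaranteed pods
        }
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, uid)
    }

    func main() {
        // Prints kubepods-burstable-podecfd06a7fa73befa11631a1b212755a1.slice,
        // matching the kube-apiserver-localhost slice created above.
        fmt.Println(podSlice("burstable", "ecfd06a7fa73befa11631a1b212755a1"))
    }
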
Jan 17 12:09:53.059863 kubelet[2220]: I0117 12:09:53.059811 2220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ecfd06a7fa73befa11631a1b212755a1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ecfd06a7fa73befa11631a1b212755a1\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:09:53.059863 kubelet[2220]: I0117 12:09:53.059863 2220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:09:53.060318 kubelet[2220]: I0117 12:09:53.059892 2220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:09:53.060318 kubelet[2220]: I0117 12:09:53.059915 2220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:09:53.060318 kubelet[2220]: I0117 12:09:53.059980 2220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:09:53.060318 kubelet[2220]: I0117 12:09:53.060086 2220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 17 12:09:53.060318 kubelet[2220]: I0117 12:09:53.060133 2220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ecfd06a7fa73befa11631a1b212755a1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ecfd06a7fa73befa11631a1b212755a1\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:09:53.060438 kubelet[2220]: I0117 12:09:53.060156 2220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ecfd06a7fa73befa11631a1b212755a1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ecfd06a7fa73befa11631a1b212755a1\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:09:53.060438 kubelet[2220]: I0117 12:09:53.060179 2220 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 17 12:09:53.106438 kubelet[2220]: W0117 12:09:53.106363 2220 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Jan 17 12:09:53.106572 kubelet[2220]: E0117 12:09:53.106476 2220 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Jan 17 12:09:53.156256 kubelet[2220]: W0117 12:09:53.156070 2220 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Jan 17 12:09:53.156256 kubelet[2220]: E0117 12:09:53.156171 2220 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Jan 17 12:09:53.199080 kubelet[2220]: W0117 12:09:53.198961 2220 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Jan 17 12:09:53.199080 kubelet[2220]: E0117 12:09:53.199062 2220 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Jan 17 12:09:53.299996 kubelet[2220]: E0117 12:09:53.299933 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:53.300789 containerd[1463]: time="2025-01-17T12:09:53.300745268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ecfd06a7fa73befa11631a1b212755a1,Namespace:kube-system,Attempt:0,}" Jan 17 12:09:53.305056 kubelet[2220]: E0117 12:09:53.305020 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:53.305542 containerd[1463]: time="2025-01-17T12:09:53.305498034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}" Jan 17 12:09:53.308783 kubelet[2220]: E0117 12:09:53.308754 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:53.309137 containerd[1463]: time="2025-01-17T12:09:53.309108641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}" Jan 17 12:09:53.466073 kubelet[2220]: W0117 12:09:53.465973 2220 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Jan 17 12:09:53.466073 kubelet[2220]: E0117 12:09:53.466074 2220 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Jan 17 12:09:53.660235 kubelet[2220]: E0117 12:09:53.660157 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="1.6s" Jan 17 12:09:53.765133 kubelet[2220]: I0117 12:09:53.765016 2220 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:09:53.765429 kubelet[2220]: E0117 12:09:53.765382 2220 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Jan 17 12:09:54.293585 kubelet[2220]: E0117 12:09:54.293526 2220 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.51:6443: connect: connection refused Jan 17 12:09:54.689958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1913060340.mount: Deactivated successfully. Jan 17 12:09:54.698704 containerd[1463]: time="2025-01-17T12:09:54.698609959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:09:54.701342 containerd[1463]: time="2025-01-17T12:09:54.701248521Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:09:54.702684 containerd[1463]: time="2025-01-17T12:09:54.702626699Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:09:54.703736 containerd[1463]: time="2025-01-17T12:09:54.703701117Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:09:54.705475 containerd[1463]: time="2025-01-17T12:09:54.705372606Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:09:54.706916 containerd[1463]: time="2025-01-17T12:09:54.706799642Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:09:54.708385 containerd[1463]: time="2025-01-17T12:09:54.708330520Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 12:09:54.712407 containerd[1463]: time="2025-01-17T12:09:54.712243388Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:09:54.712899 containerd[1463]: time="2025-01-17T12:09:54.712805120Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.403635554s" Jan 17 12:09:54.713932 containerd[1463]: time="2025-01-17T12:09:54.713873033Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.40829479s" Jan 17 12:09:54.717804 containerd[1463]: time="2025-01-17T12:09:54.717683660Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.416835628s" Jan 17 12:09:55.045277 containerd[1463]: time="2025-01-17T12:09:55.045147978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:09:55.045682 containerd[1463]: time="2025-01-17T12:09:55.045492223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:09:55.045682 containerd[1463]: time="2025-01-17T12:09:55.045559330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:55.046086 containerd[1463]: time="2025-01-17T12:09:55.045953088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:55.047424 containerd[1463]: time="2025-01-17T12:09:55.047155789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:09:55.047424 containerd[1463]: time="2025-01-17T12:09:55.047224767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:09:55.047424 containerd[1463]: time="2025-01-17T12:09:55.047240261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:55.047639 containerd[1463]: time="2025-01-17T12:09:55.047575228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:55.048426 containerd[1463]: time="2025-01-17T12:09:55.047918262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:09:55.048426 containerd[1463]: time="2025-01-17T12:09:55.047985429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:09:55.048426 containerd[1463]: time="2025-01-17T12:09:55.048003295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:55.048426 containerd[1463]: time="2025-01-17T12:09:55.048101800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:09:55.097643 systemd[1]: Started cri-containerd-0977dda495c01b31507863ec5746bb81a92bdf80be1d25a25dff65a54f318179.scope - libcontainer container 0977dda495c01b31507863ec5746bb81a92bdf80be1d25a25dff65a54f318179. Jan 17 12:09:55.102599 systemd[1]: Started cri-containerd-a321c2954d4f37d85a51b8f796f839502e68446ec4cc14823bb7a53b0c51d9c4.scope - libcontainer container a321c2954d4f37d85a51b8f796f839502e68446ec4cc14823bb7a53b0c51d9c4. Jan 17 12:09:55.105737 systemd[1]: Started cri-containerd-b8ca3253e69870c502ae22b20e2653a6f75171d01b4b4d2bf02c0f45d36b7297.scope - libcontainer container b8ca3253e69870c502ae22b20e2653a6f75171d01b4b4d2bf02c0f45d36b7297. Jan 17 12:09:55.203835 containerd[1463]: time="2025-01-17T12:09:55.203761065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8ca3253e69870c502ae22b20e2653a6f75171d01b4b4d2bf02c0f45d36b7297\"" Jan 17 12:09:55.206718 containerd[1463]: time="2025-01-17T12:09:55.205627635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a321c2954d4f37d85a51b8f796f839502e68446ec4cc14823bb7a53b0c51d9c4\"" Jan 17 12:09:55.206799 kubelet[2220]: E0117 12:09:55.206212 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:55.207564 kubelet[2220]: E0117 12:09:55.207531 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:55.211391 containerd[1463]: time="2025-01-17T12:09:55.210938848Z" level=info msg="CreateContainer within sandbox \"b8ca3253e69870c502ae22b20e2653a6f75171d01b4b4d2bf02c0f45d36b7297\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:09:55.211643 containerd[1463]: time="2025-01-17T12:09:55.211197980Z" level=info msg="CreateContainer within sandbox \"a321c2954d4f37d85a51b8f796f839502e68446ec4cc14823bb7a53b0c51d9c4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:09:55.215879 containerd[1463]: time="2025-01-17T12:09:55.215836816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ecfd06a7fa73befa11631a1b212755a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"0977dda495c01b31507863ec5746bb81a92bdf80be1d25a25dff65a54f318179\"" Jan 17 12:09:55.216987 kubelet[2220]: E0117 12:09:55.216955 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:55.219439 containerd[1463]: time="2025-01-17T12:09:55.219392283Z" level=info msg="CreateContainer within sandbox \"0977dda495c01b31507863ec5746bb81a92bdf80be1d25a25dff65a54f318179\" for 
container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:09:55.245658 containerd[1463]: time="2025-01-17T12:09:55.245594281Z" level=info msg="CreateContainer within sandbox \"a321c2954d4f37d85a51b8f796f839502e68446ec4cc14823bb7a53b0c51d9c4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9086e87208bd2f44880b145e901a53c6bb41164777d721995585fefd11f69ea2\"" Jan 17 12:09:55.246495 containerd[1463]: time="2025-01-17T12:09:55.246430098Z" level=info msg="StartContainer for \"9086e87208bd2f44880b145e901a53c6bb41164777d721995585fefd11f69ea2\"" Jan 17 12:09:55.249259 containerd[1463]: time="2025-01-17T12:09:55.249183457Z" level=info msg="CreateContainer within sandbox \"b8ca3253e69870c502ae22b20e2653a6f75171d01b4b4d2bf02c0f45d36b7297\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2349c808e43b923e9b8829acfff9d1a68a37623b17620735daf6b06e4e9a5d12\"" Jan 17 12:09:55.249917 containerd[1463]: time="2025-01-17T12:09:55.249691874Z" level=info msg="StartContainer for \"2349c808e43b923e9b8829acfff9d1a68a37623b17620735daf6b06e4e9a5d12\"" Jan 17 12:09:55.251408 containerd[1463]: time="2025-01-17T12:09:55.251381131Z" level=info msg="CreateContainer within sandbox \"0977dda495c01b31507863ec5746bb81a92bdf80be1d25a25dff65a54f318179\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ba21fc8725c847cb1b0d26eb786c70ccd50ee2211893559af1665a5190bd32a2\"" Jan 17 12:09:55.251866 containerd[1463]: time="2025-01-17T12:09:55.251842636Z" level=info msg="StartContainer for \"ba21fc8725c847cb1b0d26eb786c70ccd50ee2211893559af1665a5190bd32a2\"" Jan 17 12:09:55.262278 kubelet[2220]: E0117 12:09:55.261542 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="3.2s" Jan 17 12:09:55.338750 systemd[1]: Started cri-containerd-9086e87208bd2f44880b145e901a53c6bb41164777d721995585fefd11f69ea2.scope - libcontainer container 9086e87208bd2f44880b145e901a53c6bb41164777d721995585fefd11f69ea2. Jan 17 12:09:55.350760 systemd[1]: Started cri-containerd-2349c808e43b923e9b8829acfff9d1a68a37623b17620735daf6b06e4e9a5d12.scope - libcontainer container 2349c808e43b923e9b8829acfff9d1a68a37623b17620735daf6b06e4e9a5d12. Jan 17 12:09:55.352526 systemd[1]: Started cri-containerd-ba21fc8725c847cb1b0d26eb786c70ccd50ee2211893559af1665a5190bd32a2.scope - libcontainer container ba21fc8725c847cb1b0d26eb786c70ccd50ee2211893559af1665a5190bd32a2. 
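
Annotation: the sandbox and container records above (and the "returns successfully" lines just below) correspond to three CRI calls per static pod: RunPodSandbox, CreateContainer, StartContainer, issued over the containerd socket. A compressed sketch using the published k8s.io/cri-api types; the metadata values are copied from the kube-apiserver records, while the socket path and everything else are assumptions:

    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // 1. RunPodSandbox: metadata as in the kube-apiserver-localhost record.
        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "kube-apiserver-localhost",
                Uid:       "ecfd06a7fa73befa11631a1b212755a1",
                Namespace: "kube-system",
                Attempt:   0,
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            log.Fatal(err)
        }

        // 2. CreateContainer within the returned sandbox id.
        ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId:  sb.PodSandboxId,
            Config:        &runtimeapi.ContainerConfig{Metadata: &runtimeapi.ContainerMetadata{Name: "kube-apiserver"}},
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            log.Fatal(err)
        }

        // 3. StartContainer; "StartContainer ... returns successfully" below.
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
            log.Fatal(err)
        }
    }
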
Jan 17 12:09:55.368339 kubelet[2220]: I0117 12:09:55.367600 2220 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:09:55.368339 kubelet[2220]: E0117 12:09:55.367980 2220 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Jan 17 12:09:55.423716 containerd[1463]: time="2025-01-17T12:09:55.423644429Z" level=info msg="StartContainer for \"2349c808e43b923e9b8829acfff9d1a68a37623b17620735daf6b06e4e9a5d12\" returns successfully" Jan 17 12:09:55.423837 containerd[1463]: time="2025-01-17T12:09:55.423755954Z" level=info msg="StartContainer for \"9086e87208bd2f44880b145e901a53c6bb41164777d721995585fefd11f69ea2\" returns successfully" Jan 17 12:09:55.440556 containerd[1463]: time="2025-01-17T12:09:55.438781172Z" level=info msg="StartContainer for \"ba21fc8725c847cb1b0d26eb786c70ccd50ee2211893559af1665a5190bd32a2\" returns successfully" Jan 17 12:09:56.292286 kubelet[2220]: E0117 12:09:56.291992 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:56.295605 kubelet[2220]: E0117 12:09:56.295584 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:56.299410 kubelet[2220]: E0117 12:09:56.299332 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:57.197911 kubelet[2220]: E0117 12:09:57.197855 2220 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 17 12:09:57.250644 kubelet[2220]: I0117 12:09:57.250597 2220 apiserver.go:52] "Watching apiserver" Jan 17 12:09:57.255890 kubelet[2220]: I0117 12:09:57.255847 2220 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 17 12:09:57.300618 kubelet[2220]: E0117 12:09:57.300588 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:57.300804 kubelet[2220]: E0117 12:09:57.300665 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:57.300940 kubelet[2220]: E0117 12:09:57.300902 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:57.549797 kubelet[2220]: E0117 12:09:57.549739 2220 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 17 12:09:57.989828 kubelet[2220]: E0117 12:09:57.989788 2220 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 17 12:09:58.302262 kubelet[2220]: E0117 12:09:58.302153 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:58.302633 kubelet[2220]: E0117 12:09:58.302440 2220 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:09:58.466569 kubelet[2220]: E0117 12:09:58.466528 2220 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 17 12:09:58.570758 kubelet[2220]: I0117 12:09:58.570617 2220 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:09:58.577091 kubelet[2220]: I0117 12:09:58.577063 2220 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 17 12:09:59.283750 systemd[1]: Reloading requested from client PID 2501 ('systemctl') (unit session-7.scope)... Jan 17 12:09:59.283766 systemd[1]: Reloading... Jan 17 12:09:59.363553 zram_generator::config[2544]: No configuration found. Jan 17 12:09:59.478727 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:09:59.572324 systemd[1]: Reloading finished in 288 ms. Jan 17 12:09:59.616948 kubelet[2220]: I0117 12:09:59.616845 2220 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:09:59.617012 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:09:59.635765 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:09:59.636054 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:09:59.650862 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:09:59.798753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:09:59.804154 (kubelet)[2585]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:09:59.851554 kubelet[2585]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:09:59.851554 kubelet[2585]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:09:59.851554 kubelet[2585]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
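
Annotation: the three deprecation warnings from the restarted kubelet all point at the same fix, moving the flags into the file passed with --config. A minimal sketch of the equivalent KubeletConfiguration rendered through the upstream Go types (field names from k8s.io/kubelet/config/v1beta1 as of the v1.30 line); the runtime endpoint value is an assumption, while the static pod path and volume plugin dir are the ones logged earlier:

    package main

    import (
        "fmt"
        "log"

        kubeletconfig "k8s.io/kubelet/config/v1beta1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        cfg := kubeletconfig.KubeletConfiguration{
            // "Adding static pod path" in the records above.
            StaticPodPath: "/etc/kubernetes/manifests",
            // Replaces the deprecated --container-runtime-endpoint flag (value assumed).
            ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
            // Replaces --volume-plugin-dir; path from the Flexvolume probe record.
            VolumePluginDir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
        }
        cfg.APIVersion = "kubelet.config.k8s.io/v1beta1"
        cfg.Kind = "KubeletConfiguration"

        out, err := yaml.Marshal(cfg)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(string(out))
    }
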
Jan 17 12:09:59.851554 kubelet[2585]: I0117 12:09:59.851520 2585 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:09:59.857250 kubelet[2585]: I0117 12:09:59.857210 2585 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 17 12:09:59.857250 kubelet[2585]: I0117 12:09:59.857241 2585 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:09:59.857554 kubelet[2585]: I0117 12:09:59.857515 2585 server.go:927] "Client rotation is on, will bootstrap in background" Jan 17 12:09:59.859065 kubelet[2585]: I0117 12:09:59.859033 2585 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 12:09:59.860433 kubelet[2585]: I0117 12:09:59.860393 2585 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:09:59.871150 kubelet[2585]: I0117 12:09:59.871106 2585 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:09:59.871425 kubelet[2585]: I0117 12:09:59.871356 2585 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:09:59.871681 kubelet[2585]: I0117 12:09:59.871415 2585 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:09:59.871764 kubelet[2585]: I0117 12:09:59.871700 2585 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:09:59.871764 kubelet[2585]: I0117 12:09:59.871713 2585 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:09:59.871812 kubelet[2585]: I0117 12:09:59.871770 2585 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:09:59.871886 kubelet[2585]: I0117 12:09:59.871869 2585 kubelet.go:400] "Attempting to sync node with API server" Jan 17 12:09:59.871886 kubelet[2585]: I0117 12:09:59.871885 2585 kubelet.go:301] "Adding static pod path" 
path="/etc/kubernetes/manifests" Jan 17 12:09:59.871929 kubelet[2585]: I0117 12:09:59.871907 2585 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:09:59.871929 kubelet[2585]: I0117 12:09:59.871927 2585 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:09:59.873639 kubelet[2585]: I0117 12:09:59.873592 2585 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:09:59.873823 kubelet[2585]: I0117 12:09:59.873800 2585 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:09:59.874219 kubelet[2585]: I0117 12:09:59.874175 2585 server.go:1264] "Started kubelet" Jan 17 12:09:59.875870 kubelet[2585]: I0117 12:09:59.875845 2585 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:09:59.879830 kubelet[2585]: I0117 12:09:59.879798 2585 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:09:59.879957 kubelet[2585]: I0117 12:09:59.879898 2585 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 17 12:09:59.880074 kubelet[2585]: I0117 12:09:59.880057 2585 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:09:59.881193 kubelet[2585]: I0117 12:09:59.881149 2585 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:09:59.882108 kubelet[2585]: I0117 12:09:59.882051 2585 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:09:59.882932 kubelet[2585]: I0117 12:09:59.882337 2585 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:09:59.883280 kubelet[2585]: I0117 12:09:59.883264 2585 server.go:455] "Adding debug handlers to kubelet server" Jan 17 12:09:59.885920 kubelet[2585]: I0117 12:09:59.885887 2585 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:09:59.886034 kubelet[2585]: I0117 12:09:59.886005 2585 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:09:59.887711 kubelet[2585]: E0117 12:09:59.887682 2585 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:09:59.891700 kubelet[2585]: I0117 12:09:59.891402 2585 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:09:59.894730 kubelet[2585]: I0117 12:09:59.893097 2585 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:09:59.894862 kubelet[2585]: I0117 12:09:59.894748 2585 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 12:09:59.894862 kubelet[2585]: I0117 12:09:59.894781 2585 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:09:59.894862 kubelet[2585]: I0117 12:09:59.894802 2585 kubelet.go:2337] "Starting kubelet main sync loop" Jan 17 12:09:59.894862 kubelet[2585]: E0117 12:09:59.894852 2585 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:09:59.930651 kubelet[2585]: I0117 12:09:59.930614 2585 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:09:59.930651 kubelet[2585]: I0117 12:09:59.930634 2585 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:09:59.930651 kubelet[2585]: I0117 12:09:59.930654 2585 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:09:59.930964 kubelet[2585]: I0117 12:09:59.930821 2585 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:09:59.930964 kubelet[2585]: I0117 12:09:59.930834 2585 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:09:59.930964 kubelet[2585]: I0117 12:09:59.930857 2585 policy_none.go:49] "None policy: Start" Jan 17 12:09:59.931687 kubelet[2585]: I0117 12:09:59.931667 2585 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:09:59.931746 kubelet[2585]: I0117 12:09:59.931705 2585 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:09:59.931844 kubelet[2585]: I0117 12:09:59.931826 2585 state_mem.go:75] "Updated machine memory state" Jan 17 12:09:59.931941 sudo[2616]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 17 12:09:59.932346 sudo[2616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 17 12:09:59.938124 kubelet[2585]: I0117 12:09:59.937973 2585 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:09:59.938350 kubelet[2585]: I0117 12:09:59.938314 2585 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:09:59.938614 kubelet[2585]: I0117 12:09:59.938468 2585 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:09:59.985132 kubelet[2585]: I0117 12:09:59.985093 2585 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:09:59.991854 kubelet[2585]: I0117 12:09:59.991811 2585 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 17 12:09:59.991970 kubelet[2585]: I0117 12:09:59.991913 2585 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 17 12:09:59.994958 kubelet[2585]: I0117 12:09:59.994928 2585 topology_manager.go:215] "Topology Admit Handler" podUID="ecfd06a7fa73befa11631a1b212755a1" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 17 12:09:59.995019 kubelet[2585]: I0117 12:09:59.995010 2585 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 17 12:09:59.995086 kubelet[2585]: I0117 12:09:59.995071 2585 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 17 12:10:00.181495 kubelet[2585]: I0117 12:10:00.181336 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:10:00.181495 kubelet[2585]: I0117 12:10:00.181382 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ecfd06a7fa73befa11631a1b212755a1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ecfd06a7fa73befa11631a1b212755a1\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:10:00.181495 kubelet[2585]: I0117 12:10:00.181405 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ecfd06a7fa73befa11631a1b212755a1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ecfd06a7fa73befa11631a1b212755a1\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:10:00.181495 kubelet[2585]: I0117 12:10:00.181419 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:10:00.181495 kubelet[2585]: I0117 12:10:00.181437 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:10:00.181765 kubelet[2585]: I0117 12:10:00.181468 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:10:00.181765 kubelet[2585]: I0117 12:10:00.181485 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:10:00.181765 kubelet[2585]: I0117 12:10:00.181500 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 17 12:10:00.181765 kubelet[2585]: I0117 12:10:00.181528 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ecfd06a7fa73befa11631a1b212755a1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ecfd06a7fa73befa11631a1b212755a1\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:10:00.307164 kubelet[2585]: E0117 12:10:00.307104 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:00.307978 kubelet[2585]: E0117 12:10:00.307941 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:00.308895 kubelet[2585]: E0117 12:10:00.308875 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:00.872471 kubelet[2585]: I0117 12:10:00.872412 2585 apiserver.go:52] "Watching apiserver" Jan 17 12:10:00.880128 kubelet[2585]: I0117 12:10:00.880102 2585 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 17 12:10:00.890523 sudo[2616]: pam_unix(sudo:session): session closed for user root Jan 17 12:10:00.909099 kubelet[2585]: E0117 12:10:00.908388 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:00.909099 kubelet[2585]: E0117 12:10:00.908723 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:01.221848 kubelet[2585]: E0117 12:10:01.220873 2585 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 17 12:10:01.221848 kubelet[2585]: E0117 12:10:01.221732 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:01.278353 kubelet[2585]: I0117 12:10:01.277520 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.277491771 podStartE2EDuration="1.277491771s" podCreationTimestamp="2025-01-17 12:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:10:01.277171871 +0000 UTC m=+1.469107411" watchObservedRunningTime="2025-01-17 12:10:01.277491771 +0000 UTC m=+1.469427301" Jan 17 12:10:01.312087 kubelet[2585]: I0117 12:10:01.311718 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.311696948 podStartE2EDuration="1.311696948s" podCreationTimestamp="2025-01-17 12:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:10:01.304015251 +0000 UTC m=+1.495950781" watchObservedRunningTime="2025-01-17 12:10:01.311696948 +0000 UTC m=+1.503632478" Jan 17 12:10:01.312087 kubelet[2585]: I0117 12:10:01.311831 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.3118268419999999 podStartE2EDuration="1.311826842s" podCreationTimestamp="2025-01-17 12:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:10:01.311643264 +0000 UTC m=+1.503578794" watchObservedRunningTime="2025-01-17 12:10:01.311826842 +0000 UTC 
m=+1.503762372" Jan 17 12:10:01.910272 kubelet[2585]: E0117 12:10:01.910232 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:02.747501 sudo[1642]: pam_unix(sudo:session): session closed for user root Jan 17 12:10:02.750235 sshd[1639]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:02.755045 systemd[1]: sshd@6-10.0.0.51:22-10.0.0.1:57948.service: Deactivated successfully. Jan 17 12:10:02.757531 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:10:02.757763 systemd[1]: session-7.scope: Consumed 4.769s CPU time, 192.1M memory peak, 0B memory swap peak. Jan 17 12:10:02.758575 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:10:02.759873 systemd-logind[1445]: Removed session 7. Jan 17 12:10:08.833386 kubelet[2585]: E0117 12:10:08.833341 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:08.921094 kubelet[2585]: E0117 12:10:08.921061 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:09.549696 kubelet[2585]: E0117 12:10:09.549621 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:09.894481 kubelet[2585]: E0117 12:10:09.894324 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:09.922898 kubelet[2585]: E0117 12:10:09.922855 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:09.923025 kubelet[2585]: E0117 12:10:09.922949 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:11.876663 update_engine[1450]: I20250117 12:10:11.876579 1450 update_attempter.cc:509] Updating boot flags... Jan 17 12:10:12.045489 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2669) Jan 17 12:10:12.419033 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2671) Jan 17 12:10:12.460240 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2671) Jan 17 12:10:13.516498 kubelet[2585]: I0117 12:10:13.516422 2585 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:10:13.517079 containerd[1463]: time="2025-01-17T12:10:13.516946428Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
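
Annotation: the last two records pair up: the kubelet pushes the node's newly assigned pod CIDR down to the runtime over CRI, and containerd replies that no CNI config template is set, so it waits for the CNI plugin (cilium, admitted just below) to drop its own config. On the CRI side this is a single UpdateRuntimeConfig call; a sketch with connection setup as in the earlier sketch, only the CIDR taken from the log:

    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        // Mirrors: "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
        _, err = rt.UpdateRuntimeConfig(context.Background(), &runtimeapi.UpdateRuntimeConfigRequest{
            RuntimeConfig: &runtimeapi.RuntimeConfig{
                NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
            },
        })
        if err != nil {
            log.Fatal(err)
        }
    }
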
Jan 17 12:10:13.517359 kubelet[2585]: I0117 12:10:13.517258 2585 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:10:14.495905 kubelet[2585]: I0117 12:10:14.495775 2585 topology_manager.go:215] "Topology Admit Handler" podUID="f84aafcd-e696-42b1-8d78-46c9ae96f81d" podNamespace="kube-system" podName="kube-proxy-j66r5" Jan 17 12:10:14.501869 kubelet[2585]: I0117 12:10:14.501832 2585 topology_manager.go:215] "Topology Admit Handler" podUID="f754e5db-1b09-4249-838a-1341e83f7508" podNamespace="kube-system" podName="cilium-cwltp" Jan 17 12:10:14.509659 systemd[1]: Created slice kubepods-besteffort-podf84aafcd_e696_42b1_8d78_46c9ae96f81d.slice - libcontainer container kubepods-besteffort-podf84aafcd_e696_42b1_8d78_46c9ae96f81d.slice. Jan 17 12:10:14.531110 systemd[1]: Created slice kubepods-burstable-podf754e5db_1b09_4249_838a_1341e83f7508.slice - libcontainer container kubepods-burstable-podf754e5db_1b09_4249_838a_1341e83f7508.slice. Jan 17 12:10:14.574286 kubelet[2585]: I0117 12:10:14.574232 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-host-proc-sys-net\") pod \"cilium-cwltp\" (UID: \"f754e5db-1b09-4249-838a-1341e83f7508\") " pod="kube-system/cilium-cwltp" Jan 17 12:10:14.574286 kubelet[2585]: I0117 12:10:14.574269 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f84aafcd-e696-42b1-8d78-46c9ae96f81d-kube-proxy\") pod \"kube-proxy-j66r5\" (UID: \"f84aafcd-e696-42b1-8d78-46c9ae96f81d\") " pod="kube-system/kube-proxy-j66r5" Jan 17 12:10:14.574286 kubelet[2585]: I0117 12:10:14.574289 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-bpf-maps\") pod \"cilium-cwltp\" (UID: \"f754e5db-1b09-4249-838a-1341e83f7508\") " pod="kube-system/cilium-cwltp" Jan 17 12:10:14.574710 kubelet[2585]: I0117 12:10:14.574311 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c26vl\" (UniqueName: \"kubernetes.io/projected/f754e5db-1b09-4249-838a-1341e83f7508-kube-api-access-c26vl\") pod \"cilium-cwltp\" (UID: \"f754e5db-1b09-4249-838a-1341e83f7508\") " pod="kube-system/cilium-cwltp" Jan 17 12:10:14.574710 kubelet[2585]: I0117 12:10:14.574425 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-cilium-run\") pod \"cilium-cwltp\" (UID: \"f754e5db-1b09-4249-838a-1341e83f7508\") " pod="kube-system/cilium-cwltp" Jan 17 12:10:14.574710 kubelet[2585]: I0117 12:10:14.574524 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-xtables-lock\") pod \"cilium-cwltp\" (UID: \"f754e5db-1b09-4249-838a-1341e83f7508\") " pod="kube-system/cilium-cwltp" Jan 17 12:10:14.574710 kubelet[2585]: I0117 12:10:14.574554 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-lib-modules\") pod \"cilium-cwltp\" (UID: 
\"f754e5db-1b09-4249-838a-1341e83f7508\") " pod="kube-system/cilium-cwltp" Jan 17 12:10:14.574710 kubelet[2585]: I0117 12:10:14.574578 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f754e5db-1b09-4249-838a-1341e83f7508-cilium-config-path\") pod \"cilium-cwltp\" (UID: \"f754e5db-1b09-4249-838a-1341e83f7508\") " pod="kube-system/cilium-cwltp" Jan 17 12:10:14.574710 kubelet[2585]: I0117 12:10:14.574604 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f84aafcd-e696-42b1-8d78-46c9ae96f81d-lib-modules\") pod \"kube-proxy-j66r5\" (UID: \"f84aafcd-e696-42b1-8d78-46c9ae96f81d\") " pod="kube-system/kube-proxy-j66r5" Jan 17 12:10:14.574867 kubelet[2585]: I0117 12:10:14.574658 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bxx8\" (UniqueName: \"kubernetes.io/projected/f84aafcd-e696-42b1-8d78-46c9ae96f81d-kube-api-access-9bxx8\") pod \"kube-proxy-j66r5\" (UID: \"f84aafcd-e696-42b1-8d78-46c9ae96f81d\") " pod="kube-system/kube-proxy-j66r5" Jan 17 12:10:14.574867 kubelet[2585]: I0117 12:10:14.574703 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-cni-path\") pod \"cilium-cwltp\" (UID: \"f754e5db-1b09-4249-838a-1341e83f7508\") " pod="kube-system/cilium-cwltp" Jan 17 12:10:14.574867 kubelet[2585]: I0117 12:10:14.574726 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-etc-cni-netd\") pod \"cilium-cwltp\" (UID: \"f754e5db-1b09-4249-838a-1341e83f7508\") " pod="kube-system/cilium-cwltp" Jan 17 12:10:14.574867 kubelet[2585]: I0117 12:10:14.574750 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-host-proc-sys-kernel\") pod \"cilium-cwltp\" (UID: \"f754e5db-1b09-4249-838a-1341e83f7508\") " pod="kube-system/cilium-cwltp" Jan 17 12:10:14.574867 kubelet[2585]: I0117 12:10:14.574774 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-hostproc\") pod \"cilium-cwltp\" (UID: \"f754e5db-1b09-4249-838a-1341e83f7508\") " pod="kube-system/cilium-cwltp" Jan 17 12:10:14.574867 kubelet[2585]: I0117 12:10:14.574808 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-cilium-cgroup\") pod \"cilium-cwltp\" (UID: \"f754e5db-1b09-4249-838a-1341e83f7508\") " pod="kube-system/cilium-cwltp" Jan 17 12:10:14.574994 kubelet[2585]: I0117 12:10:14.574829 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f84aafcd-e696-42b1-8d78-46c9ae96f81d-xtables-lock\") pod \"kube-proxy-j66r5\" (UID: \"f84aafcd-e696-42b1-8d78-46c9ae96f81d\") " pod="kube-system/kube-proxy-j66r5" Jan 17 12:10:14.574994 kubelet[2585]: I0117 12:10:14.574874 2585 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f754e5db-1b09-4249-838a-1341e83f7508-clustermesh-secrets\") pod \"cilium-cwltp\" (UID: \"f754e5db-1b09-4249-838a-1341e83f7508\") " pod="kube-system/cilium-cwltp" Jan 17 12:10:14.574994 kubelet[2585]: I0117 12:10:14.574898 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f754e5db-1b09-4249-838a-1341e83f7508-hubble-tls\") pod \"cilium-cwltp\" (UID: \"f754e5db-1b09-4249-838a-1341e83f7508\") " pod="kube-system/cilium-cwltp" Jan 17 12:10:14.600587 kubelet[2585]: I0117 12:10:14.600071 2585 topology_manager.go:215] "Topology Admit Handler" podUID="170045f2-26b9-4d89-bcf9-a166bac32790" podNamespace="kube-system" podName="cilium-operator-599987898-zlk99" Jan 17 12:10:14.607131 systemd[1]: Created slice kubepods-besteffort-pod170045f2_26b9_4d89_bcf9_a166bac32790.slice - libcontainer container kubepods-besteffort-pod170045f2_26b9_4d89_bcf9_a166bac32790.slice. Jan 17 12:10:14.675730 kubelet[2585]: I0117 12:10:14.675655 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2fjc\" (UniqueName: \"kubernetes.io/projected/170045f2-26b9-4d89-bcf9-a166bac32790-kube-api-access-h2fjc\") pod \"cilium-operator-599987898-zlk99\" (UID: \"170045f2-26b9-4d89-bcf9-a166bac32790\") " pod="kube-system/cilium-operator-599987898-zlk99" Jan 17 12:10:14.676030 kubelet[2585]: I0117 12:10:14.675947 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/170045f2-26b9-4d89-bcf9-a166bac32790-cilium-config-path\") pod \"cilium-operator-599987898-zlk99\" (UID: \"170045f2-26b9-4d89-bcf9-a166bac32790\") " pod="kube-system/cilium-operator-599987898-zlk99" Jan 17 12:10:14.829731 kubelet[2585]: E0117 12:10:14.829574 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:14.830463 containerd[1463]: time="2025-01-17T12:10:14.830402056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j66r5,Uid:f84aafcd-e696-42b1-8d78-46c9ae96f81d,Namespace:kube-system,Attempt:0,}" Jan 17 12:10:14.834297 kubelet[2585]: E0117 12:10:14.834264 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:14.834874 containerd[1463]: time="2025-01-17T12:10:14.834822155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cwltp,Uid:f754e5db-1b09-4249-838a-1341e83f7508,Namespace:kube-system,Attempt:0,}" Jan 17 12:10:14.906190 containerd[1463]: time="2025-01-17T12:10:14.906067597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:10:14.906190 containerd[1463]: time="2025-01-17T12:10:14.906149146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:10:14.906190 containerd[1463]: time="2025-01-17T12:10:14.906165031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:14.906576 containerd[1463]: time="2025-01-17T12:10:14.906279672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:14.951362 kubelet[2585]: E0117 12:10:14.950845 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:14.952320 containerd[1463]: time="2025-01-17T12:10:14.952282980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-zlk99,Uid:170045f2-26b9-4d89-bcf9-a166bac32790,Namespace:kube-system,Attempt:0,}" Jan 17 12:10:14.970995 containerd[1463]: time="2025-01-17T12:10:14.970880095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:10:14.970995 containerd[1463]: time="2025-01-17T12:10:14.970951467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:10:14.971159 containerd[1463]: time="2025-01-17T12:10:14.970982666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:14.971159 containerd[1463]: time="2025-01-17T12:10:14.971109576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:14.971900 systemd[1]: Started cri-containerd-b45303f7233397a1a7b5db32c4c0d8669765e0708a7d615d1e5c3c2bd79e81a4.scope - libcontainer container b45303f7233397a1a7b5db32c4c0d8669765e0708a7d615d1e5c3c2bd79e81a4. Jan 17 12:10:14.990116 containerd[1463]: time="2025-01-17T12:10:14.989996628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:10:14.990116 containerd[1463]: time="2025-01-17T12:10:14.990060338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:10:14.990116 containerd[1463]: time="2025-01-17T12:10:14.990080300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:14.990339 containerd[1463]: time="2025-01-17T12:10:14.990243106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:14.997679 systemd[1]: Started cri-containerd-cc78fb7a72221fd61a0a956b2101db99926524bb85ba80d003ed6826ab424f9a.scope - libcontainer container cc78fb7a72221fd61a0a956b2101db99926524bb85ba80d003ed6826ab424f9a. Jan 17 12:10:15.010049 systemd[1]: Started cri-containerd-8fc81df7f34266924e81b6446c40fc8a1199a21999ad7399da78b5a1470bdeb6.scope - libcontainer container 8fc81df7f34266924e81b6446c40fc8a1199a21999ad7399da78b5a1470bdeb6. 
Jan 17 12:10:15.023729 containerd[1463]: time="2025-01-17T12:10:15.023550758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cwltp,Uid:f754e5db-1b09-4249-838a-1341e83f7508,Namespace:kube-system,Attempt:0,} returns sandbox id \"b45303f7233397a1a7b5db32c4c0d8669765e0708a7d615d1e5c3c2bd79e81a4\"" Jan 17 12:10:15.028331 kubelet[2585]: E0117 12:10:15.028252 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:15.032886 containerd[1463]: time="2025-01-17T12:10:15.032841110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j66r5,Uid:f84aafcd-e696-42b1-8d78-46c9ae96f81d,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc78fb7a72221fd61a0a956b2101db99926524bb85ba80d003ed6826ab424f9a\"" Jan 17 12:10:15.033902 kubelet[2585]: E0117 12:10:15.033879 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:15.046562 containerd[1463]: time="2025-01-17T12:10:15.046333878Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 12:10:15.048511 containerd[1463]: time="2025-01-17T12:10:15.048307097Z" level=info msg="CreateContainer within sandbox \"cc78fb7a72221fd61a0a956b2101db99926524bb85ba80d003ed6826ab424f9a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:10:15.057424 containerd[1463]: time="2025-01-17T12:10:15.057382276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-zlk99,Uid:170045f2-26b9-4d89-bcf9-a166bac32790,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fc81df7f34266924e81b6446c40fc8a1199a21999ad7399da78b5a1470bdeb6\"" Jan 17 12:10:15.058336 kubelet[2585]: E0117 12:10:15.058306 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:15.276623 containerd[1463]: time="2025-01-17T12:10:15.276549263Z" level=info msg="CreateContainer within sandbox \"cc78fb7a72221fd61a0a956b2101db99926524bb85ba80d003ed6826ab424f9a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c6d1a89434918a6e563b3ccd417342c107789482b3bbc197243b1dfa87109975\"" Jan 17 12:10:15.280756 containerd[1463]: time="2025-01-17T12:10:15.280714980Z" level=info msg="StartContainer for \"c6d1a89434918a6e563b3ccd417342c107789482b3bbc197243b1dfa87109975\"" Jan 17 12:10:15.313691 systemd[1]: Started cri-containerd-c6d1a89434918a6e563b3ccd417342c107789482b3bbc197243b1dfa87109975.scope - libcontainer container c6d1a89434918a6e563b3ccd417342c107789482b3bbc197243b1dfa87109975. Jan 17 12:10:15.347460 containerd[1463]: time="2025-01-17T12:10:15.347403240Z" level=info msg="StartContainer for \"c6d1a89434918a6e563b3ccd417342c107789482b3bbc197243b1dfa87109975\" returns successfully" Jan 17 12:10:15.965058 kubelet[2585]: E0117 12:10:15.965008 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:24.272669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1860911166.mount: Deactivated successfully. 
Jan 17 12:10:27.091823 systemd[1]: Started sshd@7-10.0.0.51:22-10.0.0.1:47520.service - OpenSSH per-connection server daemon (10.0.0.1:47520). Jan 17 12:10:27.369780 containerd[1463]: time="2025-01-17T12:10:27.369662627Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:27.370909 containerd[1463]: time="2025-01-17T12:10:27.370850772Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735275" Jan 17 12:10:27.372019 containerd[1463]: time="2025-01-17T12:10:27.371961833Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:27.373543 containerd[1463]: time="2025-01-17T12:10:27.373504176Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.327126597s" Jan 17 12:10:27.373603 containerd[1463]: time="2025-01-17T12:10:27.373542553Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 17 12:10:27.374914 sshd[2987]: Accepted publickey for core from 10.0.0.1 port 47520 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:10:27.375492 containerd[1463]: time="2025-01-17T12:10:27.375466353Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 12:10:27.377143 sshd[2987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:27.381403 containerd[1463]: time="2025-01-17T12:10:27.381367368Z" level=info msg="CreateContainer within sandbox \"b45303f7233397a1a7b5db32c4c0d8669765e0708a7d615d1e5c3c2bd79e81a4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 12:10:27.383244 systemd-logind[1445]: New session 8 of user core. Jan 17 12:10:27.389632 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:10:27.397239 containerd[1463]: time="2025-01-17T12:10:27.397198675Z" level=info msg="CreateContainer within sandbox \"b45303f7233397a1a7b5db32c4c0d8669765e0708a7d615d1e5c3c2bd79e81a4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d9196eca4b3478b6b75dfa366ecbe5e460e31e6019c53c5e1f965918812de8e3\"" Jan 17 12:10:27.397787 containerd[1463]: time="2025-01-17T12:10:27.397733319Z" level=info msg="StartContainer for \"d9196eca4b3478b6b75dfa366ecbe5e460e31e6019c53c5e1f965918812de8e3\"" Jan 17 12:10:27.438832 systemd[1]: Started cri-containerd-d9196eca4b3478b6b75dfa366ecbe5e460e31e6019c53c5e1f965918812de8e3.scope - libcontainer container d9196eca4b3478b6b75dfa366ecbe5e460e31e6019c53c5e1f965918812de8e3. 
Jan 17 12:10:27.471565 containerd[1463]: time="2025-01-17T12:10:27.471461706Z" level=info msg="StartContainer for \"d9196eca4b3478b6b75dfa366ecbe5e460e31e6019c53c5e1f965918812de8e3\" returns successfully" Jan 17 12:10:27.485470 systemd[1]: cri-containerd-d9196eca4b3478b6b75dfa366ecbe5e460e31e6019c53c5e1f965918812de8e3.scope: Deactivated successfully. Jan 17 12:10:27.525251 sshd[2987]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:27.529791 systemd[1]: sshd@7-10.0.0.51:22-10.0.0.1:47520.service: Deactivated successfully. Jan 17 12:10:27.531997 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:10:27.532763 systemd-logind[1445]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:10:27.534093 systemd-logind[1445]: Removed session 8. Jan 17 12:10:27.924661 containerd[1463]: time="2025-01-17T12:10:27.924579247Z" level=info msg="shim disconnected" id=d9196eca4b3478b6b75dfa366ecbe5e460e31e6019c53c5e1f965918812de8e3 namespace=k8s.io Jan 17 12:10:27.924661 containerd[1463]: time="2025-01-17T12:10:27.924655039Z" level=warning msg="cleaning up after shim disconnected" id=d9196eca4b3478b6b75dfa366ecbe5e460e31e6019c53c5e1f965918812de8e3 namespace=k8s.io Jan 17 12:10:27.924661 containerd[1463]: time="2025-01-17T12:10:27.924666058Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:10:27.993008 kubelet[2585]: E0117 12:10:27.992971 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:27.995465 containerd[1463]: time="2025-01-17T12:10:27.995407900Z" level=info msg="CreateContainer within sandbox \"b45303f7233397a1a7b5db32c4c0d8669765e0708a7d615d1e5c3c2bd79e81a4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 12:10:28.013835 kubelet[2585]: I0117 12:10:28.013769 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j66r5" podStartSLOduration=14.013747597 podStartE2EDuration="14.013747597s" podCreationTimestamp="2025-01-17 12:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:10:16.044091886 +0000 UTC m=+16.236027426" watchObservedRunningTime="2025-01-17 12:10:28.013747597 +0000 UTC m=+28.205683127" Jan 17 12:10:28.014379 containerd[1463]: time="2025-01-17T12:10:28.014327985Z" level=info msg="CreateContainer within sandbox \"b45303f7233397a1a7b5db32c4c0d8669765e0708a7d615d1e5c3c2bd79e81a4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"076bcf762c9c23f1803c5b2e53771d39b9010313c9cab8ee6992fdb89e8234c0\"" Jan 17 12:10:28.015027 containerd[1463]: time="2025-01-17T12:10:28.014985017Z" level=info msg="StartContainer for \"076bcf762c9c23f1803c5b2e53771d39b9010313c9cab8ee6992fdb89e8234c0\"" Jan 17 12:10:28.042614 systemd[1]: Started cri-containerd-076bcf762c9c23f1803c5b2e53771d39b9010313c9cab8ee6992fdb89e8234c0.scope - libcontainer container 076bcf762c9c23f1803c5b2e53771d39b9010313c9cab8ee6992fdb89e8234c0. Jan 17 12:10:28.070268 containerd[1463]: time="2025-01-17T12:10:28.070220624Z" level=info msg="StartContainer for \"076bcf762c9c23f1803c5b2e53771d39b9010313c9cab8ee6992fdb89e8234c0\" returns successfully" Jan 17 12:10:28.082956 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:10:28.083291 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jan 17 12:10:28.083375 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:10:28.089805 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:10:28.090069 systemd[1]: cri-containerd-076bcf762c9c23f1803c5b2e53771d39b9010313c9cab8ee6992fdb89e8234c0.scope: Deactivated successfully. Jan 17 12:10:28.114060 containerd[1463]: time="2025-01-17T12:10:28.113992599Z" level=info msg="shim disconnected" id=076bcf762c9c23f1803c5b2e53771d39b9010313c9cab8ee6992fdb89e8234c0 namespace=k8s.io Jan 17 12:10:28.114060 containerd[1463]: time="2025-01-17T12:10:28.114052264Z" level=warning msg="cleaning up after shim disconnected" id=076bcf762c9c23f1803c5b2e53771d39b9010313c9cab8ee6992fdb89e8234c0 namespace=k8s.io Jan 17 12:10:28.114060 containerd[1463]: time="2025-01-17T12:10:28.114065217Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:10:28.115712 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:10:28.392347 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9196eca4b3478b6b75dfa366ecbe5e460e31e6019c53c5e1f965918812de8e3-rootfs.mount: Deactivated successfully. Jan 17 12:10:28.996394 kubelet[2585]: E0117 12:10:28.996361 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:28.998107 containerd[1463]: time="2025-01-17T12:10:28.998073039Z" level=info msg="CreateContainer within sandbox \"b45303f7233397a1a7b5db32c4c0d8669765e0708a7d615d1e5c3c2bd79e81a4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 12:10:29.078781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2027138979.mount: Deactivated successfully. Jan 17 12:10:29.089016 containerd[1463]: time="2025-01-17T12:10:29.088953201Z" level=info msg="CreateContainer within sandbox \"b45303f7233397a1a7b5db32c4c0d8669765e0708a7d615d1e5c3c2bd79e81a4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6732e995263dd3c823ace9aa8e2f584ab812ab0a1d799e896d6637703b240b43\"" Jan 17 12:10:29.089598 containerd[1463]: time="2025-01-17T12:10:29.089563507Z" level=info msg="StartContainer for \"6732e995263dd3c823ace9aa8e2f584ab812ab0a1d799e896d6637703b240b43\"" Jan 17 12:10:29.121671 systemd[1]: Started cri-containerd-6732e995263dd3c823ace9aa8e2f584ab812ab0a1d799e896d6637703b240b43.scope - libcontainer container 6732e995263dd3c823ace9aa8e2f584ab812ab0a1d799e896d6637703b240b43. Jan 17 12:10:29.152951 containerd[1463]: time="2025-01-17T12:10:29.152901026Z" level=info msg="StartContainer for \"6732e995263dd3c823ace9aa8e2f584ab812ab0a1d799e896d6637703b240b43\" returns successfully" Jan 17 12:10:29.153275 systemd[1]: cri-containerd-6732e995263dd3c823ace9aa8e2f584ab812ab0a1d799e896d6637703b240b43.scope: Deactivated successfully. 
Jan 17 12:10:29.181223 containerd[1463]: time="2025-01-17T12:10:29.181144090Z" level=info msg="shim disconnected" id=6732e995263dd3c823ace9aa8e2f584ab812ab0a1d799e896d6637703b240b43 namespace=k8s.io Jan 17 12:10:29.181223 containerd[1463]: time="2025-01-17T12:10:29.181217761Z" level=warning msg="cleaning up after shim disconnected" id=6732e995263dd3c823ace9aa8e2f584ab812ab0a1d799e896d6637703b240b43 namespace=k8s.io Jan 17 12:10:29.181223 containerd[1463]: time="2025-01-17T12:10:29.181226957Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:10:29.392536 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6732e995263dd3c823ace9aa8e2f584ab812ab0a1d799e896d6637703b240b43-rootfs.mount: Deactivated successfully. Jan 17 12:10:30.000085 kubelet[2585]: E0117 12:10:30.000053 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:30.003118 containerd[1463]: time="2025-01-17T12:10:30.002500728Z" level=info msg="CreateContainer within sandbox \"b45303f7233397a1a7b5db32c4c0d8669765e0708a7d615d1e5c3c2bd79e81a4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 12:10:30.113333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount417157182.mount: Deactivated successfully. Jan 17 12:10:30.227063 containerd[1463]: time="2025-01-17T12:10:30.226993904Z" level=info msg="CreateContainer within sandbox \"b45303f7233397a1a7b5db32c4c0d8669765e0708a7d615d1e5c3c2bd79e81a4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"65f413db69da38104888481bde7c4bd10dc4eb7dab97b3546c7451ee44a783b8\"" Jan 17 12:10:30.227605 containerd[1463]: time="2025-01-17T12:10:30.227582906Z" level=info msg="StartContainer for \"65f413db69da38104888481bde7c4bd10dc4eb7dab97b3546c7451ee44a783b8\"" Jan 17 12:10:30.256891 systemd[1]: Started cri-containerd-65f413db69da38104888481bde7c4bd10dc4eb7dab97b3546c7451ee44a783b8.scope - libcontainer container 65f413db69da38104888481bde7c4bd10dc4eb7dab97b3546c7451ee44a783b8. Jan 17 12:10:30.281984 systemd[1]: cri-containerd-65f413db69da38104888481bde7c4bd10dc4eb7dab97b3546c7451ee44a783b8.scope: Deactivated successfully. Jan 17 12:10:30.350408 containerd[1463]: time="2025-01-17T12:10:30.350315808Z" level=info msg="StartContainer for \"65f413db69da38104888481bde7c4bd10dc4eb7dab97b3546c7451ee44a783b8\" returns successfully" Jan 17 12:10:30.392215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65f413db69da38104888481bde7c4bd10dc4eb7dab97b3546c7451ee44a783b8-rootfs.mount: Deactivated successfully. 
Jan 17 12:10:30.506964 containerd[1463]: time="2025-01-17T12:10:30.506883279Z" level=info msg="shim disconnected" id=65f413db69da38104888481bde7c4bd10dc4eb7dab97b3546c7451ee44a783b8 namespace=k8s.io Jan 17 12:10:30.506964 containerd[1463]: time="2025-01-17T12:10:30.506941331Z" level=warning msg="cleaning up after shim disconnected" id=65f413db69da38104888481bde7c4bd10dc4eb7dab97b3546c7451ee44a783b8 namespace=k8s.io Jan 17 12:10:30.506964 containerd[1463]: time="2025-01-17T12:10:30.506954454Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:10:31.004415 kubelet[2585]: E0117 12:10:31.004363 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:31.006512 containerd[1463]: time="2025-01-17T12:10:31.006472322Z" level=info msg="CreateContainer within sandbox \"b45303f7233397a1a7b5db32c4c0d8669765e0708a7d615d1e5c3c2bd79e81a4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 12:10:31.058801 containerd[1463]: time="2025-01-17T12:10:31.058742618Z" level=info msg="CreateContainer within sandbox \"b45303f7233397a1a7b5db32c4c0d8669765e0708a7d615d1e5c3c2bd79e81a4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e3a208f494b059f7623c197728545069cddf3a91e724ba37d27f5929bf0177ad\"" Jan 17 12:10:31.059498 containerd[1463]: time="2025-01-17T12:10:31.059430887Z" level=info msg="StartContainer for \"e3a208f494b059f7623c197728545069cddf3a91e724ba37d27f5929bf0177ad\"" Jan 17 12:10:31.090612 systemd[1]: Started cri-containerd-e3a208f494b059f7623c197728545069cddf3a91e724ba37d27f5929bf0177ad.scope - libcontainer container e3a208f494b059f7623c197728545069cddf3a91e724ba37d27f5929bf0177ad. Jan 17 12:10:31.121503 containerd[1463]: time="2025-01-17T12:10:31.121318261Z" level=info msg="StartContainer for \"e3a208f494b059f7623c197728545069cddf3a91e724ba37d27f5929bf0177ad\" returns successfully" Jan 17 12:10:31.285896 kubelet[2585]: I0117 12:10:31.285531 2585 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:10:31.306288 kubelet[2585]: I0117 12:10:31.306231 2585 topology_manager.go:215] "Topology Admit Handler" podUID="90e7b9d6-2f64-4754-8ade-a6b4d19d86f4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wdbrt" Jan 17 12:10:31.307547 kubelet[2585]: I0117 12:10:31.307505 2585 topology_manager.go:215] "Topology Admit Handler" podUID="d21ed6b7-da72-48bc-9107-d512088bf54c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ckbzg" Jan 17 12:10:31.316792 systemd[1]: Created slice kubepods-burstable-pod90e7b9d6_2f64_4754_8ade_a6b4d19d86f4.slice - libcontainer container kubepods-burstable-pod90e7b9d6_2f64_4754_8ade_a6b4d19d86f4.slice. Jan 17 12:10:31.322900 systemd[1]: Created slice kubepods-burstable-podd21ed6b7_da72_48bc_9107_d512088bf54c.slice - libcontainer container kubepods-burstable-podd21ed6b7_da72_48bc_9107_d512088bf54c.slice. 
Jan 17 12:10:31.384624 kubelet[2585]: I0117 12:10:31.383534 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90e7b9d6-2f64-4754-8ade-a6b4d19d86f4-config-volume\") pod \"coredns-7db6d8ff4d-wdbrt\" (UID: \"90e7b9d6-2f64-4754-8ade-a6b4d19d86f4\") " pod="kube-system/coredns-7db6d8ff4d-wdbrt" Jan 17 12:10:31.384624 kubelet[2585]: I0117 12:10:31.383601 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb977\" (UniqueName: \"kubernetes.io/projected/90e7b9d6-2f64-4754-8ade-a6b4d19d86f4-kube-api-access-lb977\") pod \"coredns-7db6d8ff4d-wdbrt\" (UID: \"90e7b9d6-2f64-4754-8ade-a6b4d19d86f4\") " pod="kube-system/coredns-7db6d8ff4d-wdbrt" Jan 17 12:10:31.384624 kubelet[2585]: I0117 12:10:31.383620 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d21ed6b7-da72-48bc-9107-d512088bf54c-config-volume\") pod \"coredns-7db6d8ff4d-ckbzg\" (UID: \"d21ed6b7-da72-48bc-9107-d512088bf54c\") " pod="kube-system/coredns-7db6d8ff4d-ckbzg" Jan 17 12:10:31.384624 kubelet[2585]: I0117 12:10:31.383642 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxrtv\" (UniqueName: \"kubernetes.io/projected/d21ed6b7-da72-48bc-9107-d512088bf54c-kube-api-access-cxrtv\") pod \"coredns-7db6d8ff4d-ckbzg\" (UID: \"d21ed6b7-da72-48bc-9107-d512088bf54c\") " pod="kube-system/coredns-7db6d8ff4d-ckbzg" Jan 17 12:10:31.620353 kubelet[2585]: E0117 12:10:31.620052 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:31.620819 containerd[1463]: time="2025-01-17T12:10:31.620553851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wdbrt,Uid:90e7b9d6-2f64-4754-8ade-a6b4d19d86f4,Namespace:kube-system,Attempt:0,}" Jan 17 12:10:31.625877 kubelet[2585]: E0117 12:10:31.625658 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:31.626012 containerd[1463]: time="2025-01-17T12:10:31.625981058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ckbzg,Uid:d21ed6b7-da72-48bc-9107-d512088bf54c,Namespace:kube-system,Attempt:0,}" Jan 17 12:10:32.009234 kubelet[2585]: E0117 12:10:32.009206 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:32.536799 systemd[1]: Started sshd@8-10.0.0.51:22-10.0.0.1:32770.service - OpenSSH per-connection server daemon (10.0.0.1:32770). Jan 17 12:10:32.573299 sshd[3387]: Accepted publickey for core from 10.0.0.1 port 32770 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:10:32.574788 sshd[3387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:32.579026 systemd-logind[1445]: New session 9 of user core. Jan 17 12:10:32.589609 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 17 12:10:32.792730 sshd[3387]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:32.797323 systemd[1]: sshd@8-10.0.0.51:22-10.0.0.1:32770.service: Deactivated successfully. Jan 17 12:10:32.799460 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:10:32.800239 systemd-logind[1445]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:10:32.801375 systemd-logind[1445]: Removed session 9. Jan 17 12:10:33.010724 kubelet[2585]: E0117 12:10:33.010676 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:34.013234 kubelet[2585]: E0117 12:10:34.013198 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:34.178406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2388020774.mount: Deactivated successfully. Jan 17 12:10:35.014851 kubelet[2585]: E0117 12:10:35.014798 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:36.106069 containerd[1463]: time="2025-01-17T12:10:36.106005262Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:36.106720 containerd[1463]: time="2025-01-17T12:10:36.106660963Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907185" Jan 17 12:10:36.107749 containerd[1463]: time="2025-01-17T12:10:36.107717637Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:36.109000 containerd[1463]: time="2025-01-17T12:10:36.108966691Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 8.733393622s" Jan 17 12:10:36.109054 containerd[1463]: time="2025-01-17T12:10:36.108998968Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 17 12:10:36.111356 containerd[1463]: time="2025-01-17T12:10:36.111329819Z" level=info msg="CreateContainer within sandbox \"8fc81df7f34266924e81b6446c40fc8a1199a21999ad7399da78b5a1470bdeb6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 17 12:10:36.124019 containerd[1463]: time="2025-01-17T12:10:36.123974322Z" level=info msg="CreateContainer within sandbox \"8fc81df7f34266924e81b6446c40fc8a1199a21999ad7399da78b5a1470bdeb6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ac8fa7b9143c32d9641e215819dc16f5311c13702eaf6f642554d15d631b175b\"" Jan 17 12:10:36.124504 containerd[1463]: 
time="2025-01-17T12:10:36.124461085Z" level=info msg="StartContainer for \"ac8fa7b9143c32d9641e215819dc16f5311c13702eaf6f642554d15d631b175b\"" Jan 17 12:10:36.155585 systemd[1]: Started cri-containerd-ac8fa7b9143c32d9641e215819dc16f5311c13702eaf6f642554d15d631b175b.scope - libcontainer container ac8fa7b9143c32d9641e215819dc16f5311c13702eaf6f642554d15d631b175b. Jan 17 12:10:36.409551 containerd[1463]: time="2025-01-17T12:10:36.409391023Z" level=info msg="StartContainer for \"ac8fa7b9143c32d9641e215819dc16f5311c13702eaf6f642554d15d631b175b\" returns successfully" Jan 17 12:10:37.019609 kubelet[2585]: E0117 12:10:37.019580 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:37.028972 kubelet[2585]: I0117 12:10:37.028680 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-zlk99" podStartSLOduration=1.978017176 podStartE2EDuration="23.028658081s" podCreationTimestamp="2025-01-17 12:10:14 +0000 UTC" firstStartedPulling="2025-01-17 12:10:15.059095029 +0000 UTC m=+15.251030559" lastFinishedPulling="2025-01-17 12:10:36.109735934 +0000 UTC m=+36.301671464" observedRunningTime="2025-01-17 12:10:37.028632146 +0000 UTC m=+37.220567676" watchObservedRunningTime="2025-01-17 12:10:37.028658081 +0000 UTC m=+37.220593611" Jan 17 12:10:37.028972 kubelet[2585]: I0117 12:10:37.028959 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cwltp" podStartSLOduration=10.689723038 podStartE2EDuration="23.028954117s" podCreationTimestamp="2025-01-17 12:10:14 +0000 UTC" firstStartedPulling="2025-01-17 12:10:15.036059095 +0000 UTC m=+15.227994625" lastFinishedPulling="2025-01-17 12:10:27.375290174 +0000 UTC m=+27.567225704" observedRunningTime="2025-01-17 12:10:32.178343923 +0000 UTC m=+32.370279463" watchObservedRunningTime="2025-01-17 12:10:37.028954117 +0000 UTC m=+37.220889648" Jan 17 12:10:37.807605 systemd[1]: Started sshd@9-10.0.0.51:22-10.0.0.1:53892.service - OpenSSH per-connection server daemon (10.0.0.1:53892). Jan 17 12:10:37.843227 sshd[3455]: Accepted publickey for core from 10.0.0.1 port 53892 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:10:37.845269 sshd[3455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:37.850425 systemd-logind[1445]: New session 10 of user core. Jan 17 12:10:37.856595 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 12:10:37.967969 sshd[3455]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:37.971891 systemd[1]: sshd@9-10.0.0.51:22-10.0.0.1:53892.service: Deactivated successfully. Jan 17 12:10:37.973884 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:10:37.974646 systemd-logind[1445]: Session 10 logged out. Waiting for processes to exit. Jan 17 12:10:37.975589 systemd-logind[1445]: Removed session 10. 
Jan 17 12:10:38.020857 kubelet[2585]: E0117 12:10:38.020828 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:40.092386 systemd-networkd[1385]: cilium_host: Link UP Jan 17 12:10:40.093163 systemd-networkd[1385]: cilium_net: Link UP Jan 17 12:10:40.093406 systemd-networkd[1385]: cilium_net: Gained carrier Jan 17 12:10:40.093646 systemd-networkd[1385]: cilium_host: Gained carrier Jan 17 12:10:40.203528 systemd-networkd[1385]: cilium_vxlan: Link UP Jan 17 12:10:40.203538 systemd-networkd[1385]: cilium_vxlan: Gained carrier Jan 17 12:10:40.309587 systemd-networkd[1385]: cilium_net: Gained IPv6LL Jan 17 12:10:40.416490 kernel: NET: Registered PF_ALG protocol family Jan 17 12:10:41.106070 systemd-networkd[1385]: lxc_health: Link UP Jan 17 12:10:41.116650 systemd-networkd[1385]: lxc_health: Gained carrier Jan 17 12:10:41.119593 systemd-networkd[1385]: cilium_host: Gained IPv6LL Jan 17 12:10:41.694147 systemd-networkd[1385]: lxc2ded0daa90e8: Link UP Jan 17 12:10:41.709585 kernel: eth0: renamed from tmp81768 Jan 17 12:10:41.715867 systemd-networkd[1385]: lxc2ded0daa90e8: Gained carrier Jan 17 12:10:41.716401 systemd-networkd[1385]: lxc6c6fc2f38f8c: Link UP Jan 17 12:10:41.724473 kernel: eth0: renamed from tmp9a64c Jan 17 12:10:41.733428 systemd-networkd[1385]: lxc6c6fc2f38f8c: Gained carrier Jan 17 12:10:42.013628 systemd-networkd[1385]: cilium_vxlan: Gained IPv6LL Jan 17 12:10:42.837711 kubelet[2585]: E0117 12:10:42.837672 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:42.987095 systemd[1]: Started sshd@10-10.0.0.51:22-10.0.0.1:53900.service - OpenSSH per-connection server daemon (10.0.0.1:53900). Jan 17 12:10:43.028862 sshd[3844]: Accepted publickey for core from 10.0.0.1 port 53900 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:10:43.030684 kubelet[2585]: E0117 12:10:43.030652 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:43.031943 sshd[3844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:43.036455 systemd-logind[1445]: New session 11 of user core. Jan 17 12:10:43.044618 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:10:43.103599 systemd-networkd[1385]: lxc_health: Gained IPv6LL Jan 17 12:10:43.173917 sshd[3844]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:43.183636 systemd[1]: sshd@10-10.0.0.51:22-10.0.0.1:53900.service: Deactivated successfully. Jan 17 12:10:43.185622 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 12:10:43.187303 systemd-logind[1445]: Session 11 logged out. Waiting for processes to exit. Jan 17 12:10:43.192771 systemd[1]: Started sshd@11-10.0.0.51:22-10.0.0.1:53914.service - OpenSSH per-connection server daemon (10.0.0.1:53914). Jan 17 12:10:43.194075 systemd-logind[1445]: Removed session 11. Jan 17 12:10:43.225326 sshd[3859]: Accepted publickey for core from 10.0.0.1 port 53914 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:10:43.227053 sshd[3859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:43.231017 systemd-logind[1445]: New session 12 of user core. 
Jan 17 12:10:43.238667 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 12:10:43.293634 systemd-networkd[1385]: lxc2ded0daa90e8: Gained IPv6LL Jan 17 12:10:43.409506 sshd[3859]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:43.422345 systemd[1]: sshd@11-10.0.0.51:22-10.0.0.1:53914.service: Deactivated successfully. Jan 17 12:10:43.424989 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:10:43.428728 systemd-logind[1445]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:10:43.435236 systemd[1]: Started sshd@12-10.0.0.51:22-10.0.0.1:53920.service - OpenSSH per-connection server daemon (10.0.0.1:53920). Jan 17 12:10:43.437406 systemd-logind[1445]: Removed session 12. Jan 17 12:10:43.467894 sshd[3871]: Accepted publickey for core from 10.0.0.1 port 53920 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:10:43.469884 sshd[3871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:43.474798 systemd-logind[1445]: New session 13 of user core. Jan 17 12:10:43.480674 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 12:10:43.606969 sshd[3871]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:43.611680 systemd[1]: sshd@12-10.0.0.51:22-10.0.0.1:53920.service: Deactivated successfully. Jan 17 12:10:43.614716 systemd-networkd[1385]: lxc6c6fc2f38f8c: Gained IPv6LL Jan 17 12:10:43.615329 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:10:43.616064 systemd-logind[1445]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:10:43.617201 systemd-logind[1445]: Removed session 13. Jan 17 12:10:45.567628 containerd[1463]: time="2025-01-17T12:10:45.567365013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:10:45.567628 containerd[1463]: time="2025-01-17T12:10:45.567431211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:10:45.567628 containerd[1463]: time="2025-01-17T12:10:45.567472145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:45.567628 containerd[1463]: time="2025-01-17T12:10:45.567432193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:10:45.567628 containerd[1463]: time="2025-01-17T12:10:45.567570651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:10:45.567628 containerd[1463]: time="2025-01-17T12:10:45.567590858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:45.568288 containerd[1463]: time="2025-01-17T12:10:45.567632031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:45.568288 containerd[1463]: time="2025-01-17T12:10:45.567682622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:45.593638 systemd[1]: Started cri-containerd-817688fd844b92cd550e01a27b420d63f74a46ac8ed93b2f45eb63df92f360ea.scope - libcontainer container 817688fd844b92cd550e01a27b420d63f74a46ac8ed93b2f45eb63df92f360ea. Jan 17 12:10:45.599546 systemd[1]: Started cri-containerd-9a64c3c1a577e783126f3b9db0184de76faa4aad1c0fa56cd1f7f2178836791f.scope - libcontainer container 9a64c3c1a577e783126f3b9db0184de76faa4aad1c0fa56cd1f7f2178836791f. Jan 17 12:10:45.611141 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:10:45.613483 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:10:45.638819 containerd[1463]: time="2025-01-17T12:10:45.638668511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wdbrt,Uid:90e7b9d6-2f64-4754-8ade-a6b4d19d86f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"817688fd844b92cd550e01a27b420d63f74a46ac8ed93b2f45eb63df92f360ea\"" Jan 17 12:10:45.640296 kubelet[2585]: E0117 12:10:45.640272 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:45.644236 containerd[1463]: time="2025-01-17T12:10:45.642727195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ckbzg,Uid:d21ed6b7-da72-48bc-9107-d512088bf54c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a64c3c1a577e783126f3b9db0184de76faa4aad1c0fa56cd1f7f2178836791f\"" Jan 17 12:10:45.644236 containerd[1463]: time="2025-01-17T12:10:45.643271712Z" level=info msg="CreateContainer within sandbox \"817688fd844b92cd550e01a27b420d63f74a46ac8ed93b2f45eb63df92f360ea\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:10:45.644462 kubelet[2585]: E0117 12:10:45.643756 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:45.646566 containerd[1463]: time="2025-01-17T12:10:45.646387655Z" level=info msg="CreateContainer within sandbox \"9a64c3c1a577e783126f3b9db0184de76faa4aad1c0fa56cd1f7f2178836791f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:10:45.716525 containerd[1463]: time="2025-01-17T12:10:45.716417117Z" level=info msg="CreateContainer within sandbox \"9a64c3c1a577e783126f3b9db0184de76faa4aad1c0fa56cd1f7f2178836791f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"495227c5b15e4a0a1a679a13867566ab32885a3d4bad019f3ef63948481346ef\"" Jan 17 12:10:45.717274 containerd[1463]: time="2025-01-17T12:10:45.717043371Z" level=info msg="StartContainer for \"495227c5b15e4a0a1a679a13867566ab32885a3d4bad019f3ef63948481346ef\"" Jan 17 12:10:45.721873 containerd[1463]: time="2025-01-17T12:10:45.721810857Z" level=info msg="CreateContainer within sandbox \"817688fd844b92cd550e01a27b420d63f74a46ac8ed93b2f45eb63df92f360ea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dc65055ff6f9ec58849e16dce95b2caa3cd5f46f7c3ace6594193dbad50371d9\"" Jan 17 12:10:45.722639 containerd[1463]: time="2025-01-17T12:10:45.722518517Z" level=info msg="StartContainer for \"dc65055ff6f9ec58849e16dce95b2caa3cd5f46f7c3ace6594193dbad50371d9\"" Jan 17 12:10:45.745862 systemd[1]: Started 
cri-containerd-495227c5b15e4a0a1a679a13867566ab32885a3d4bad019f3ef63948481346ef.scope - libcontainer container 495227c5b15e4a0a1a679a13867566ab32885a3d4bad019f3ef63948481346ef. Jan 17 12:10:45.749285 systemd[1]: Started cri-containerd-dc65055ff6f9ec58849e16dce95b2caa3cd5f46f7c3ace6594193dbad50371d9.scope - libcontainer container dc65055ff6f9ec58849e16dce95b2caa3cd5f46f7c3ace6594193dbad50371d9. Jan 17 12:10:45.787487 containerd[1463]: time="2025-01-17T12:10:45.787409528Z" level=info msg="StartContainer for \"dc65055ff6f9ec58849e16dce95b2caa3cd5f46f7c3ace6594193dbad50371d9\" returns successfully" Jan 17 12:10:45.787626 containerd[1463]: time="2025-01-17T12:10:45.787467201Z" level=info msg="StartContainer for \"495227c5b15e4a0a1a679a13867566ab32885a3d4bad019f3ef63948481346ef\" returns successfully" Jan 17 12:10:46.037838 kubelet[2585]: E0117 12:10:46.037490 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:46.039456 kubelet[2585]: E0117 12:10:46.039417 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:46.048948 kubelet[2585]: I0117 12:10:46.048716 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-ckbzg" podStartSLOduration=32.04869809 podStartE2EDuration="32.04869809s" podCreationTimestamp="2025-01-17 12:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:10:46.04773 +0000 UTC m=+46.239665550" watchObservedRunningTime="2025-01-17 12:10:46.04869809 +0000 UTC m=+46.240633620" Jan 17 12:10:46.068814 kubelet[2585]: I0117 12:10:46.068737 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wdbrt" podStartSLOduration=32.068713102 podStartE2EDuration="32.068713102s" podCreationTimestamp="2025-01-17 12:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:10:46.057155125 +0000 UTC m=+46.249090665" watchObservedRunningTime="2025-01-17 12:10:46.068713102 +0000 UTC m=+46.260648632" Jan 17 12:10:46.573922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2221957222.mount: Deactivated successfully. 
Jan 17 12:10:47.041579 kubelet[2585]: E0117 12:10:47.041545 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:47.041579 kubelet[2585]: E0117 12:10:47.041556 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:48.043488 kubelet[2585]: E0117 12:10:48.043433 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:48.043922 kubelet[2585]: E0117 12:10:48.043512 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:10:48.625515 systemd[1]: Started sshd@13-10.0.0.51:22-10.0.0.1:34030.service - OpenSSH per-connection server daemon (10.0.0.1:34030). Jan 17 12:10:48.662041 sshd[4062]: Accepted publickey for core from 10.0.0.1 port 34030 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:10:48.663982 sshd[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:48.668492 systemd-logind[1445]: New session 14 of user core. Jan 17 12:10:48.676625 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 12:10:48.829887 sshd[4062]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:48.835515 systemd[1]: sshd@13-10.0.0.51:22-10.0.0.1:34030.service: Deactivated successfully. Jan 17 12:10:48.839010 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:10:48.839890 systemd-logind[1445]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:10:48.841029 systemd-logind[1445]: Removed session 14. Jan 17 12:10:53.843043 systemd[1]: Started sshd@14-10.0.0.51:22-10.0.0.1:34046.service - OpenSSH per-connection server daemon (10.0.0.1:34046). Jan 17 12:10:53.877312 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 34046 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:10:53.879719 sshd[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:53.885572 systemd-logind[1445]: New session 15 of user core. Jan 17 12:10:53.901641 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 12:10:54.014089 sshd[4080]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:54.018401 systemd[1]: sshd@14-10.0.0.51:22-10.0.0.1:34046.service: Deactivated successfully. Jan 17 12:10:54.020747 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 12:10:54.021525 systemd-logind[1445]: Session 15 logged out. Waiting for processes to exit. Jan 17 12:10:54.022441 systemd-logind[1445]: Removed session 15. Jan 17 12:10:59.027257 systemd[1]: Started sshd@15-10.0.0.51:22-10.0.0.1:59978.service - OpenSSH per-connection server daemon (10.0.0.1:59978). Jan 17 12:10:59.062174 sshd[4095]: Accepted publickey for core from 10.0.0.1 port 59978 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:10:59.063978 sshd[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:59.068575 systemd-logind[1445]: New session 16 of user core. Jan 17 12:10:59.079613 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 17 12:10:59.197953 sshd[4095]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:59.209381 systemd[1]: sshd@15-10.0.0.51:22-10.0.0.1:59978.service: Deactivated successfully. Jan 17 12:10:59.211338 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 12:10:59.212836 systemd-logind[1445]: Session 16 logged out. Waiting for processes to exit. Jan 17 12:10:59.214320 systemd[1]: Started sshd@16-10.0.0.51:22-10.0.0.1:59994.service - OpenSSH per-connection server daemon (10.0.0.1:59994). Jan 17 12:10:59.215022 systemd-logind[1445]: Removed session 16. Jan 17 12:10:59.246900 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 59994 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:10:59.248544 sshd[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:59.252686 systemd-logind[1445]: New session 17 of user core. Jan 17 12:10:59.260594 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 12:10:59.492680 sshd[4109]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:59.508258 systemd[1]: sshd@16-10.0.0.51:22-10.0.0.1:59994.service: Deactivated successfully. Jan 17 12:10:59.510480 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 12:10:59.512114 systemd-logind[1445]: Session 17 logged out. Waiting for processes to exit. Jan 17 12:10:59.518768 systemd[1]: Started sshd@17-10.0.0.51:22-10.0.0.1:60010.service - OpenSSH per-connection server daemon (10.0.0.1:60010). Jan 17 12:10:59.519788 systemd-logind[1445]: Removed session 17. Jan 17 12:10:59.554002 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 60010 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:10:59.555926 sshd[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:10:59.560333 systemd-logind[1445]: New session 18 of user core. Jan 17 12:10:59.570577 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 12:11:01.034814 sshd[4121]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:01.042457 systemd[1]: sshd@17-10.0.0.51:22-10.0.0.1:60010.service: Deactivated successfully. Jan 17 12:11:01.044711 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 12:11:01.046399 systemd-logind[1445]: Session 18 logged out. Waiting for processes to exit. Jan 17 12:11:01.056023 systemd[1]: Started sshd@18-10.0.0.51:22-10.0.0.1:60014.service - OpenSSH per-connection server daemon (10.0.0.1:60014). Jan 17 12:11:01.057733 systemd-logind[1445]: Removed session 18. Jan 17 12:11:01.087413 sshd[4158]: Accepted publickey for core from 10.0.0.1 port 60014 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:11:01.089089 sshd[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:01.092787 systemd-logind[1445]: New session 19 of user core. Jan 17 12:11:01.107576 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 12:11:01.376387 sshd[4158]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:01.387067 systemd[1]: sshd@18-10.0.0.51:22-10.0.0.1:60014.service: Deactivated successfully. Jan 17 12:11:01.390777 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 12:11:01.394423 systemd-logind[1445]: Session 19 logged out. Waiting for processes to exit. Jan 17 12:11:01.405777 systemd[1]: Started sshd@19-10.0.0.51:22-10.0.0.1:60022.service - OpenSSH per-connection server daemon (10.0.0.1:60022). 
Jan 17 12:11:01.407377 systemd-logind[1445]: Removed session 19. Jan 17 12:11:01.441805 sshd[4170]: Accepted publickey for core from 10.0.0.1 port 60022 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:11:01.443380 sshd[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:01.446952 systemd-logind[1445]: New session 20 of user core. Jan 17 12:11:01.456564 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 12:11:01.561440 sshd[4170]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:01.565560 systemd[1]: sshd@19-10.0.0.51:22-10.0.0.1:60022.service: Deactivated successfully. Jan 17 12:11:01.567731 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 12:11:01.568375 systemd-logind[1445]: Session 20 logged out. Waiting for processes to exit. Jan 17 12:11:01.569233 systemd-logind[1445]: Removed session 20. Jan 17 12:11:06.574044 systemd[1]: Started sshd@20-10.0.0.51:22-10.0.0.1:60034.service - OpenSSH per-connection server daemon (10.0.0.1:60034). Jan 17 12:11:06.610244 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 60034 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:11:06.612110 sshd[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:06.616420 systemd-logind[1445]: New session 21 of user core. Jan 17 12:11:06.626596 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 12:11:06.742203 sshd[4184]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:06.745970 systemd[1]: sshd@20-10.0.0.51:22-10.0.0.1:60034.service: Deactivated successfully. Jan 17 12:11:06.748019 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 12:11:06.748772 systemd-logind[1445]: Session 21 logged out. Waiting for processes to exit. Jan 17 12:11:06.749709 systemd-logind[1445]: Removed session 21. Jan 17 12:11:11.756053 systemd[1]: Started sshd@21-10.0.0.51:22-10.0.0.1:53366.service - OpenSSH per-connection server daemon (10.0.0.1:53366). Jan 17 12:11:11.792193 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 53366 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:11:11.793949 sshd[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:11.798700 systemd-logind[1445]: New session 22 of user core. Jan 17 12:11:11.806636 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 12:11:11.921634 sshd[4201]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:11.926032 systemd[1]: sshd@21-10.0.0.51:22-10.0.0.1:53366.service: Deactivated successfully. Jan 17 12:11:11.928943 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 12:11:11.929719 systemd-logind[1445]: Session 22 logged out. Waiting for processes to exit. Jan 17 12:11:11.930864 systemd-logind[1445]: Removed session 22. Jan 17 12:11:16.937485 systemd[1]: Started sshd@22-10.0.0.51:22-10.0.0.1:53376.service - OpenSSH per-connection server daemon (10.0.0.1:53376). Jan 17 12:11:16.970013 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 53376 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:11:16.971665 sshd[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:16.975635 systemd-logind[1445]: New session 23 of user core. Jan 17 12:11:16.984601 systemd[1]: Started session-23.scope - Session 23 of User core. 
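Each SSH connection above follows the same fixed lifecycle: sshd accepts the public key, pam_unix opens the session, systemd-logind allocates "New session N", systemd starts session-N.scope, and teardown reverses the sequence ending in "Removed session N". A small sketch that pairs the open/remove events to measure session lifetimes; the regex and sample lines are illustrative, modeled on the journal format seen here.

    import re
    from datetime import datetime

    # Pair "New session N" with "Removed session N" to compute session lifetimes.
    # Timestamp format matches the journal lines above; sample data is illustrative.
    EVENT = re.compile(r"^(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: (New|Removed) session (\d+)")

    def session_durations(lines):
        opened = {}
        for line in lines:
            m = EVENT.match(line)
            if not m:
                continue
            ts = datetime.strptime("2025 " + m.group(1), "%Y %b %d %H:%M:%S.%f")
            if m.group(2) == "New":
                opened[m.group(3)] = ts
            elif m.group(3) in opened:
                yield m.group(3), (ts - opened.pop(m.group(3))).total_seconds()

    log = [
        "Jan 17 12:10:48.668492 systemd-logind[1445]: New session 14 of user core.",
        "Jan 17 12:10:48.841029 systemd-logind[1445]: Removed session 14.",
    ]
    for sid, secs in session_durations(log):
        print(f"session {sid}: {secs:.3f}s")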
Jan 17 12:11:17.088851 sshd[4217]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:17.093116 systemd[1]: sshd@22-10.0.0.51:22-10.0.0.1:53376.service: Deactivated successfully. Jan 17 12:11:17.095785 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 12:11:17.096462 systemd-logind[1445]: Session 23 logged out. Waiting for processes to exit. Jan 17 12:11:17.097367 systemd-logind[1445]: Removed session 23. Jan 17 12:11:18.895999 kubelet[2585]: E0117 12:11:18.895949 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:11:22.101017 systemd[1]: Started sshd@23-10.0.0.51:22-10.0.0.1:40642.service - OpenSSH per-connection server daemon (10.0.0.1:40642). Jan 17 12:11:22.133465 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 40642 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:11:22.135019 sshd[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:22.139461 systemd-logind[1445]: New session 24 of user core. Jan 17 12:11:22.146601 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 12:11:22.247251 sshd[4232]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:22.263508 systemd[1]: sshd@23-10.0.0.51:22-10.0.0.1:40642.service: Deactivated successfully. Jan 17 12:11:22.265319 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 12:11:22.266998 systemd-logind[1445]: Session 24 logged out. Waiting for processes to exit. Jan 17 12:11:22.273832 systemd[1]: Started sshd@24-10.0.0.51:22-10.0.0.1:40654.service - OpenSSH per-connection server daemon (10.0.0.1:40654). Jan 17 12:11:22.274903 systemd-logind[1445]: Removed session 24. Jan 17 12:11:22.302799 sshd[4247]: Accepted publickey for core from 10.0.0.1 port 40654 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:11:22.304518 sshd[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:22.308693 systemd-logind[1445]: New session 25 of user core. Jan 17 12:11:22.322610 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 12:11:23.857734 containerd[1463]: time="2025-01-17T12:11:23.857598078Z" level=info msg="StopContainer for \"ac8fa7b9143c32d9641e215819dc16f5311c13702eaf6f642554d15d631b175b\" with timeout 30 (s)" Jan 17 12:11:23.858883 containerd[1463]: time="2025-01-17T12:11:23.858103890Z" level=info msg="Stop container \"ac8fa7b9143c32d9641e215819dc16f5311c13702eaf6f642554d15d631b175b\" with signal terminated" Jan 17 12:11:23.874062 systemd[1]: cri-containerd-ac8fa7b9143c32d9641e215819dc16f5311c13702eaf6f642554d15d631b175b.scope: Deactivated successfully. Jan 17 12:11:23.891100 containerd[1463]: time="2025-01-17T12:11:23.891032896Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:11:23.898212 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac8fa7b9143c32d9641e215819dc16f5311c13702eaf6f642554d15d631b175b-rootfs.mount: Deactivated successfully. 
Jan 17 12:11:23.902406 containerd[1463]: time="2025-01-17T12:11:23.902339631Z" level=info msg="StopContainer for \"e3a208f494b059f7623c197728545069cddf3a91e724ba37d27f5929bf0177ad\" with timeout 2 (s)" Jan 17 12:11:23.902843 containerd[1463]: time="2025-01-17T12:11:23.902789019Z" level=info msg="Stop container \"e3a208f494b059f7623c197728545069cddf3a91e724ba37d27f5929bf0177ad\" with signal terminated" Jan 17 12:11:23.911056 systemd-networkd[1385]: lxc_health: Link DOWN Jan 17 12:11:23.911065 systemd-networkd[1385]: lxc_health: Lost carrier Jan 17 12:11:23.913938 containerd[1463]: time="2025-01-17T12:11:23.911536169Z" level=info msg="shim disconnected" id=ac8fa7b9143c32d9641e215819dc16f5311c13702eaf6f642554d15d631b175b namespace=k8s.io Jan 17 12:11:23.913938 containerd[1463]: time="2025-01-17T12:11:23.911619011Z" level=warning msg="cleaning up after shim disconnected" id=ac8fa7b9143c32d9641e215819dc16f5311c13702eaf6f642554d15d631b175b namespace=k8s.io Jan 17 12:11:23.913938 containerd[1463]: time="2025-01-17T12:11:23.911632697Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:11:23.933501 systemd[1]: cri-containerd-e3a208f494b059f7623c197728545069cddf3a91e724ba37d27f5929bf0177ad.scope: Deactivated successfully. Jan 17 12:11:23.933970 systemd[1]: cri-containerd-e3a208f494b059f7623c197728545069cddf3a91e724ba37d27f5929bf0177ad.scope: Consumed 7.504s CPU time. Jan 17 12:11:23.941555 containerd[1463]: time="2025-01-17T12:11:23.941521645Z" level=info msg="StopContainer for \"ac8fa7b9143c32d9641e215819dc16f5311c13702eaf6f642554d15d631b175b\" returns successfully" Jan 17 12:11:23.945632 containerd[1463]: time="2025-01-17T12:11:23.945586802Z" level=info msg="StopPodSandbox for \"8fc81df7f34266924e81b6446c40fc8a1199a21999ad7399da78b5a1470bdeb6\"" Jan 17 12:11:23.945632 containerd[1463]: time="2025-01-17T12:11:23.945635681Z" level=info msg="Container to stop \"ac8fa7b9143c32d9641e215819dc16f5311c13702eaf6f642554d15d631b175b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:11:23.947663 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8fc81df7f34266924e81b6446c40fc8a1199a21999ad7399da78b5a1470bdeb6-shm.mount: Deactivated successfully. Jan 17 12:11:23.954618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3a208f494b059f7623c197728545069cddf3a91e724ba37d27f5929bf0177ad-rootfs.mount: Deactivated successfully. Jan 17 12:11:23.955303 systemd[1]: cri-containerd-8fc81df7f34266924e81b6446c40fc8a1199a21999ad7399da78b5a1470bdeb6.scope: Deactivated successfully. Jan 17 12:11:23.968021 containerd[1463]: time="2025-01-17T12:11:23.967953125Z" level=info msg="shim disconnected" id=e3a208f494b059f7623c197728545069cddf3a91e724ba37d27f5929bf0177ad namespace=k8s.io Jan 17 12:11:23.968021 containerd[1463]: time="2025-01-17T12:11:23.968007104Z" level=warning msg="cleaning up after shim disconnected" id=e3a208f494b059f7623c197728545069cddf3a91e724ba37d27f5929bf0177ad namespace=k8s.io Jan 17 12:11:23.968021 containerd[1463]: time="2025-01-17T12:11:23.968015320Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:11:23.978417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fc81df7f34266924e81b6446c40fc8a1199a21999ad7399da78b5a1470bdeb6-rootfs.mount: Deactivated successfully. 
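The "StopContainer ... with timeout 30 (s)" and "Stop container ... with signal terminated" lines reflect the CRI stop contract: send the stop signal (SIGTERM by default), wait up to the grace period, then force-kill whatever remains. The following is a process-level sketch of that same terminate-then-kill escalation, not containerd's internals:

    import subprocess

    # Terminate-then-kill, the escalation CRI's StopContainer performs:
    # SIGTERM, wait out the grace period, then SIGKILL if the process lingers.
    def stop_with_timeout(proc: subprocess.Popen, grace_seconds: float) -> int:
        proc.terminate()                     # SIGTERM ("signal terminated" above)
        try:
            return proc.wait(timeout=grace_seconds)
        except subprocess.TimeoutExpired:
            proc.kill()                      # SIGKILL after the grace period
            return proc.wait()

    child = subprocess.Popen(["sleep", "300"])
    print("exit status:", stop_with_timeout(child, grace_seconds=2.0))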
Jan 17 12:11:23.980025 containerd[1463]: time="2025-01-17T12:11:23.979952385Z" level=info msg="shim disconnected" id=8fc81df7f34266924e81b6446c40fc8a1199a21999ad7399da78b5a1470bdeb6 namespace=k8s.io Jan 17 12:11:23.980025 containerd[1463]: time="2025-01-17T12:11:23.980012736Z" level=warning msg="cleaning up after shim disconnected" id=8fc81df7f34266924e81b6446c40fc8a1199a21999ad7399da78b5a1470bdeb6 namespace=k8s.io Jan 17 12:11:23.980025 containerd[1463]: time="2025-01-17T12:11:23.980020971Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:11:23.982037 containerd[1463]: time="2025-01-17T12:11:23.981942821Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:11:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 12:11:23.986333 containerd[1463]: time="2025-01-17T12:11:23.986282894Z" level=info msg="StopContainer for \"e3a208f494b059f7623c197728545069cddf3a91e724ba37d27f5929bf0177ad\" returns successfully" Jan 17 12:11:23.986934 containerd[1463]: time="2025-01-17T12:11:23.986838247Z" level=info msg="StopPodSandbox for \"b45303f7233397a1a7b5db32c4c0d8669765e0708a7d615d1e5c3c2bd79e81a4\"" Jan 17 12:11:23.986934 containerd[1463]: time="2025-01-17T12:11:23.986874113Z" level=info msg="Container to stop \"d9196eca4b3478b6b75dfa366ecbe5e460e31e6019c53c5e1f965918812de8e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:11:23.986934 containerd[1463]: time="2025-01-17T12:11:23.986884963Z" level=info msg="Container to stop \"65f413db69da38104888481bde7c4bd10dc4eb7dab97b3546c7451ee44a783b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:11:23.986934 containerd[1463]: time="2025-01-17T12:11:23.986894931Z" level=info msg="Container to stop \"e3a208f494b059f7623c197728545069cddf3a91e724ba37d27f5929bf0177ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:11:23.986934 containerd[1463]: time="2025-01-17T12:11:23.986904759Z" level=info msg="Container to stop \"076bcf762c9c23f1803c5b2e53771d39b9010313c9cab8ee6992fdb89e8234c0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:11:23.986934 containerd[1463]: time="2025-01-17T12:11:23.986913926Z" level=info msg="Container to stop \"6732e995263dd3c823ace9aa8e2f584ab812ab0a1d799e896d6637703b240b43\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:11:23.993999 systemd[1]: cri-containerd-b45303f7233397a1a7b5db32c4c0d8669765e0708a7d615d1e5c3c2bd79e81a4.scope: Deactivated successfully. 
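Before tearing down the sandbox, containerd emits one "Container to stop" line per member container; the "must be in running or unknown state" wording is the precondition for signalling, so containers already in CONTAINER_EXITED are skipped rather than re-signalled. A tiny guard sketch of that idempotent check (state names follow CRI; the function and sample map are illustrative):

    # Idempotent pre-stop check, mirroring the "Container to stop ... must be in
    # running or unknown state" lines: exited containers are skipped, not re-signalled.
    STOPPABLE = {"CONTAINER_RUNNING", "CONTAINER_UNKNOWN"}

    def needs_stop(state: str) -> bool:
        return state in STOPPABLE

    for cid, state in {"e3a208f494b0": "CONTAINER_EXITED", "d9196eca4b34": "CONTAINER_RUNNING"}.items():
        print(cid, "stop" if needs_stop(state) else "already exited")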
Jan 17 12:11:24.007425 containerd[1463]: time="2025-01-17T12:11:24.007364004Z" level=info msg="TearDown network for sandbox \"8fc81df7f34266924e81b6446c40fc8a1199a21999ad7399da78b5a1470bdeb6\" successfully" Jan 17 12:11:24.007425 containerd[1463]: time="2025-01-17T12:11:24.007425467Z" level=info msg="StopPodSandbox for \"8fc81df7f34266924e81b6446c40fc8a1199a21999ad7399da78b5a1470bdeb6\" returns successfully" Jan 17 12:11:24.027094 containerd[1463]: time="2025-01-17T12:11:24.026842721Z" level=info msg="shim disconnected" id=b45303f7233397a1a7b5db32c4c0d8669765e0708a7d615d1e5c3c2bd79e81a4 namespace=k8s.io Jan 17 12:11:24.027094 containerd[1463]: time="2025-01-17T12:11:24.026906869Z" level=warning msg="cleaning up after shim disconnected" id=b45303f7233397a1a7b5db32c4c0d8669765e0708a7d615d1e5c3c2bd79e81a4 namespace=k8s.io Jan 17 12:11:24.027094 containerd[1463]: time="2025-01-17T12:11:24.026914974Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:11:24.045303 containerd[1463]: time="2025-01-17T12:11:24.045231390Z" level=info msg="TearDown network for sandbox \"b45303f7233397a1a7b5db32c4c0d8669765e0708a7d615d1e5c3c2bd79e81a4\" successfully" Jan 17 12:11:24.045303 containerd[1463]: time="2025-01-17T12:11:24.045286171Z" level=info msg="StopPodSandbox for \"b45303f7233397a1a7b5db32c4c0d8669765e0708a7d615d1e5c3c2bd79e81a4\" returns successfully" Jan 17 12:11:24.119134 kubelet[2585]: I0117 12:11:24.118970 2585 scope.go:117] "RemoveContainer" containerID="ac8fa7b9143c32d9641e215819dc16f5311c13702eaf6f642554d15d631b175b" Jan 17 12:11:24.121512 containerd[1463]: time="2025-01-17T12:11:24.121287134Z" level=info msg="RemoveContainer for \"ac8fa7b9143c32d9641e215819dc16f5311c13702eaf6f642554d15d631b175b\"" Jan 17 12:11:24.126265 containerd[1463]: time="2025-01-17T12:11:24.126226374Z" level=info msg="RemoveContainer for \"ac8fa7b9143c32d9641e215819dc16f5311c13702eaf6f642554d15d631b175b\" returns successfully" Jan 17 12:11:24.126635 kubelet[2585]: I0117 12:11:24.126603 2585 scope.go:117] "RemoveContainer" containerID="ac8fa7b9143c32d9641e215819dc16f5311c13702eaf6f642554d15d631b175b" Jan 17 12:11:24.129844 containerd[1463]: time="2025-01-17T12:11:24.129799687Z" level=error msg="ContainerStatus for \"ac8fa7b9143c32d9641e215819dc16f5311c13702eaf6f642554d15d631b175b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac8fa7b9143c32d9641e215819dc16f5311c13702eaf6f642554d15d631b175b\": not found" Jan 17 12:11:24.137758 kubelet[2585]: E0117 12:11:24.137709 2585 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac8fa7b9143c32d9641e215819dc16f5311c13702eaf6f642554d15d631b175b\": not found" containerID="ac8fa7b9143c32d9641e215819dc16f5311c13702eaf6f642554d15d631b175b" Jan 17 12:11:24.137958 kubelet[2585]: I0117 12:11:24.137752 2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac8fa7b9143c32d9641e215819dc16f5311c13702eaf6f642554d15d631b175b"} err="failed to get container status \"ac8fa7b9143c32d9641e215819dc16f5311c13702eaf6f642554d15d631b175b\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac8fa7b9143c32d9641e215819dc16f5311c13702eaf6f642554d15d631b175b\": not found" Jan 17 12:11:24.137958 kubelet[2585]: I0117 12:11:24.137844 2585 scope.go:117] "RemoveContainer" containerID="e3a208f494b059f7623c197728545069cddf3a91e724ba37d27f5929bf0177ad" Jan 17 
12:11:24.139312 containerd[1463]: time="2025-01-17T12:11:24.139267580Z" level=info msg="RemoveContainer for \"e3a208f494b059f7623c197728545069cddf3a91e724ba37d27f5929bf0177ad\"" Jan 17 12:11:24.143387 containerd[1463]: time="2025-01-17T12:11:24.143326218Z" level=info msg="RemoveContainer for \"e3a208f494b059f7623c197728545069cddf3a91e724ba37d27f5929bf0177ad\" returns successfully" Jan 17 12:11:24.143589 kubelet[2585]: I0117 12:11:24.143560 2585 scope.go:117] "RemoveContainer" containerID="65f413db69da38104888481bde7c4bd10dc4eb7dab97b3546c7451ee44a783b8" Jan 17 12:11:24.145029 containerd[1463]: time="2025-01-17T12:11:24.144983452Z" level=info msg="RemoveContainer for \"65f413db69da38104888481bde7c4bd10dc4eb7dab97b3546c7451ee44a783b8\"" Jan 17 12:11:24.148528 containerd[1463]: time="2025-01-17T12:11:24.148489592Z" level=info msg="RemoveContainer for \"65f413db69da38104888481bde7c4bd10dc4eb7dab97b3546c7451ee44a783b8\" returns successfully" Jan 17 12:11:24.148739 kubelet[2585]: I0117 12:11:24.148706 2585 scope.go:117] "RemoveContainer" containerID="6732e995263dd3c823ace9aa8e2f584ab812ab0a1d799e896d6637703b240b43" Jan 17 12:11:24.149839 containerd[1463]: time="2025-01-17T12:11:24.149800747Z" level=info msg="RemoveContainer for \"6732e995263dd3c823ace9aa8e2f584ab812ab0a1d799e896d6637703b240b43\"" Jan 17 12:11:24.153394 containerd[1463]: time="2025-01-17T12:11:24.153344896Z" level=info msg="RemoveContainer for \"6732e995263dd3c823ace9aa8e2f584ab812ab0a1d799e896d6637703b240b43\" returns successfully" Jan 17 12:11:24.153564 kubelet[2585]: I0117 12:11:24.153539 2585 scope.go:117] "RemoveContainer" containerID="076bcf762c9c23f1803c5b2e53771d39b9010313c9cab8ee6992fdb89e8234c0" Jan 17 12:11:24.154672 containerd[1463]: time="2025-01-17T12:11:24.154636767Z" level=info msg="RemoveContainer for \"076bcf762c9c23f1803c5b2e53771d39b9010313c9cab8ee6992fdb89e8234c0\"" Jan 17 12:11:24.158178 containerd[1463]: time="2025-01-17T12:11:24.158144409Z" level=info msg="RemoveContainer for \"076bcf762c9c23f1803c5b2e53771d39b9010313c9cab8ee6992fdb89e8234c0\" returns successfully" Jan 17 12:11:24.158393 kubelet[2585]: I0117 12:11:24.158348 2585 scope.go:117] "RemoveContainer" containerID="d9196eca4b3478b6b75dfa366ecbe5e460e31e6019c53c5e1f965918812de8e3" Jan 17 12:11:24.159395 containerd[1463]: time="2025-01-17T12:11:24.159358336Z" level=info msg="RemoveContainer for \"d9196eca4b3478b6b75dfa366ecbe5e460e31e6019c53c5e1f965918812de8e3\"" Jan 17 12:11:24.167918 containerd[1463]: time="2025-01-17T12:11:24.167874596Z" level=info msg="RemoveContainer for \"d9196eca4b3478b6b75dfa366ecbe5e460e31e6019c53c5e1f965918812de8e3\" returns successfully" Jan 17 12:11:24.168087 kubelet[2585]: I0117 12:11:24.168057 2585 scope.go:117] "RemoveContainer" containerID="e3a208f494b059f7623c197728545069cddf3a91e724ba37d27f5929bf0177ad" Jan 17 12:11:24.168345 containerd[1463]: time="2025-01-17T12:11:24.168303887Z" level=error msg="ContainerStatus for \"e3a208f494b059f7623c197728545069cddf3a91e724ba37d27f5929bf0177ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e3a208f494b059f7623c197728545069cddf3a91e724ba37d27f5929bf0177ad\": not found" Jan 17 12:11:24.168558 kubelet[2585]: E0117 12:11:24.168534 2585 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e3a208f494b059f7623c197728545069cddf3a91e724ba37d27f5929bf0177ad\": not found" 
containerID="e3a208f494b059f7623c197728545069cddf3a91e724ba37d27f5929bf0177ad" Jan 17 12:11:24.168614 kubelet[2585]: I0117 12:11:24.168564 2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e3a208f494b059f7623c197728545069cddf3a91e724ba37d27f5929bf0177ad"} err="failed to get container status \"e3a208f494b059f7623c197728545069cddf3a91e724ba37d27f5929bf0177ad\": rpc error: code = NotFound desc = an error occurred when try to find container \"e3a208f494b059f7623c197728545069cddf3a91e724ba37d27f5929bf0177ad\": not found" Jan 17 12:11:24.168614 kubelet[2585]: I0117 12:11:24.168589 2585 scope.go:117] "RemoveContainer" containerID="65f413db69da38104888481bde7c4bd10dc4eb7dab97b3546c7451ee44a783b8" Jan 17 12:11:24.168834 containerd[1463]: time="2025-01-17T12:11:24.168792427Z" level=error msg="ContainerStatus for \"65f413db69da38104888481bde7c4bd10dc4eb7dab97b3546c7451ee44a783b8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"65f413db69da38104888481bde7c4bd10dc4eb7dab97b3546c7451ee44a783b8\": not found" Jan 17 12:11:24.168993 kubelet[2585]: E0117 12:11:24.168947 2585 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"65f413db69da38104888481bde7c4bd10dc4eb7dab97b3546c7451ee44a783b8\": not found" containerID="65f413db69da38104888481bde7c4bd10dc4eb7dab97b3546c7451ee44a783b8" Jan 17 12:11:24.169047 kubelet[2585]: I0117 12:11:24.168986 2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"65f413db69da38104888481bde7c4bd10dc4eb7dab97b3546c7451ee44a783b8"} err="failed to get container status \"65f413db69da38104888481bde7c4bd10dc4eb7dab97b3546c7451ee44a783b8\": rpc error: code = NotFound desc = an error occurred when try to find container \"65f413db69da38104888481bde7c4bd10dc4eb7dab97b3546c7451ee44a783b8\": not found" Jan 17 12:11:24.169047 kubelet[2585]: I0117 12:11:24.169014 2585 scope.go:117] "RemoveContainer" containerID="6732e995263dd3c823ace9aa8e2f584ab812ab0a1d799e896d6637703b240b43" Jan 17 12:11:24.169288 containerd[1463]: time="2025-01-17T12:11:24.169248437Z" level=error msg="ContainerStatus for \"6732e995263dd3c823ace9aa8e2f584ab812ab0a1d799e896d6637703b240b43\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6732e995263dd3c823ace9aa8e2f584ab812ab0a1d799e896d6637703b240b43\": not found" Jan 17 12:11:24.169435 kubelet[2585]: E0117 12:11:24.169407 2585 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6732e995263dd3c823ace9aa8e2f584ab812ab0a1d799e896d6637703b240b43\": not found" containerID="6732e995263dd3c823ace9aa8e2f584ab812ab0a1d799e896d6637703b240b43" Jan 17 12:11:24.169435 kubelet[2585]: I0117 12:11:24.169429 2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6732e995263dd3c823ace9aa8e2f584ab812ab0a1d799e896d6637703b240b43"} err="failed to get container status \"6732e995263dd3c823ace9aa8e2f584ab812ab0a1d799e896d6637703b240b43\": rpc error: code = NotFound desc = an error occurred when try to find container \"6732e995263dd3c823ace9aa8e2f584ab812ab0a1d799e896d6637703b240b43\": not found" Jan 17 12:11:24.169546 kubelet[2585]: I0117 12:11:24.169462 2585 scope.go:117] "RemoveContainer" 
containerID="076bcf762c9c23f1803c5b2e53771d39b9010313c9cab8ee6992fdb89e8234c0" Jan 17 12:11:24.172460 containerd[1463]: time="2025-01-17T12:11:24.169926336Z" level=error msg="ContainerStatus for \"076bcf762c9c23f1803c5b2e53771d39b9010313c9cab8ee6992fdb89e8234c0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"076bcf762c9c23f1803c5b2e53771d39b9010313c9cab8ee6992fdb89e8234c0\": not found" Jan 17 12:11:24.172584 kubelet[2585]: E0117 12:11:24.172544 2585 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"076bcf762c9c23f1803c5b2e53771d39b9010313c9cab8ee6992fdb89e8234c0\": not found" containerID="076bcf762c9c23f1803c5b2e53771d39b9010313c9cab8ee6992fdb89e8234c0" Jan 17 12:11:24.172584 kubelet[2585]: I0117 12:11:24.172572 2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"076bcf762c9c23f1803c5b2e53771d39b9010313c9cab8ee6992fdb89e8234c0"} err="failed to get container status \"076bcf762c9c23f1803c5b2e53771d39b9010313c9cab8ee6992fdb89e8234c0\": rpc error: code = NotFound desc = an error occurred when try to find container \"076bcf762c9c23f1803c5b2e53771d39b9010313c9cab8ee6992fdb89e8234c0\": not found" Jan 17 12:11:24.172584 kubelet[2585]: I0117 12:11:24.172588 2585 scope.go:117] "RemoveContainer" containerID="d9196eca4b3478b6b75dfa366ecbe5e460e31e6019c53c5e1f965918812de8e3" Jan 17 12:11:24.173040 containerd[1463]: time="2025-01-17T12:11:24.172817483Z" level=error msg="ContainerStatus for \"d9196eca4b3478b6b75dfa366ecbe5e460e31e6019c53c5e1f965918812de8e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d9196eca4b3478b6b75dfa366ecbe5e460e31e6019c53c5e1f965918812de8e3\": not found" Jan 17 12:11:24.173088 kubelet[2585]: E0117 12:11:24.172977 2585 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d9196eca4b3478b6b75dfa366ecbe5e460e31e6019c53c5e1f965918812de8e3\": not found" containerID="d9196eca4b3478b6b75dfa366ecbe5e460e31e6019c53c5e1f965918812de8e3" Jan 17 12:11:24.173088 kubelet[2585]: I0117 12:11:24.173008 2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d9196eca4b3478b6b75dfa366ecbe5e460e31e6019c53c5e1f965918812de8e3"} err="failed to get container status \"d9196eca4b3478b6b75dfa366ecbe5e460e31e6019c53c5e1f965918812de8e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"d9196eca4b3478b6b75dfa366ecbe5e460e31e6019c53c5e1f965918812de8e3\": not found" Jan 17 12:11:24.184465 kubelet[2585]: I0117 12:11:24.184399 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-host-proc-sys-net\") pod \"f754e5db-1b09-4249-838a-1341e83f7508\" (UID: \"f754e5db-1b09-4249-838a-1341e83f7508\") " Jan 17 12:11:24.184539 kubelet[2585]: I0117 12:11:24.184482 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-bpf-maps\") pod \"f754e5db-1b09-4249-838a-1341e83f7508\" (UID: \"f754e5db-1b09-4249-838a-1341e83f7508\") " Jan 17 12:11:24.184539 kubelet[2585]: I0117 12:11:24.184514 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume 
started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-cilium-run\") pod \"f754e5db-1b09-4249-838a-1341e83f7508\" (UID: \"f754e5db-1b09-4249-838a-1341e83f7508\") " Jan 17 12:11:24.184618 kubelet[2585]: I0117 12:11:24.184531 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f754e5db-1b09-4249-838a-1341e83f7508" (UID: "f754e5db-1b09-4249-838a-1341e83f7508"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:11:24.184618 kubelet[2585]: I0117 12:11:24.184546 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c26vl\" (UniqueName: \"kubernetes.io/projected/f754e5db-1b09-4249-838a-1341e83f7508-kube-api-access-c26vl\") pod \"f754e5db-1b09-4249-838a-1341e83f7508\" (UID: \"f754e5db-1b09-4249-838a-1341e83f7508\") " Jan 17 12:11:24.184618 kubelet[2585]: I0117 12:11:24.184579 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f754e5db-1b09-4249-838a-1341e83f7508-cilium-config-path\") pod \"f754e5db-1b09-4249-838a-1341e83f7508\" (UID: \"f754e5db-1b09-4249-838a-1341e83f7508\") " Jan 17 12:11:24.184618 kubelet[2585]: I0117 12:11:24.184598 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-lib-modules\") pod \"f754e5db-1b09-4249-838a-1341e83f7508\" (UID: \"f754e5db-1b09-4249-838a-1341e83f7508\") " Jan 17 12:11:24.184618 kubelet[2585]: I0117 12:11:24.184603 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f754e5db-1b09-4249-838a-1341e83f7508" (UID: "f754e5db-1b09-4249-838a-1341e83f7508"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:11:24.184874 kubelet[2585]: I0117 12:11:24.184621 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-etc-cni-netd\") pod \"f754e5db-1b09-4249-838a-1341e83f7508\" (UID: \"f754e5db-1b09-4249-838a-1341e83f7508\") " Jan 17 12:11:24.184874 kubelet[2585]: I0117 12:11:24.184647 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2fjc\" (UniqueName: \"kubernetes.io/projected/170045f2-26b9-4d89-bcf9-a166bac32790-kube-api-access-h2fjc\") pod \"170045f2-26b9-4d89-bcf9-a166bac32790\" (UID: \"170045f2-26b9-4d89-bcf9-a166bac32790\") " Jan 17 12:11:24.184874 kubelet[2585]: I0117 12:11:24.184671 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-cni-path\") pod \"f754e5db-1b09-4249-838a-1341e83f7508\" (UID: \"f754e5db-1b09-4249-838a-1341e83f7508\") " Jan 17 12:11:24.184874 kubelet[2585]: I0117 12:11:24.184691 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-hostproc\") pod \"f754e5db-1b09-4249-838a-1341e83f7508\" (UID: \"f754e5db-1b09-4249-838a-1341e83f7508\") " Jan 17 12:11:24.184874 kubelet[2585]: I0117 12:11:24.184712 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-host-proc-sys-kernel\") pod \"f754e5db-1b09-4249-838a-1341e83f7508\" (UID: \"f754e5db-1b09-4249-838a-1341e83f7508\") " Jan 17 12:11:24.184874 kubelet[2585]: I0117 12:11:24.184734 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f754e5db-1b09-4249-838a-1341e83f7508-hubble-tls\") pod \"f754e5db-1b09-4249-838a-1341e83f7508\" (UID: \"f754e5db-1b09-4249-838a-1341e83f7508\") " Jan 17 12:11:24.185059 kubelet[2585]: I0117 12:11:24.184757 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f754e5db-1b09-4249-838a-1341e83f7508-clustermesh-secrets\") pod \"f754e5db-1b09-4249-838a-1341e83f7508\" (UID: \"f754e5db-1b09-4249-838a-1341e83f7508\") " Jan 17 12:11:24.185059 kubelet[2585]: I0117 12:11:24.184782 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/170045f2-26b9-4d89-bcf9-a166bac32790-cilium-config-path\") pod \"170045f2-26b9-4d89-bcf9-a166bac32790\" (UID: \"170045f2-26b9-4d89-bcf9-a166bac32790\") " Jan 17 12:11:24.185059 kubelet[2585]: I0117 12:11:24.184801 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-xtables-lock\") pod \"f754e5db-1b09-4249-838a-1341e83f7508\" (UID: \"f754e5db-1b09-4249-838a-1341e83f7508\") " Jan 17 12:11:24.185059 kubelet[2585]: I0117 12:11:24.184839 2585 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-cilium-cgroup\") pod \"f754e5db-1b09-4249-838a-1341e83f7508\" (UID: 
\"f754e5db-1b09-4249-838a-1341e83f7508\") " Jan 17 12:11:24.185059 kubelet[2585]: I0117 12:11:24.184882 2585 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 17 12:11:24.185059 kubelet[2585]: I0117 12:11:24.184897 2585 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 17 12:11:24.185243 kubelet[2585]: I0117 12:11:24.184605 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f754e5db-1b09-4249-838a-1341e83f7508" (UID: "f754e5db-1b09-4249-838a-1341e83f7508"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:11:24.185243 kubelet[2585]: I0117 12:11:24.184956 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f754e5db-1b09-4249-838a-1341e83f7508" (UID: "f754e5db-1b09-4249-838a-1341e83f7508"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:11:24.185243 kubelet[2585]: I0117 12:11:24.184973 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f754e5db-1b09-4249-838a-1341e83f7508" (UID: "f754e5db-1b09-4249-838a-1341e83f7508"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:11:24.185243 kubelet[2585]: I0117 12:11:24.184988 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f754e5db-1b09-4249-838a-1341e83f7508" (UID: "f754e5db-1b09-4249-838a-1341e83f7508"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:11:24.188636 kubelet[2585]: I0117 12:11:24.188514 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f754e5db-1b09-4249-838a-1341e83f7508" (UID: "f754e5db-1b09-4249-838a-1341e83f7508"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:11:24.188697 kubelet[2585]: I0117 12:11:24.188661 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-hostproc" (OuterVolumeSpecName: "hostproc") pod "f754e5db-1b09-4249-838a-1341e83f7508" (UID: "f754e5db-1b09-4249-838a-1341e83f7508"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:11:24.188737 kubelet[2585]: I0117 12:11:24.188716 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-cni-path" (OuterVolumeSpecName: "cni-path") pod "f754e5db-1b09-4249-838a-1341e83f7508" (UID: "f754e5db-1b09-4249-838a-1341e83f7508"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:11:24.190295 kubelet[2585]: I0117 12:11:24.189571 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/170045f2-26b9-4d89-bcf9-a166bac32790-kube-api-access-h2fjc" (OuterVolumeSpecName: "kube-api-access-h2fjc") pod "170045f2-26b9-4d89-bcf9-a166bac32790" (UID: "170045f2-26b9-4d89-bcf9-a166bac32790"). InnerVolumeSpecName "kube-api-access-h2fjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:11:24.190295 kubelet[2585]: I0117 12:11:24.189634 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f754e5db-1b09-4249-838a-1341e83f7508" (UID: "f754e5db-1b09-4249-838a-1341e83f7508"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:11:24.190295 kubelet[2585]: I0117 12:11:24.189662 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f754e5db-1b09-4249-838a-1341e83f7508-kube-api-access-c26vl" (OuterVolumeSpecName: "kube-api-access-c26vl") pod "f754e5db-1b09-4249-838a-1341e83f7508" (UID: "f754e5db-1b09-4249-838a-1341e83f7508"). InnerVolumeSpecName "kube-api-access-c26vl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:11:24.190295 kubelet[2585]: I0117 12:11:24.189730 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f754e5db-1b09-4249-838a-1341e83f7508-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f754e5db-1b09-4249-838a-1341e83f7508" (UID: "f754e5db-1b09-4249-838a-1341e83f7508"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:11:24.191155 kubelet[2585]: I0117 12:11:24.191126 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f754e5db-1b09-4249-838a-1341e83f7508-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f754e5db-1b09-4249-838a-1341e83f7508" (UID: "f754e5db-1b09-4249-838a-1341e83f7508"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:11:24.191638 kubelet[2585]: I0117 12:11:24.191599 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f754e5db-1b09-4249-838a-1341e83f7508-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f754e5db-1b09-4249-838a-1341e83f7508" (UID: "f754e5db-1b09-4249-838a-1341e83f7508"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 17 12:11:24.192136 kubelet[2585]: I0117 12:11:24.192098 2585 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/170045f2-26b9-4d89-bcf9-a166bac32790-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "170045f2-26b9-4d89-bcf9-a166bac32790" (UID: "170045f2-26b9-4d89-bcf9-a166bac32790"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:11:24.285928 kubelet[2585]: I0117 12:11:24.285874 2585 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 17 12:11:24.285928 kubelet[2585]: I0117 12:11:24.285919 2585 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 17 12:11:24.285928 kubelet[2585]: I0117 12:11:24.285930 2585 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f754e5db-1b09-4249-838a-1341e83f7508-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 17 12:11:24.285928 kubelet[2585]: I0117 12:11:24.285939 2585 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f754e5db-1b09-4249-838a-1341e83f7508-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 17 12:11:24.286108 kubelet[2585]: I0117 12:11:24.285952 2585 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/170045f2-26b9-4d89-bcf9-a166bac32790-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 17 12:11:24.286108 kubelet[2585]: I0117 12:11:24.285963 2585 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 17 12:11:24.286108 kubelet[2585]: I0117 12:11:24.285971 2585 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 17 12:11:24.286108 kubelet[2585]: I0117 12:11:24.285979 2585 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 17 12:11:24.286108 kubelet[2585]: I0117 12:11:24.285987 2585 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 17 12:11:24.286108 kubelet[2585]: I0117 12:11:24.285995 2585 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-h2fjc\" (UniqueName: \"kubernetes.io/projected/170045f2-26b9-4d89-bcf9-a166bac32790-kube-api-access-h2fjc\") on node \"localhost\" DevicePath \"\"" Jan 17 12:11:24.286108 kubelet[2585]: I0117 12:11:24.286004 2585 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-c26vl\" (UniqueName: \"kubernetes.io/projected/f754e5db-1b09-4249-838a-1341e83f7508-kube-api-access-c26vl\") on node \"localhost\" DevicePath \"\"" Jan 17 12:11:24.286108 kubelet[2585]: I0117 12:11:24.286012 2585 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f754e5db-1b09-4249-838a-1341e83f7508-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 17 12:11:24.286287 kubelet[2585]: I0117 12:11:24.286021 2585 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-lib-modules\") 
on node \"localhost\" DevicePath \"\"" Jan 17 12:11:24.286287 kubelet[2585]: I0117 12:11:24.286028 2585 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f754e5db-1b09-4249-838a-1341e83f7508-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 17 12:11:24.425816 systemd[1]: Removed slice kubepods-besteffort-pod170045f2_26b9_4d89_bcf9_a166bac32790.slice - libcontainer container kubepods-besteffort-pod170045f2_26b9_4d89_bcf9_a166bac32790.slice. Jan 17 12:11:24.430273 systemd[1]: Removed slice kubepods-burstable-podf754e5db_1b09_4249_838a_1341e83f7508.slice - libcontainer container kubepods-burstable-podf754e5db_1b09_4249_838a_1341e83f7508.slice. Jan 17 12:11:24.430385 systemd[1]: kubepods-burstable-podf754e5db_1b09_4249_838a_1341e83f7508.slice: Consumed 7.613s CPU time. Jan 17 12:11:24.864630 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b45303f7233397a1a7b5db32c4c0d8669765e0708a7d615d1e5c3c2bd79e81a4-rootfs.mount: Deactivated successfully. Jan 17 12:11:24.864743 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b45303f7233397a1a7b5db32c4c0d8669765e0708a7d615d1e5c3c2bd79e81a4-shm.mount: Deactivated successfully. Jan 17 12:11:24.864826 systemd[1]: var-lib-kubelet-pods-170045f2\x2d26b9\x2d4d89\x2dbcf9\x2da166bac32790-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh2fjc.mount: Deactivated successfully. Jan 17 12:11:24.864903 systemd[1]: var-lib-kubelet-pods-f754e5db\x2d1b09\x2d4249\x2d838a\x2d1341e83f7508-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc26vl.mount: Deactivated successfully. Jan 17 12:11:24.864988 systemd[1]: var-lib-kubelet-pods-f754e5db\x2d1b09\x2d4249\x2d838a\x2d1341e83f7508-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 17 12:11:24.865060 systemd[1]: var-lib-kubelet-pods-f754e5db\x2d1b09\x2d4249\x2d838a\x2d1341e83f7508-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 17 12:11:24.965739 kubelet[2585]: E0117 12:11:24.965696 2585 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 12:11:25.791307 sshd[4247]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:25.801699 systemd[1]: sshd@24-10.0.0.51:22-10.0.0.1:40654.service: Deactivated successfully. Jan 17 12:11:25.803714 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 12:11:25.805214 systemd-logind[1445]: Session 25 logged out. Waiting for processes to exit. Jan 17 12:11:25.811919 systemd[1]: Started sshd@25-10.0.0.51:22-10.0.0.1:40666.service - OpenSSH per-connection server daemon (10.0.0.1:40666). Jan 17 12:11:25.813249 systemd-logind[1445]: Removed session 25. Jan 17 12:11:25.844482 sshd[4414]: Accepted publickey for core from 10.0.0.1 port 40666 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:11:25.846198 sshd[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:25.851993 systemd-logind[1445]: New session 26 of user core. Jan 17 12:11:25.865629 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 17 12:11:25.896804 kubelet[2585]: E0117 12:11:25.896762 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:11:25.898663 kubelet[2585]: I0117 12:11:25.898625 2585 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="170045f2-26b9-4d89-bcf9-a166bac32790" path="/var/lib/kubelet/pods/170045f2-26b9-4d89-bcf9-a166bac32790/volumes" Jan 17 12:11:25.899217 kubelet[2585]: I0117 12:11:25.899199 2585 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f754e5db-1b09-4249-838a-1341e83f7508" path="/var/lib/kubelet/pods/f754e5db-1b09-4249-838a-1341e83f7508/volumes" Jan 17 12:11:26.537080 sshd[4414]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:26.547528 systemd[1]: sshd@25-10.0.0.51:22-10.0.0.1:40666.service: Deactivated successfully. Jan 17 12:11:26.549961 systemd[1]: session-26.scope: Deactivated successfully. Jan 17 12:11:26.553420 systemd-logind[1445]: Session 26 logged out. Waiting for processes to exit. Jan 17 12:11:26.563756 systemd[1]: Started sshd@26-10.0.0.51:22-10.0.0.1:40680.service - OpenSSH per-connection server daemon (10.0.0.1:40680). Jan 17 12:11:26.566418 systemd-logind[1445]: Removed session 26. Jan 17 12:11:26.576991 kubelet[2585]: I0117 12:11:26.576933 2585 topology_manager.go:215] "Topology Admit Handler" podUID="a925a8d7-59d5-4337-b15a-2da0641ae167" podNamespace="kube-system" podName="cilium-gdk5q" Jan 17 12:11:26.577170 kubelet[2585]: E0117 12:11:26.577016 2585 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f754e5db-1b09-4249-838a-1341e83f7508" containerName="mount-cgroup" Jan 17 12:11:26.577170 kubelet[2585]: E0117 12:11:26.577031 2585 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f754e5db-1b09-4249-838a-1341e83f7508" containerName="clean-cilium-state" Jan 17 12:11:26.577170 kubelet[2585]: E0117 12:11:26.577039 2585 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f754e5db-1b09-4249-838a-1341e83f7508" containerName="cilium-agent" Jan 17 12:11:26.577170 kubelet[2585]: E0117 12:11:26.577046 2585 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="170045f2-26b9-4d89-bcf9-a166bac32790" containerName="cilium-operator" Jan 17 12:11:26.577170 kubelet[2585]: E0117 12:11:26.577053 2585 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f754e5db-1b09-4249-838a-1341e83f7508" containerName="apply-sysctl-overwrites" Jan 17 12:11:26.577170 kubelet[2585]: E0117 12:11:26.577060 2585 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f754e5db-1b09-4249-838a-1341e83f7508" containerName="mount-bpf-fs" Jan 17 12:11:26.577170 kubelet[2585]: I0117 12:11:26.577091 2585 memory_manager.go:354] "RemoveStaleState removing state" podUID="f754e5db-1b09-4249-838a-1341e83f7508" containerName="cilium-agent" Jan 17 12:11:26.577170 kubelet[2585]: I0117 12:11:26.577102 2585 memory_manager.go:354] "RemoveStaleState removing state" podUID="170045f2-26b9-4d89-bcf9-a166bac32790" containerName="cilium-operator" Jan 17 12:11:26.595663 sshd[4428]: Accepted publickey for core from 10.0.0.1 port 40680 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:11:26.597155 sshd[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:26.606512 systemd-logind[1445]: New session 27 of user core.
Jan 17 12:11:26.613892 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 17 12:11:26.616081 systemd[1]: Created slice kubepods-burstable-poda925a8d7_59d5_4337_b15a_2da0641ae167.slice - libcontainer container kubepods-burstable-poda925a8d7_59d5_4337_b15a_2da0641ae167.slice. Jan 17 12:11:26.667077 sshd[4428]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:26.677428 systemd[1]: sshd@26-10.0.0.51:22-10.0.0.1:40680.service: Deactivated successfully. Jan 17 12:11:26.679276 systemd[1]: session-27.scope: Deactivated successfully. Jan 17 12:11:26.681026 systemd-logind[1445]: Session 27 logged out. Waiting for processes to exit. Jan 17 12:11:26.685675 systemd[1]: Started sshd@27-10.0.0.51:22-10.0.0.1:40682.service - OpenSSH per-connection server daemon (10.0.0.1:40682). Jan 17 12:11:26.686693 systemd-logind[1445]: Removed session 27. Jan 17 12:11:26.698439 kubelet[2585]: I0117 12:11:26.698400 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a925a8d7-59d5-4337-b15a-2da0641ae167-hubble-tls\") pod \"cilium-gdk5q\" (UID: \"a925a8d7-59d5-4337-b15a-2da0641ae167\") " pod="kube-system/cilium-gdk5q" Jan 17 12:11:26.698538 kubelet[2585]: I0117 12:11:26.698459 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a925a8d7-59d5-4337-b15a-2da0641ae167-etc-cni-netd\") pod \"cilium-gdk5q\" (UID: \"a925a8d7-59d5-4337-b15a-2da0641ae167\") " pod="kube-system/cilium-gdk5q" Jan 17 12:11:26.698538 kubelet[2585]: I0117 12:11:26.698488 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a925a8d7-59d5-4337-b15a-2da0641ae167-clustermesh-secrets\") pod \"cilium-gdk5q\" (UID: \"a925a8d7-59d5-4337-b15a-2da0641ae167\") " pod="kube-system/cilium-gdk5q" Jan 17 12:11:26.698538 kubelet[2585]: I0117 12:11:26.698511 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a925a8d7-59d5-4337-b15a-2da0641ae167-host-proc-sys-kernel\") pod \"cilium-gdk5q\" (UID: \"a925a8d7-59d5-4337-b15a-2da0641ae167\") " pod="kube-system/cilium-gdk5q" Jan 17 12:11:26.698538 kubelet[2585]: I0117 12:11:26.698532 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a925a8d7-59d5-4337-b15a-2da0641ae167-xtables-lock\") pod \"cilium-gdk5q\" (UID: \"a925a8d7-59d5-4337-b15a-2da0641ae167\") " pod="kube-system/cilium-gdk5q" Jan 17 12:11:26.698634 kubelet[2585]: I0117 12:11:26.698551 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a925a8d7-59d5-4337-b15a-2da0641ae167-cilium-run\") pod \"cilium-gdk5q\" (UID: \"a925a8d7-59d5-4337-b15a-2da0641ae167\") " pod="kube-system/cilium-gdk5q" Jan 17 12:11:26.698634 kubelet[2585]: I0117 12:11:26.698571 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a925a8d7-59d5-4337-b15a-2da0641ae167-lib-modules\") pod \"cilium-gdk5q\" (UID: \"a925a8d7-59d5-4337-b15a-2da0641ae167\") " pod="kube-system/cilium-gdk5q" Jan 17 12:11:26.698634 kubelet[2585]: I0117 12:11:26.698590 2585 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a925a8d7-59d5-4337-b15a-2da0641ae167-cilium-config-path\") pod \"cilium-gdk5q\" (UID: \"a925a8d7-59d5-4337-b15a-2da0641ae167\") " pod="kube-system/cilium-gdk5q" Jan 17 12:11:26.698634 kubelet[2585]: I0117 12:11:26.698609 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a925a8d7-59d5-4337-b15a-2da0641ae167-bpf-maps\") pod \"cilium-gdk5q\" (UID: \"a925a8d7-59d5-4337-b15a-2da0641ae167\") " pod="kube-system/cilium-gdk5q" Jan 17 12:11:26.698634 kubelet[2585]: I0117 12:11:26.698630 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a925a8d7-59d5-4337-b15a-2da0641ae167-host-proc-sys-net\") pod \"cilium-gdk5q\" (UID: \"a925a8d7-59d5-4337-b15a-2da0641ae167\") " pod="kube-system/cilium-gdk5q" Jan 17 12:11:26.698740 kubelet[2585]: I0117 12:11:26.698651 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a925a8d7-59d5-4337-b15a-2da0641ae167-cilium-cgroup\") pod \"cilium-gdk5q\" (UID: \"a925a8d7-59d5-4337-b15a-2da0641ae167\") " pod="kube-system/cilium-gdk5q" Jan 17 12:11:26.698740 kubelet[2585]: I0117 12:11:26.698673 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a925a8d7-59d5-4337-b15a-2da0641ae167-cni-path\") pod \"cilium-gdk5q\" (UID: \"a925a8d7-59d5-4337-b15a-2da0641ae167\") " pod="kube-system/cilium-gdk5q" Jan 17 12:11:26.698740 kubelet[2585]: I0117 12:11:26.698694 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a925a8d7-59d5-4337-b15a-2da0641ae167-cilium-ipsec-secrets\") pod \"cilium-gdk5q\" (UID: \"a925a8d7-59d5-4337-b15a-2da0641ae167\") " pod="kube-system/cilium-gdk5q" Jan 17 12:11:26.698806 kubelet[2585]: I0117 12:11:26.698754 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a925a8d7-59d5-4337-b15a-2da0641ae167-hostproc\") pod \"cilium-gdk5q\" (UID: \"a925a8d7-59d5-4337-b15a-2da0641ae167\") " pod="kube-system/cilium-gdk5q" Jan 17 12:11:26.698806 kubelet[2585]: I0117 12:11:26.698784 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lzn5\" (UniqueName: \"kubernetes.io/projected/a925a8d7-59d5-4337-b15a-2da0641ae167-kube-api-access-8lzn5\") pod \"cilium-gdk5q\" (UID: \"a925a8d7-59d5-4337-b15a-2da0641ae167\") " pod="kube-system/cilium-gdk5q" Jan 17 12:11:26.713874 sshd[4436]: Accepted publickey for core from 10.0.0.1 port 40682 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:11:26.715309 sshd[4436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:26.719276 systemd-logind[1445]: New session 28 of user core. Jan 17 12:11:26.727590 systemd[1]: Started session-28.scope - Session 28 of User core. 
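The VerifyControllerAttachedVolume burst above declares the replacement pod's volume set before any container starts, and it mirrors the volumes just detached from the old pod plus one addition, cilium-ipsec-secrets. A small sketch that pulls (volume, plugin) pairs out of such reconciler lines; the regex and embedded sample line are illustrative, matching the escaped-quote journal format seen here.

    import re

    # Extract (volume, plugin) pairs from kubelet reconciler lines like the ones above.
    # The journal text escapes quotes, so the pattern matches \" literally.
    VOL = re.compile(r'volume \\"([^"\\]+)\\" \(UniqueName: \\"kubernetes\.io/([a-z-]+)/')

    line = (
        'reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started '
        'for volume \\"cilium-ipsec-secrets\\" (UniqueName: \\"kubernetes.io/secret/'
        'a925a8d7-59d5-4337-b15a-2da0641ae167-cilium-ipsec-secrets\\") pod \\"cilium-gdk5q\\""'
    )
    for name, plugin in VOL.findall(line):
        print(f"{name}: {plugin}")   # -> cilium-ipsec-secrets: secret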
Jan 17 12:11:26.919281 kubelet[2585]: E0117 12:11:26.919128 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:11:26.919857 containerd[1463]: time="2025-01-17T12:11:26.919808600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gdk5q,Uid:a925a8d7-59d5-4337-b15a-2da0641ae167,Namespace:kube-system,Attempt:0,}" Jan 17 12:11:26.943889 containerd[1463]: time="2025-01-17T12:11:26.943759637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:11:26.943889 containerd[1463]: time="2025-01-17T12:11:26.943830818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:11:26.943889 containerd[1463]: time="2025-01-17T12:11:26.943843290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:11:26.944071 containerd[1463]: time="2025-01-17T12:11:26.943935631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:11:26.964702 systemd[1]: Started cri-containerd-effd25214819c28cc242f5a05305156c69997e4b688ef5a79bc3dfe345f983f4.scope - libcontainer container effd25214819c28cc242f5a05305156c69997e4b688ef5a79bc3dfe345f983f4. Jan 17 12:11:26.988375 containerd[1463]: time="2025-01-17T12:11:26.988334474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gdk5q,Uid:a925a8d7-59d5-4337-b15a-2da0641ae167,Namespace:kube-system,Attempt:0,} returns sandbox id \"effd25214819c28cc242f5a05305156c69997e4b688ef5a79bc3dfe345f983f4\"" Jan 17 12:11:26.989423 kubelet[2585]: E0117 12:11:26.989395 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:11:26.992298 containerd[1463]: time="2025-01-17T12:11:26.992249311Z" level=info msg="CreateContainer within sandbox \"effd25214819c28cc242f5a05305156c69997e4b688ef5a79bc3dfe345f983f4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 12:11:27.007887 containerd[1463]: time="2025-01-17T12:11:27.007812668Z" level=info msg="CreateContainer within sandbox \"effd25214819c28cc242f5a05305156c69997e4b688ef5a79bc3dfe345f983f4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7fcefad959731c3008098c326b92241fdc7b3ee568c63fab24e0a9e3e7f757d7\"" Jan 17 12:11:27.008589 containerd[1463]: time="2025-01-17T12:11:27.008532114Z" level=info msg="StartContainer for \"7fcefad959731c3008098c326b92241fdc7b3ee568c63fab24e0a9e3e7f757d7\"" Jan 17 12:11:27.043709 systemd[1]: Started cri-containerd-7fcefad959731c3008098c326b92241fdc7b3ee568c63fab24e0a9e3e7f757d7.scope - libcontainer container 7fcefad959731c3008098c326b92241fdc7b3ee568c63fab24e0a9e3e7f757d7. Jan 17 12:11:27.073436 containerd[1463]: time="2025-01-17T12:11:27.073376527Z" level=info msg="StartContainer for \"7fcefad959731c3008098c326b92241fdc7b3ee568c63fab24e0a9e3e7f757d7\" returns successfully" Jan 17 12:11:27.083763 systemd[1]: cri-containerd-7fcefad959731c3008098c326b92241fdc7b3ee568c63fab24e0a9e3e7f757d7.scope: Deactivated successfully. 
Jan 17 12:11:27.119642 containerd[1463]: time="2025-01-17T12:11:27.119548284Z" level=info msg="shim disconnected" id=7fcefad959731c3008098c326b92241fdc7b3ee568c63fab24e0a9e3e7f757d7 namespace=k8s.io
Jan 17 12:11:27.119642 containerd[1463]: time="2025-01-17T12:11:27.119625347Z" level=warning msg="cleaning up after shim disconnected" id=7fcefad959731c3008098c326b92241fdc7b3ee568c63fab24e0a9e3e7f757d7 namespace=k8s.io
Jan 17 12:11:27.119642 containerd[1463]: time="2025-01-17T12:11:27.119634584Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:11:27.131687 kubelet[2585]: E0117 12:11:27.131640 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:11:28.134252 kubelet[2585]: E0117 12:11:28.134205 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:11:28.136099 containerd[1463]: time="2025-01-17T12:11:28.136064707Z" level=info msg="CreateContainer within sandbox \"effd25214819c28cc242f5a05305156c69997e4b688ef5a79bc3dfe345f983f4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 17 12:11:28.215224 containerd[1463]: time="2025-01-17T12:11:28.215098971Z" level=info msg="CreateContainer within sandbox \"effd25214819c28cc242f5a05305156c69997e4b688ef5a79bc3dfe345f983f4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6a4df0002ea646c8bb54da3df1597520716bc0edc7a8af7392a0b711475c2617\""
Jan 17 12:11:28.216019 containerd[1463]: time="2025-01-17T12:11:28.215893307Z" level=info msg="StartContainer for \"6a4df0002ea646c8bb54da3df1597520716bc0edc7a8af7392a0b711475c2617\""
Jan 17 12:11:28.245618 systemd[1]: Started cri-containerd-6a4df0002ea646c8bb54da3df1597520716bc0edc7a8af7392a0b711475c2617.scope - libcontainer container 6a4df0002ea646c8bb54da3df1597520716bc0edc7a8af7392a0b711475c2617.
Jan 17 12:11:28.280485 containerd[1463]: time="2025-01-17T12:11:28.280412204Z" level=info msg="StartContainer for \"6a4df0002ea646c8bb54da3df1597520716bc0edc7a8af7392a0b711475c2617\" returns successfully"
Jan 17 12:11:28.282937 systemd[1]: cri-containerd-6a4df0002ea646c8bb54da3df1597520716bc0edc7a8af7392a0b711475c2617.scope: Deactivated successfully.
Jan 17 12:11:28.452002 containerd[1463]: time="2025-01-17T12:11:28.451834679Z" level=info msg="shim disconnected" id=6a4df0002ea646c8bb54da3df1597520716bc0edc7a8af7392a0b711475c2617 namespace=k8s.io
Jan 17 12:11:28.452002 containerd[1463]: time="2025-01-17T12:11:28.451895400Z" level=warning msg="cleaning up after shim disconnected" id=6a4df0002ea646c8bb54da3df1597520716bc0edc7a8af7392a0b711475c2617 namespace=k8s.io
Jan 17 12:11:28.452002 containerd[1463]: time="2025-01-17T12:11:28.451905279Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:11:28.805549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a4df0002ea646c8bb54da3df1597520716bc0edc7a8af7392a0b711475c2617-rootfs.mount: Deactivated successfully.
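The dns.go:153 "Nameserver limits exceeded" records that recur throughout this window mean the node's /etc/resolv.conf lists more nameservers than the kubelet will propagate to pods: it keeps the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) and logs the rest as omitted. A stand-alone sketch of that check, assuming the limit of three used by kubelet's validation; this is an illustration, not kubelet's actual code:

    // Sketch: reproduce the kubelet's nameserver-count check against
    // the node's resolv.conf. The limit of 3 mirrors kubelet's
    // validation constant.
    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    const maxDNSNameservers = 3

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxDNSNameservers {
            // Matches the shape of the kubelet message in the log.
            fmt.Printf("nameserver limits exceeded, applied line is: %s\n",
                strings.Join(servers[:maxDNSNameservers], " "))
        }
    }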
Jan 17 12:11:29.138210 kubelet[2585]: E0117 12:11:29.138061 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:11:29.139786 containerd[1463]: time="2025-01-17T12:11:29.139732600Z" level=info msg="CreateContainer within sandbox \"effd25214819c28cc242f5a05305156c69997e4b688ef5a79bc3dfe345f983f4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 17 12:11:29.158655 containerd[1463]: time="2025-01-17T12:11:29.158605396Z" level=info msg="CreateContainer within sandbox \"effd25214819c28cc242f5a05305156c69997e4b688ef5a79bc3dfe345f983f4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6151e2002efd59f59bcfa929b26ceb88379da1f1a8526521019533d363c1a99a\""
Jan 17 12:11:29.159298 containerd[1463]: time="2025-01-17T12:11:29.159264391Z" level=info msg="StartContainer for \"6151e2002efd59f59bcfa929b26ceb88379da1f1a8526521019533d363c1a99a\""
Jan 17 12:11:29.203649 systemd[1]: Started cri-containerd-6151e2002efd59f59bcfa929b26ceb88379da1f1a8526521019533d363c1a99a.scope - libcontainer container 6151e2002efd59f59bcfa929b26ceb88379da1f1a8526521019533d363c1a99a.
Jan 17 12:11:29.233747 containerd[1463]: time="2025-01-17T12:11:29.233706505Z" level=info msg="StartContainer for \"6151e2002efd59f59bcfa929b26ceb88379da1f1a8526521019533d363c1a99a\" returns successfully"
Jan 17 12:11:29.235014 systemd[1]: cri-containerd-6151e2002efd59f59bcfa929b26ceb88379da1f1a8526521019533d363c1a99a.scope: Deactivated successfully.
Jan 17 12:11:29.260280 containerd[1463]: time="2025-01-17T12:11:29.260212044Z" level=info msg="shim disconnected" id=6151e2002efd59f59bcfa929b26ceb88379da1f1a8526521019533d363c1a99a namespace=k8s.io
Jan 17 12:11:29.260280 containerd[1463]: time="2025-01-17T12:11:29.260271614Z" level=warning msg="cleaning up after shim disconnected" id=6151e2002efd59f59bcfa929b26ceb88379da1f1a8526521019533d363c1a99a namespace=k8s.io
Jan 17 12:11:29.260280 containerd[1463]: time="2025-01-17T12:11:29.260283446Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:11:29.805003 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6151e2002efd59f59bcfa929b26ceb88379da1f1a8526521019533d363c1a99a-rootfs.mount: Deactivated successfully.
Jan 17 12:11:29.966743 kubelet[2585]: E0117 12:11:29.966696 2585 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 17 12:11:30.141402 kubelet[2585]: E0117 12:11:30.141097 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:11:30.145927 containerd[1463]: time="2025-01-17T12:11:30.145785125Z" level=info msg="CreateContainer within sandbox \"effd25214819c28cc242f5a05305156c69997e4b688ef5a79bc3dfe345f983f4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 17 12:11:30.161058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2069926569.mount: Deactivated successfully.
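mount-bpf-fs is the Cilium init step that ensures a BPF filesystem is mounted at /sys/fs/bpf, the same host path the bpf-maps volume exposes into the pod. A minimal sketch of the equivalent mount call follows; it assumes CAP_SYS_ADMIN, and the "already mounted" check is deliberately crude:

    // Sketch: ensure a bpffs mount at /sys/fs/bpf, roughly what the
    // mount-bpf-fs init container achieves. Equivalent to
    // `mount -t bpf bpffs /sys/fs/bpf`.
    package main

    import (
        "log"
        "os"
        "strings"

        "golang.org/x/sys/unix"
    )

    func main() {
        data, err := os.ReadFile("/proc/mounts")
        if err != nil {
            log.Fatal(err)
        }
        // Crude check: a bpf entry at the target path means we are done.
        if strings.Contains(string(data), " /sys/fs/bpf bpf ") {
            return
        }
        if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
            log.Fatal(err)
        }
    }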
Jan 17 12:11:30.162348 containerd[1463]: time="2025-01-17T12:11:30.162298655Z" level=info msg="CreateContainer within sandbox \"effd25214819c28cc242f5a05305156c69997e4b688ef5a79bc3dfe345f983f4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e31b66e0455b9bb8627479d80558c54867e433b96cb3a6c7518fdc661aa5471d\""
Jan 17 12:11:30.162851 containerd[1463]: time="2025-01-17T12:11:30.162828032Z" level=info msg="StartContainer for \"e31b66e0455b9bb8627479d80558c54867e433b96cb3a6c7518fdc661aa5471d\""
Jan 17 12:11:30.190613 systemd[1]: Started cri-containerd-e31b66e0455b9bb8627479d80558c54867e433b96cb3a6c7518fdc661aa5471d.scope - libcontainer container e31b66e0455b9bb8627479d80558c54867e433b96cb3a6c7518fdc661aa5471d.
Jan 17 12:11:30.214904 systemd[1]: cri-containerd-e31b66e0455b9bb8627479d80558c54867e433b96cb3a6c7518fdc661aa5471d.scope: Deactivated successfully.
Jan 17 12:11:30.217176 containerd[1463]: time="2025-01-17T12:11:30.217139498Z" level=info msg="StartContainer for \"e31b66e0455b9bb8627479d80558c54867e433b96cb3a6c7518fdc661aa5471d\" returns successfully"
Jan 17 12:11:30.249120 containerd[1463]: time="2025-01-17T12:11:30.249026464Z" level=info msg="shim disconnected" id=e31b66e0455b9bb8627479d80558c54867e433b96cb3a6c7518fdc661aa5471d namespace=k8s.io
Jan 17 12:11:30.249120 containerd[1463]: time="2025-01-17T12:11:30.249096453Z" level=warning msg="cleaning up after shim disconnected" id=e31b66e0455b9bb8627479d80558c54867e433b96cb3a6c7518fdc661aa5471d namespace=k8s.io
Jan 17 12:11:30.249120 containerd[1463]: time="2025-01-17T12:11:30.249107382Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:11:30.805495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e31b66e0455b9bb8627479d80558c54867e433b96cb3a6c7518fdc661aa5471d-rootfs.mount: Deactivated successfully.
Jan 17 12:11:31.146709 kubelet[2585]: E0117 12:11:31.146558 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:11:31.148907 containerd[1463]: time="2025-01-17T12:11:31.148859332Z" level=info msg="CreateContainer within sandbox \"effd25214819c28cc242f5a05305156c69997e4b688ef5a79bc3dfe345f983f4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 17 12:11:31.463766 containerd[1463]: time="2025-01-17T12:11:31.463715660Z" level=info msg="CreateContainer within sandbox \"effd25214819c28cc242f5a05305156c69997e4b688ef5a79bc3dfe345f983f4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c6eb7ac814196a3ebadd00d48f9edf2e30ab246d4b45137fee5fd2b9a491fb48\""
Jan 17 12:11:31.464839 containerd[1463]: time="2025-01-17T12:11:31.464800632Z" level=info msg="StartContainer for \"c6eb7ac814196a3ebadd00d48f9edf2e30ab246d4b45137fee5fd2b9a491fb48\""
Jan 17 12:11:31.496601 systemd[1]: Started cri-containerd-c6eb7ac814196a3ebadd00d48f9edf2e30ab246d4b45137fee5fd2b9a491fb48.scope - libcontainer container c6eb7ac814196a3ebadd00d48f9edf2e30ab246d4b45137fee5fd2b9a491fb48.
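Each init step (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) produces the same Started/Deactivated scope pair because its containerd task runs to completion almost immediately, after which the shim exits and is cleaned up. A generic sketch of that task lifecycle with the containerd client; the container id is reused from the log purely for illustration, and this is not a replay of the CRI plugin's own code path:

    // Sketch: the create/start/wait task lifecycle behind the
    // Started/Deactivated scope pairs for short-lived init containers.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed containers live in the k8s.io namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        container, err := client.LoadContainer(ctx,
            "e31b66e0455b9bb8627479d80558c54867e433b96cb3a6c7518fdc661aa5471d")
        if err != nil {
            log.Fatal(err)
        }
        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            log.Fatal(err)
        }
        defer task.Delete(ctx)

        // Register for the exit event before starting the task.
        exitCh, err := task.Wait(ctx)
        if err != nil {
            log.Fatal(err)
        }
        if err := task.Start(ctx); err != nil {
            log.Fatal(err)
        }
        status := <-exitCh // init steps like clean-cilium-state exit quickly
        code, _, _ := status.Result()
        fmt.Println("exit code:", code)
    }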
Jan 17 12:11:31.526556 containerd[1463]: time="2025-01-17T12:11:31.526517318Z" level=info msg="StartContainer for \"c6eb7ac814196a3ebadd00d48f9edf2e30ab246d4b45137fee5fd2b9a491fb48\" returns successfully"
Jan 17 12:11:31.968738 kubelet[2585]: I0117 12:11:31.968672 2585 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-17T12:11:31Z","lastTransitionTime":"2025-01-17T12:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 17 12:11:32.052477 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 17 12:11:32.151175 kubelet[2585]: E0117 12:11:32.151145 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:11:32.163285 kubelet[2585]: I0117 12:11:32.163218 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gdk5q" podStartSLOduration=6.163191844 podStartE2EDuration="6.163191844s" podCreationTimestamp="2025-01-17 12:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:11:32.162745941 +0000 UTC m=+92.354681481" watchObservedRunningTime="2025-01-17 12:11:32.163191844 +0000 UTC m=+92.355127374"
Jan 17 12:11:32.896630 kubelet[2585]: E0117 12:11:32.896580 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:11:33.152931 kubelet[2585]: E0117 12:11:33.152773 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:11:34.896783 kubelet[2585]: E0117 12:11:34.896713 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:11:35.134894 systemd[1]: run-containerd-runc-k8s.io-c6eb7ac814196a3ebadd00d48f9edf2e30ab246d4b45137fee5fd2b9a491fb48-runc.Y2m9kC.mount: Deactivated successfully.
Jan 17 12:11:35.144223 systemd-networkd[1385]: lxc_health: Link UP
Jan 17 12:11:35.154183 systemd-networkd[1385]: lxc_health: Gained carrier
Jan 17 12:11:36.921572 kubelet[2585]: E0117 12:11:36.921533 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:11:37.117668 systemd-networkd[1385]: lxc_health: Gained IPv6LL
Jan 17 12:11:37.183684 kubelet[2585]: E0117 12:11:37.183417 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:11:41.574843 sshd[4436]: pam_unix(sshd:session): session closed for user core
Jan 17 12:11:41.578649 systemd[1]: sshd@27-10.0.0.51:22-10.0.0.1:40682.service: Deactivated successfully.
Jan 17 12:11:41.580509 systemd[1]: session-28.scope: Deactivated successfully.
Jan 17 12:11:41.581186 systemd-logind[1445]: Session 28 logged out. Waiting for processes to exit.
Jan 17 12:11:41.582222 systemd-logind[1445]: Removed session 28.
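The lxc_health interface that systemd-networkd reports above is a veth device the Cilium agent sets up for its health endpoint once it is running; bringing it up is what yields "Link UP" followed by "Gained carrier". A minimal equivalent with the netlink package; only "lxc_health" itself appears in the log, and the peer name here is hypothetical:

    // Sketch: create and bring up a veth pair like the agent's
    // lxc_health health-check interface. Needs CAP_NET_ADMIN.
    package main

    import (
        "log"

        "github.com/vishvananda/netlink"
    )

    func main() {
        veth := &netlink.Veth{
            LinkAttrs: netlink.LinkAttrs{Name: "lxc_health"},
            PeerName:  "health_peer", // hypothetical peer name
        }
        if err := netlink.LinkAdd(veth); err != nil {
            log.Fatal(err)
        }
        // Setting the link up is what produces "Link UP" and then
        // "Gained carrier" in the systemd-networkd records.
        if err := netlink.LinkSetUp(veth); err != nil {
            log.Fatal(err)
        }
    }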