Oct 8 19:56:48.959321 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:24:27 -00 2024
Oct 8 19:56:48.959363 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 19:56:48.959377 kernel: BIOS-provided physical RAM map:
Oct 8 19:56:48.959385 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Oct 8 19:56:48.959393 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Oct 8 19:56:48.959402 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Oct 8 19:56:48.959412 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Oct 8 19:56:48.959421 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Oct 8 19:56:48.959430 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Oct 8 19:56:48.959438 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Oct 8 19:56:48.959448 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Oct 8 19:56:48.959467 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Oct 8 19:56:48.959473 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Oct 8 19:56:48.959479 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Oct 8 19:56:48.959487 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Oct 8 19:56:48.959494 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Oct 8 19:56:48.959503 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Oct 8 19:56:48.959510 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Oct 8 19:56:48.959517 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Oct 8 19:56:48.959523 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 8 19:56:48.959530 kernel: NX (Execute Disable) protection: active
Oct 8 19:56:48.959537 kernel: APIC: Static calls initialized
Oct 8 19:56:48.959543 kernel: efi: EFI v2.7 by EDK II
Oct 8 19:56:48.959550 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Oct 8 19:56:48.959557 kernel: SMBIOS 2.8 present.
Oct 8 19:56:48.959564 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Oct 8 19:56:48.959570 kernel: Hypervisor detected: KVM
Oct 8 19:56:48.959579 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 8 19:56:48.959586 kernel: kvm-clock: using sched offset of 4283752958 cycles
Oct 8 19:56:48.959594 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 8 19:56:48.959601 kernel: tsc: Detected 2794.748 MHz processor
Oct 8 19:56:48.959608 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 8 19:56:48.959615 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 8 19:56:48.959622 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Oct 8 19:56:48.959629 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Oct 8 19:56:48.959636 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 8 19:56:48.959647 kernel: Using GB pages for direct mapping
Oct 8 19:56:48.959655 kernel: Secure boot disabled
Oct 8 19:56:48.959665 kernel: ACPI: Early table checksum verification disabled
Oct 8 19:56:48.959675 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Oct 8 19:56:48.959690 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Oct 8 19:56:48.959701 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:56:48.959711 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:56:48.959721 kernel: ACPI: FACS 0x000000009CBDD000 000040
Oct 8 19:56:48.959729 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:56:48.959737 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:56:48.959744 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:56:48.959751 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:56:48.959758 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Oct 8 19:56:48.959765 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Oct 8 19:56:48.959776 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Oct 8 19:56:48.959783 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Oct 8 19:56:48.959790 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Oct 8 19:56:48.959797 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Oct 8 19:56:48.959804 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Oct 8 19:56:48.959811 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Oct 8 19:56:48.959818 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Oct 8 19:56:48.959825 kernel: No NUMA configuration found
Oct 8 19:56:48.959832 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Oct 8 19:56:48.959842 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Oct 8 19:56:48.959849 kernel: Zone ranges:
Oct 8 19:56:48.959856 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 8 19:56:48.959863 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Oct 8 19:56:48.959873 kernel: Normal empty
Oct 8 19:56:48.959882 kernel: Movable zone start for each node
Oct 8 19:56:48.959892 kernel: Early memory node ranges
Oct 8 19:56:48.959899 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Oct 8 19:56:48.959906 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Oct 8 19:56:48.959913 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Oct 8 19:56:48.959923 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Oct 8 19:56:48.959931 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Oct 8 19:56:48.959938 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Oct 8 19:56:48.959946 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Oct 8 19:56:48.959953 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 8 19:56:48.959961 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Oct 8 19:56:48.959968 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Oct 8 19:56:48.959976 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 8 19:56:48.959983 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Oct 8 19:56:48.959994 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Oct 8 19:56:48.960002 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Oct 8 19:56:48.960009 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 8 19:56:48.960017 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 8 19:56:48.960024 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 8 19:56:48.960032 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 8 19:56:48.960039 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 8 19:56:48.960047 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 8 19:56:48.960054 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 8 19:56:48.960064 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 8 19:56:48.960072 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 8 19:56:48.960080 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 8 19:56:48.960087 kernel: TSC deadline timer available
Oct 8 19:56:48.960094 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Oct 8 19:56:48.960102 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 8 19:56:48.960110 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 8 19:56:48.960117 kernel: kvm-guest: setup PV sched yield
Oct 8 19:56:48.960125 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Oct 8 19:56:48.960135 kernel: Booting paravirtualized kernel on KVM
Oct 8 19:56:48.960155 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 8 19:56:48.960165 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 8 19:56:48.960176 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288
Oct 8 19:56:48.960185 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152
Oct 8 19:56:48.960193 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 8 19:56:48.960199 kernel: kvm-guest: PV spinlocks enabled
Oct 8 19:56:48.960207 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 8 19:56:48.960223 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 8 19:56:48.960246 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 8 19:56:48.960256 kernel: random: crng init done
Oct 8 19:56:48.960266 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 8 19:56:48.960275 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 8 19:56:48.960284 kernel: Fallback order for Node 0: 0
Oct 8 19:56:48.960291 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Oct 8 19:56:48.960298 kernel: Policy zone: DMA32
Oct 8 19:56:48.960305 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 8 19:56:48.960312 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2305K rwdata, 22716K rodata, 42828K init, 2360K bss, 171124K reserved, 0K cma-reserved)
Oct 8 19:56:48.960329 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 8 19:56:48.960339 kernel: ftrace: allocating 37784 entries in 148 pages
Oct 8 19:56:48.960348 kernel: ftrace: allocated 148 pages with 3 groups
Oct 8 19:56:48.960358 kernel: Dynamic Preempt: voluntary
Oct 8 19:56:48.960382 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 8 19:56:48.960395 kernel: rcu: RCU event tracing is enabled.
Oct 8 19:56:48.960403 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 8 19:56:48.960411 kernel: Trampoline variant of Tasks RCU enabled.
Oct 8 19:56:48.960421 kernel: Rude variant of Tasks RCU enabled.
Oct 8 19:56:48.960432 kernel: Tracing variant of Tasks RCU enabled.
Oct 8 19:56:48.960442 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 8 19:56:48.960480 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 8 19:56:48.960492 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 8 19:56:48.960499 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 8 19:56:48.960509 kernel: Console: colour dummy device 80x25
Oct 8 19:56:48.960520 kernel: printk: console [ttyS0] enabled
Oct 8 19:56:48.960538 kernel: ACPI: Core revision 20230628
Oct 8 19:56:48.960549 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 8 19:56:48.960559 kernel: APIC: Switch to symmetric I/O mode setup
Oct 8 19:56:48.960569 kernel: x2apic enabled
Oct 8 19:56:48.960579 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 8 19:56:48.960588 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 8 19:56:48.960597 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 8 19:56:48.960607 kernel: kvm-guest: setup PV IPIs
Oct 8 19:56:48.960617 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 8 19:56:48.960631 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 8 19:56:48.960642 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Oct 8 19:56:48.960652 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 8 19:56:48.960662 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 8 19:56:48.960672 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 8 19:56:48.960682 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 8 19:56:48.960692 kernel: Spectre V2 : Mitigation: Retpolines
Oct 8 19:56:48.960703 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 8 19:56:48.960713 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 8 19:56:48.960727 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 8 19:56:48.960737 kernel: RETBleed: Mitigation: untrained return thunk
Oct 8 19:56:48.960747 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 8 19:56:48.960757 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 8 19:56:48.960767 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 8 19:56:48.960778 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 8 19:56:48.960789 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 8 19:56:48.960800 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 8 19:56:48.960816 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 8 19:56:48.960832 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 8 19:56:48.960842 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 8 19:56:48.960850 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 8 19:56:48.960857 kernel: Freeing SMP alternatives memory: 32K
Oct 8 19:56:48.960865 kernel: pid_max: default: 32768 minimum: 301
Oct 8 19:56:48.960872 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 8 19:56:48.960883 kernel: landlock: Up and running.
Oct 8 19:56:48.960893 kernel: SELinux: Initializing.
Oct 8 19:56:48.960904 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:56:48.960918 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:56:48.960928 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 8 19:56:48.960937 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:56:48.960945 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:56:48.960952 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:56:48.960960 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 8 19:56:48.960971 kernel: ... version: 0
Oct 8 19:56:48.960982 kernel: ... bit width: 48
Oct 8 19:56:48.960995 kernel: ... generic registers: 6
Oct 8 19:56:48.961006 kernel: ... value mask: 0000ffffffffffff
Oct 8 19:56:48.961016 kernel: ... max period: 00007fffffffffff
Oct 8 19:56:48.961027 kernel: ... fixed-purpose events: 0
Oct 8 19:56:48.961037 kernel: ... event mask: 000000000000003f
Oct 8 19:56:48.961048 kernel: signal: max sigframe size: 1776
Oct 8 19:56:48.961058 kernel: rcu: Hierarchical SRCU implementation.
Oct 8 19:56:48.961069 kernel: rcu: Max phase no-delay instances is 400.
Oct 8 19:56:48.961080 kernel: smp: Bringing up secondary CPUs ...
Oct 8 19:56:48.961090 kernel: smpboot: x86: Booting SMP configuration:
Oct 8 19:56:48.961103 kernel: .... node #0, CPUs: #1 #2 #3
Oct 8 19:56:48.961113 kernel: smp: Brought up 1 node, 4 CPUs
Oct 8 19:56:48.961123 kernel: smpboot: Max logical packages: 1
Oct 8 19:56:48.961133 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Oct 8 19:56:48.961154 kernel: devtmpfs: initialized
Oct 8 19:56:48.961164 kernel: x86/mm: Memory block size: 128MB
Oct 8 19:56:48.961174 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Oct 8 19:56:48.961184 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Oct 8 19:56:48.961195 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Oct 8 19:56:48.961209 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Oct 8 19:56:48.961219 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Oct 8 19:56:48.961229 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 8 19:56:48.961239 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 8 19:56:48.961250 kernel: pinctrl core: initialized pinctrl subsystem
Oct 8 19:56:48.961260 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 8 19:56:48.961270 kernel: audit: initializing netlink subsys (disabled)
Oct 8 19:56:48.961280 kernel: audit: type=2000 audit(1728417408.332:1): state=initialized audit_enabled=0 res=1
Oct 8 19:56:48.961290 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 8 19:56:48.961304 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 8 19:56:48.961314 kernel: cpuidle: using governor menu
Oct 8 19:56:48.961325 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 8 19:56:48.961336 kernel: dca service started, version 1.12.1
Oct 8 19:56:48.961346 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Oct 8 19:56:48.961356 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Oct 8 19:56:48.961367 kernel: PCI: Using configuration type 1 for base access
Oct 8 19:56:48.961377 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 8 19:56:48.961391 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 8 19:56:48.961402 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 8 19:56:48.961412 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 8 19:56:48.961423 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 8 19:56:48.961434 kernel: ACPI: Added _OSI(Module Device)
Oct 8 19:56:48.961444 kernel: ACPI: Added _OSI(Processor Device)
Oct 8 19:56:48.961471 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 8 19:56:48.961481 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 8 19:56:48.961492 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 8 19:56:48.961511 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 8 19:56:48.961521 kernel: ACPI: Interpreter enabled
Oct 8 19:56:48.961531 kernel: ACPI: PM: (supports S0 S3 S5)
Oct 8 19:56:48.961542 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 8 19:56:48.961552 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 8 19:56:48.961563 kernel: PCI: Using E820 reservations for host bridge windows
Oct 8 19:56:48.961574 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 8 19:56:48.961585 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 8 19:56:48.961873 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 8 19:56:48.962041 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 8 19:56:48.962203 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 8 19:56:48.962218 kernel: PCI host bridge to bus 0000:00
Oct 8 19:56:48.962375 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 8 19:56:48.962532 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 8 19:56:48.962663 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 8 19:56:48.962827 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Oct 8 19:56:48.962957 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 8 19:56:48.963084 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Oct 8 19:56:48.963225 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 8 19:56:48.963397 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Oct 8 19:56:48.963574 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Oct 8 19:56:48.963720 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Oct 8 19:56:48.963868 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Oct 8 19:56:48.964009 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Oct 8 19:56:48.964161 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Oct 8 19:56:48.964305 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 8 19:56:48.964487 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Oct 8 19:56:48.964645 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Oct 8 19:56:48.964793 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Oct 8 19:56:48.964959 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Oct 8 19:56:48.965124 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Oct 8 19:56:48.965275 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Oct 8 19:56:48.965400 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Oct 8 19:56:48.965560 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Oct 8 19:56:48.965732 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Oct 8 19:56:48.965897 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Oct 8 19:56:48.966030 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Oct 8 19:56:48.966177 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Oct 8 19:56:48.966302 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Oct 8 19:56:48.966446 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Oct 8 19:56:48.966636 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 8 19:56:48.966788 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Oct 8 19:56:48.966919 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Oct 8 19:56:48.967089 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Oct 8 19:56:48.967353 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Oct 8 19:56:48.967563 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Oct 8 19:56:48.967575 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 8 19:56:48.967583 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 8 19:56:48.967590 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 8 19:56:48.967597 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 8 19:56:48.967610 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 8 19:56:48.967617 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 8 19:56:48.967625 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 8 19:56:48.967632 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 8 19:56:48.967640 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 8 19:56:48.967647 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 8 19:56:48.967654 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 8 19:56:48.967662 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 8 19:56:48.967669 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 8 19:56:48.967679 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 8 19:56:48.967687 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 8 19:56:48.967694 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 8 19:56:48.967701 kernel: iommu: Default domain type: Translated
Oct 8 19:56:48.967709 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 8 19:56:48.967717 kernel: efivars: Registered efivars operations
Oct 8 19:56:48.967724 kernel: PCI: Using ACPI for IRQ routing
Oct 8 19:56:48.967732 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 8 19:56:48.967739 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Oct 8 19:56:48.967750 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Oct 8 19:56:48.967757 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Oct 8 19:56:48.967764 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Oct 8 19:56:48.967883 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 8 19:56:48.968002 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 8 19:56:48.968119 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 8 19:56:48.968129 kernel: vgaarb: loaded
Oct 8 19:56:48.968137 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 8 19:56:48.968153 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 8 19:56:48.968165 kernel: clocksource: Switched to clocksource kvm-clock
Oct 8 19:56:48.968176 kernel: VFS: Disk quotas dquot_6.6.0
Oct 8 19:56:48.968186 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 8 19:56:48.968196 kernel: pnp: PnP ACPI init
Oct 8 19:56:48.968379 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct 8 19:56:48.968399 kernel: pnp: PnP ACPI: found 6 devices
Oct 8 19:56:48.968411 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 8 19:56:48.968421 kernel: NET: Registered PF_INET protocol family
Oct 8 19:56:48.968437 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 8 19:56:48.968446 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 8 19:56:48.968491 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 8 19:56:48.968499 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 8 19:56:48.968507 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 8 19:56:48.968514 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 8 19:56:48.968522 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:56:48.968529 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:56:48.968540 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 8 19:56:48.968547 kernel: NET: Registered PF_XDP protocol family
Oct 8 19:56:48.968697 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Oct 8 19:56:48.968844 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Oct 8 19:56:48.968987 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 8 19:56:48.969107 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 8 19:56:48.969255 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 8 19:56:48.969413 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Oct 8 19:56:48.969582 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct 8 19:56:48.969719 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Oct 8 19:56:48.969734 kernel: PCI: CLS 0 bytes, default 64
Oct 8 19:56:48.969744 kernel: Initialise system trusted keyrings
Oct 8 19:56:48.969754 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 8 19:56:48.969764 kernel: Key type asymmetric registered
Oct 8 19:56:48.969774 kernel: Asymmetric key parser 'x509' registered
Oct 8 19:56:48.969784 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Oct 8 19:56:48.969794 kernel: io scheduler mq-deadline registered
Oct 8 19:56:48.969813 kernel: io scheduler kyber registered
Oct 8 19:56:48.969827 kernel: io scheduler bfq registered
Oct 8 19:56:48.969839 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 8 19:56:48.969856 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 8 19:56:48.969867 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 8 19:56:48.969877 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 8 19:56:48.969888 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 8 19:56:48.969898 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 8 19:56:48.969908 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 8 19:56:48.969923 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 8 19:56:48.969933 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 8 19:56:48.970094 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 8 19:56:48.970284 kernel: rtc_cmos 00:04: registered as rtc0
Oct 8 19:56:48.970300 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 8 19:56:48.970446 kernel: rtc_cmos 00:04: setting system clock to 2024-10-08T19:56:48 UTC (1728417408)
Oct 8 19:56:48.970631 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Oct 8 19:56:48.970649 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 8 19:56:48.970667 kernel: efifb: probing for efifb
Oct 8 19:56:48.970677 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Oct 8 19:56:48.970684 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Oct 8 19:56:48.970691 kernel: efifb: scrolling: redraw
Oct 8 19:56:48.970699 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Oct 8 19:56:48.970706 kernel: Console: switching to colour frame buffer device 100x37
Oct 8 19:56:48.970748 kernel: fb0: EFI VGA frame buffer device
Oct 8 19:56:48.970758 kernel: pstore: Using crash dump compression: deflate
Oct 8 19:56:48.970766 kernel: pstore: Registered efi_pstore as persistent store backend
Oct 8 19:56:48.970776 kernel: NET: Registered PF_INET6 protocol family
Oct 8 19:56:48.970783 kernel: Segment Routing with IPv6
Oct 8 19:56:48.970791 kernel: In-situ OAM (IOAM) with IPv6
Oct 8 19:56:48.970799 kernel: NET: Registered PF_PACKET protocol family
Oct 8 19:56:48.970806 kernel: Key type dns_resolver registered
Oct 8 19:56:48.970814 kernel: IPI shorthand broadcast: enabled
Oct 8 19:56:48.970824 kernel: sched_clock: Marking stable (684005060, 177921249)->(890742460, -28816151)
Oct 8 19:56:48.970831 kernel: registered taskstats version 1
Oct 8 19:56:48.970839 kernel: Loading compiled-in X.509 certificates
Oct 8 19:56:48.970849 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 14ce23fc5070d0471461f1dd6e298a5588e7ba8f'
Oct 8 19:56:48.970857 kernel: Key type .fscrypt registered
Oct 8 19:56:48.970865 kernel: Key type fscrypt-provisioning registered
Oct 8 19:56:48.970872 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 8 19:56:48.970880 kernel: ima: Allocated hash algorithm: sha1
Oct 8 19:56:48.970887 kernel: ima: No architecture policies found
Oct 8 19:56:48.970895 kernel: clk: Disabling unused clocks
Oct 8 19:56:48.970902 kernel: Freeing unused kernel image (initmem) memory: 42828K
Oct 8 19:56:48.970910 kernel: Write protecting the kernel read-only data: 36864k
Oct 8 19:56:48.970920 kernel: Freeing unused kernel image (rodata/data gap) memory: 1860K
Oct 8 19:56:48.970927 kernel: Run /init as init process
Oct 8 19:56:48.970936 kernel: with arguments:
Oct 8 19:56:48.970947 kernel: /init
Oct 8 19:56:48.970956 kernel: with environment:
Oct 8 19:56:48.970964 kernel: HOME=/
Oct 8 19:56:48.970971 kernel: TERM=linux
Oct 8 19:56:48.970979 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 8 19:56:48.970989 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:56:48.971002 systemd[1]: Detected virtualization kvm.
Oct 8 19:56:48.971011 systemd[1]: Detected architecture x86-64.
Oct 8 19:56:48.971019 systemd[1]: Running in initrd.
Oct 8 19:56:48.971029 systemd[1]: No hostname configured, using default hostname.
Oct 8 19:56:48.971039 systemd[1]: Hostname set to .
Oct 8 19:56:48.971047 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:56:48.971056 systemd[1]: Queued start job for default target initrd.target.
Oct 8 19:56:48.971064 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:56:48.971072 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:56:48.971081 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 8 19:56:48.971090 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 8 19:56:48.971098 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 8 19:56:48.971109 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 8 19:56:48.971119 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 8 19:56:48.971127 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 8 19:56:48.971135 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 19:56:48.971159 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 8 19:56:48.971167 systemd[1]: Reached target paths.target - Path Units. Oct 8 19:56:48.971178 systemd[1]: Reached target slices.target - Slice Units. Oct 8 19:56:48.971186 systemd[1]: Reached target swap.target - Swaps. Oct 8 19:56:48.971194 systemd[1]: Reached target timers.target - Timer Units. Oct 8 19:56:48.971202 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 19:56:48.971210 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 19:56:48.971218 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 8 19:56:48.971226 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 8 19:56:48.971235 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 8 19:56:48.971243 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 8 19:56:48.971253 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 19:56:48.971262 systemd[1]: Reached target sockets.target - Socket Units. Oct 8 19:56:48.971274 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Oct 8 19:56:48.971283 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 8 19:56:48.971291 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 8 19:56:48.971299 systemd[1]: Starting systemd-fsck-usr.service... Oct 8 19:56:48.971307 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 8 19:56:48.971316 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 8 19:56:48.971324 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 19:56:48.971336 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 8 19:56:48.971355 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 19:56:48.971366 systemd[1]: Finished systemd-fsck-usr.service. Oct 8 19:56:48.971402 systemd-journald[193]: Collecting audit messages is disabled. Oct 8 19:56:48.971426 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 8 19:56:48.971435 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 19:56:48.971444 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 19:56:48.971466 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 8 19:56:48.971498 systemd-journald[193]: Journal started Oct 8 19:56:48.971517 systemd-journald[193]: Runtime Journal (/run/log/journal/e12c5ff2b8734adfbbd56f9246063509) is 6.0M, max 48.3M, 42.2M free. Oct 8 19:56:48.972945 systemd-modules-load[194]: Inserted module 'overlay' Oct 8 19:56:48.975062 systemd[1]: Started systemd-journald.service - Journal Service. Oct 8 19:56:48.989989 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 8 19:56:48.992373 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Oct 8 19:56:48.998384 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 19:56:49.002644 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 8 19:56:49.012484 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 8 19:56:49.013477 kernel: Bridge firewalling registered Oct 8 19:56:49.013559 systemd-modules-load[194]: Inserted module 'br_netfilter' Oct 8 19:56:49.015586 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 8 19:56:49.017264 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 19:56:49.019869 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 8 19:56:49.025036 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 19:56:49.033002 dracut-cmdline[219]: dracut-dracut-053 Oct 8 19:56:49.036988 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 8 19:56:49.039310 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 19:56:49.052676 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 8 19:56:49.107153 systemd-resolved[248]: Positive Trust Anchors: Oct 8 19:56:49.107182 systemd-resolved[248]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 8 19:56:49.107216 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 8 19:56:49.111122 systemd-resolved[248]: Defaulting to hostname 'linux'. Oct 8 19:56:49.112812 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 8 19:56:49.119249 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 8 19:56:49.152494 kernel: SCSI subsystem initialized Oct 8 19:56:49.167497 kernel: Loading iSCSI transport class v2.0-870. Oct 8 19:56:49.185533 kernel: iscsi: registered transport (tcp) Oct 8 19:56:49.211493 kernel: iscsi: registered transport (qla4xxx) Oct 8 19:56:49.211580 kernel: QLogic iSCSI HBA Driver Oct 8 19:56:49.279987 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 8 19:56:49.294298 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 8 19:56:49.330667 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Oct 8 19:56:49.330751 kernel: device-mapper: uevent: version 1.0.3 Oct 8 19:56:49.331983 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 8 19:56:49.379511 kernel: raid6: avx2x4 gen() 22584 MB/s Oct 8 19:56:49.416537 kernel: raid6: avx2x2 gen() 28012 MB/s Oct 8 19:56:49.433898 kernel: raid6: avx2x1 gen() 17563 MB/s Oct 8 19:56:49.433975 kernel: raid6: using algorithm avx2x2 gen() 28012 MB/s Oct 8 19:56:49.459496 kernel: raid6: .... xor() 16438 MB/s, rmw enabled Oct 8 19:56:49.459580 kernel: raid6: using avx2x2 recovery algorithm Oct 8 19:56:49.488512 kernel: xor: automatically using best checksumming function avx Oct 8 19:56:49.656518 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 8 19:56:49.669788 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 8 19:56:49.678744 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 19:56:49.699558 systemd-udevd[414]: Using default interface naming scheme 'v255'. Oct 8 19:56:49.705473 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 19:56:49.721655 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 8 19:56:49.738979 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation Oct 8 19:56:49.773801 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 8 19:56:49.790710 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 19:56:49.861064 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 19:56:49.954519 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 8 19:56:49.954779 kernel: cryptd: max_cpu_qlen set to 1000 Oct 8 19:56:49.956487 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Oct 8 19:56:49.966869 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Oct 8 19:56:49.966936 kernel: GPT:9289727 != 19775487 Oct 8 19:56:49.966950 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 8 19:56:49.966962 kernel: GPT:9289727 != 19775487 Oct 8 19:56:49.966975 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 8 19:56:49.968707 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 8 19:56:49.968776 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 8 19:56:49.969158 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 19:56:49.972618 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 19:56:49.981517 kernel: AVX2 version of gcm_enc/dec engaged. Oct 8 19:56:49.981583 kernel: libata version 3.00 loaded. Oct 8 19:56:49.982656 kernel: AES CTR mode by8 optimization enabled Oct 8 19:56:49.985597 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 8 19:56:49.985709 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 19:56:49.985934 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 19:56:49.991950 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 19:56:49.998175 kernel: ahci 0000:00:1f.2: version 3.0 Oct 8 19:56:49.998397 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 8 19:56:49.999009 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Oct 8 19:56:50.001604 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 8 19:56:50.006533 kernel: scsi host0: ahci Oct 8 19:56:50.006804 kernel: scsi host1: ahci Oct 8 19:56:50.005823 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Oct 8 19:56:50.015464 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (468) Oct 8 19:56:50.015494 kernel: scsi host2: ahci Oct 8 19:56:50.015859 kernel: BTRFS: device fsid a8680da2-059a-4648-a8e8-f62925ab33ec devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (474) Oct 8 19:56:50.015876 kernel: scsi host3: ahci Oct 8 19:56:50.016056 kernel: scsi host4: ahci Oct 8 19:56:50.011207 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 8 19:56:50.023064 kernel: scsi host5: ahci Oct 8 19:56:50.023312 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Oct 8 19:56:50.023329 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Oct 8 19:56:50.023342 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Oct 8 19:56:50.023355 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Oct 8 19:56:50.023368 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Oct 8 19:56:50.023394 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Oct 8 19:56:50.035282 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 8 19:56:50.038167 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 19:56:50.053530 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 8 19:56:50.093563 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 8 19:56:50.100421 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 8 19:56:50.100557 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 8 19:56:50.103901 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Oct 8 19:56:50.104219 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 19:56:50.104755 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 19:56:50.124616 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 8 19:56:50.126580 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 19:56:50.128973 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 8 19:56:50.145064 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 8 19:56:50.152619 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 19:56:50.245215 disk-uuid[549]: Primary Header is updated. Oct 8 19:56:50.245215 disk-uuid[549]: Secondary Entries is updated. Oct 8 19:56:50.245215 disk-uuid[549]: Secondary Header is updated. Oct 8 19:56:50.275492 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 8 19:56:50.280496 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 8 19:56:50.333476 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 8 19:56:50.333530 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 8 19:56:50.333540 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 8 19:56:50.334674 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 8 19:56:50.335500 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 8 19:56:50.336472 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 8 19:56:50.337943 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 8 19:56:50.337965 kernel: ata3.00: applying bridge limits Oct 8 19:56:50.339495 kernel: ata3.00: configured for UDMA/100 Oct 8 19:56:50.341491 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 8 19:56:50.392496 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 8 19:56:50.392831 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 8 
19:56:50.411499 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 8 19:56:51.282480 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 8 19:56:51.282868 disk-uuid[564]: The operation has completed successfully. Oct 8 19:56:51.314141 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 8 19:56:51.314311 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 8 19:56:51.346761 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 8 19:56:51.350849 sh[592]: Success Oct 8 19:56:51.364483 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Oct 8 19:56:51.401886 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 8 19:56:51.421477 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 8 19:56:51.423795 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 8 19:56:51.439009 kernel: BTRFS info (device dm-0): first mount of filesystem a8680da2-059a-4648-a8e8-f62925ab33ec Oct 8 19:56:51.439081 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 8 19:56:51.439097 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 8 19:56:51.440307 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 8 19:56:51.442023 kernel: BTRFS info (device dm-0): using free space tree Oct 8 19:56:51.446214 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 8 19:56:51.447980 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 8 19:56:51.462701 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 8 19:56:51.464762 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Oct 8 19:56:51.478710 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 19:56:51.478769 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 8 19:56:51.478780 kernel: BTRFS info (device vda6): using free space tree Oct 8 19:56:51.482494 kernel: BTRFS info (device vda6): auto enabling async discard Oct 8 19:56:51.492557 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 8 19:56:51.494593 kernel: BTRFS info (device vda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 19:56:51.503944 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 8 19:56:51.511667 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 8 19:56:51.571106 ignition[688]: Ignition 2.19.0 Oct 8 19:56:51.571119 ignition[688]: Stage: fetch-offline Oct 8 19:56:51.571173 ignition[688]: no configs at "/usr/lib/ignition/base.d" Oct 8 19:56:51.571184 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 19:56:51.571282 ignition[688]: parsed url from cmdline: "" Oct 8 19:56:51.571285 ignition[688]: no config URL provided Oct 8 19:56:51.571291 ignition[688]: reading system config file "/usr/lib/ignition/user.ign" Oct 8 19:56:51.571300 ignition[688]: no config at "/usr/lib/ignition/user.ign" Oct 8 19:56:51.571327 ignition[688]: op(1): [started] loading QEMU firmware config module Oct 8 19:56:51.571333 ignition[688]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 8 19:56:51.581110 ignition[688]: op(1): [finished] loading QEMU firmware config module Oct 8 19:56:51.599451 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 19:56:51.608609 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Oct 8 19:56:51.625189 ignition[688]: parsing config with SHA512: d940d028b18e7f7ba576d47ca4de1e39e7a8c65d9680030fba3d98b0c922ae9c3f7b5214bf954c8c928c2fa253d1aa001ab2118f2ecb929270cb8524ce6d0933 Oct 8 19:56:51.631275 systemd-networkd[782]: lo: Link UP Oct 8 19:56:51.631288 systemd-networkd[782]: lo: Gained carrier Oct 8 19:56:51.631617 unknown[688]: fetched base config from "system" Oct 8 19:56:51.632418 ignition[688]: fetch-offline: fetch-offline passed Oct 8 19:56:51.631629 unknown[688]: fetched user config from "qemu" Oct 8 19:56:51.632547 ignition[688]: Ignition finished successfully Oct 8 19:56:51.632955 systemd-networkd[782]: Enumeration completed Oct 8 19:56:51.633147 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 8 19:56:51.633497 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:56:51.633501 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 19:56:51.634541 systemd-networkd[782]: eth0: Link UP Oct 8 19:56:51.634544 systemd-networkd[782]: eth0: Gained carrier Oct 8 19:56:51.634551 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:56:51.636146 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 19:56:51.638805 systemd[1]: Reached target network.target - Network. Oct 8 19:56:51.640017 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 8 19:56:51.647558 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.67/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 8 19:56:51.647726 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Oct 8 19:56:51.664550 ignition[786]: Ignition 2.19.0 Oct 8 19:56:51.664563 ignition[786]: Stage: kargs Oct 8 19:56:51.664784 ignition[786]: no configs at "/usr/lib/ignition/base.d" Oct 8 19:56:51.664797 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 19:56:51.665822 ignition[786]: kargs: kargs passed Oct 8 19:56:51.665875 ignition[786]: Ignition finished successfully Oct 8 19:56:51.671104 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 8 19:56:51.688855 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 8 19:56:51.704259 ignition[796]: Ignition 2.19.0 Oct 8 19:56:51.704271 ignition[796]: Stage: disks Oct 8 19:56:51.704438 ignition[796]: no configs at "/usr/lib/ignition/base.d" Oct 8 19:56:51.704463 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 19:56:51.708442 ignition[796]: disks: disks passed Oct 8 19:56:51.709166 ignition[796]: Ignition finished successfully Oct 8 19:56:51.712215 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 8 19:56:51.712560 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 8 19:56:51.712868 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 8 19:56:51.713205 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 8 19:56:51.713748 systemd[1]: Reached target sysinit.target - System Initialization. Oct 8 19:56:51.714083 systemd[1]: Reached target basic.target - Basic System. Oct 8 19:56:51.728699 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 8 19:56:51.742491 systemd-resolved[248]: Detected conflict on linux IN A 10.0.0.67 Oct 8 19:56:51.742509 systemd-resolved[248]: Hostname conflict, changing published hostname from 'linux' to 'linux5'. 
Oct 8 19:56:51.744303 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks Oct 8 19:56:51.751149 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 8 19:56:51.765564 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 8 19:56:51.859479 kernel: EXT4-fs (vda9): mounted filesystem 1df90f14-3ad0-4280-9b7d-a34f65d70e4d r/w with ordered data mode. Quota mode: none. Oct 8 19:56:51.859697 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 8 19:56:51.860375 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 8 19:56:51.867680 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 8 19:56:51.870248 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 8 19:56:51.872320 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 8 19:56:51.872374 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 8 19:56:51.872402 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 19:56:51.880891 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (814) Oct 8 19:56:51.883487 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 19:56:51.883529 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 8 19:56:51.883544 kernel: BTRFS info (device vda6): using free space tree Oct 8 19:56:51.887480 kernel: BTRFS info (device vda6): auto enabling async discard Oct 8 19:56:51.889717 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 8 19:56:51.889941 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 8 19:56:51.893952 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Oct 8 19:56:51.935887 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Oct 8 19:56:51.942155 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Oct 8 19:56:51.948375 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Oct 8 19:56:51.954391 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Oct 8 19:56:52.044381 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 8 19:56:52.057651 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 8 19:56:52.060825 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 8 19:56:52.075470 kernel: BTRFS info (device vda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 19:56:52.096811 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 8 19:56:52.124070 ignition[930]: INFO : Ignition 2.19.0 Oct 8 19:56:52.124070 ignition[930]: INFO : Stage: mount Oct 8 19:56:52.149137 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 19:56:52.149137 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 19:56:52.151604 ignition[930]: INFO : mount: mount passed Oct 8 19:56:52.152375 ignition[930]: INFO : Ignition finished successfully Oct 8 19:56:52.155910 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 8 19:56:52.168590 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 8 19:56:52.437903 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 8 19:56:52.450737 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Oct 8 19:56:52.459242 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (939) Oct 8 19:56:52.459277 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 19:56:52.459288 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 8 19:56:52.460776 kernel: BTRFS info (device vda6): using free space tree Oct 8 19:56:52.463477 kernel: BTRFS info (device vda6): auto enabling async discard Oct 8 19:56:52.465361 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 8 19:56:52.493219 ignition[956]: INFO : Ignition 2.19.0 Oct 8 19:56:52.493219 ignition[956]: INFO : Stage: files Oct 8 19:56:52.495181 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 19:56:52.495181 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 19:56:52.495181 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Oct 8 19:56:52.499133 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 8 19:56:52.499133 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 8 19:56:52.499133 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 8 19:56:52.499133 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 8 19:56:52.499133 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 8 19:56:52.498829 unknown[956]: wrote ssh authorized keys file for user: core Oct 8 19:56:52.507445 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 8 19:56:52.507445 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Oct 8 19:56:52.539624 ignition[956]: INFO : 
files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 8 19:56:52.615412 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 8 19:56:52.615412 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 8 19:56:52.619694 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Oct 8 19:56:52.899702 systemd-networkd[782]: eth0: Gained IPv6LL Oct 8 19:56:53.105731 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 8 19:56:53.192359 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 8 19:56:53.192359 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Oct 8 19:56:53.196635 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Oct 8 19:56:53.196635 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 8 19:56:53.196635 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 8 19:56:53.196635 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 19:56:53.196635 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 19:56:53.196635 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 19:56:53.196635 
ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:56:53.196635 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:56:53.196635 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:56:53.196635 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Oct 8 19:56:53.196635 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Oct 8 19:56:53.196635 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Oct 8 19:56:53.196635 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Oct 8 19:56:53.612554 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Oct 8 19:56:53.905549 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Oct 8 19:56:53.905549 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Oct 8 19:56:53.905549 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:56:53.905549 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:56:53.905549 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Oct 8 19:56:53.905549 ignition[956]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Oct 8 19:56:53.905549 ignition[956]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 8 19:56:53.918345 ignition[956]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 8 19:56:53.918345 ignition[956]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Oct 8 19:56:53.918345 ignition[956]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Oct 8 19:56:53.945868 ignition[956]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 8 19:56:53.951214 ignition[956]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 8 19:56:53.953324 ignition[956]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 8 19:56:53.953324 ignition[956]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Oct 8 19:56:53.956793 ignition[956]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Oct 8 19:56:53.958662 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:56:53.960941 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:56:53.963058 ignition[956]: INFO : files: files passed
Oct 8 19:56:53.964027 ignition[956]: INFO : Ignition finished successfully
Oct 8 19:56:53.967586 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 8 19:56:53.973771 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 8 19:56:53.977583 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 8 19:56:53.980798 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 8 19:56:53.982041 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 8 19:56:53.987699 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 8 19:56:53.991797 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:56:53.991797 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:56:53.995640 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:56:53.998836 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:56:54.002602 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 8 19:56:54.014676 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 8 19:56:54.038481 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 8 19:56:54.039615 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 8 19:56:54.042255 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 8 19:56:54.044331 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 8 19:56:54.046379 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 8 19:56:54.061677 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 8 19:56:54.085730 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:56:54.089737 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 8 19:56:54.103706 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:56:54.105060 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:56:54.107403 systemd[1]: Stopped target timers.target - Timer Units.
Oct 8 19:56:54.109532 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 8 19:56:54.109650 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:56:54.112111 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 8 19:56:54.113765 systemd[1]: Stopped target basic.target - Basic System.
Oct 8 19:56:54.130890 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 8 19:56:54.132946 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:56:54.135025 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 8 19:56:54.137315 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 8 19:56:54.139514 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:56:54.141814 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 8 19:56:54.143911 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 8 19:56:54.146205 systemd[1]: Stopped target swap.target - Swaps.
Oct 8 19:56:54.148076 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 8 19:56:54.148199 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:56:54.150642 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:56:54.152101 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:56:54.154199 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 8 19:56:54.154349 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:56:54.156478 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 8 19:56:54.156590 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:56:54.159027 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 8 19:56:54.159136 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:56:54.160989 systemd[1]: Stopped target paths.target - Path Units.
Oct 8 19:56:54.162768 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 8 19:56:54.162949 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:56:54.165667 systemd[1]: Stopped target slices.target - Slice Units.
Oct 8 19:56:54.167660 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 8 19:56:54.169899 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 8 19:56:54.170056 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:56:54.172160 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 8 19:56:54.172303 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:56:54.175988 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 8 19:56:54.176132 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:56:54.178137 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 8 19:56:54.178245 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 8 19:56:54.188670 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 8 19:56:54.190450 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 8 19:56:54.190624 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:56:54.194593 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 8 19:56:54.196464 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 8 19:56:54.196745 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:56:54.206094 ignition[1012]: INFO : Ignition 2.19.0
Oct 8 19:56:54.206094 ignition[1012]: INFO : Stage: umount
Oct 8 19:56:54.206094 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:56:54.206094 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 8 19:56:54.206094 ignition[1012]: INFO : umount: umount passed
Oct 8 19:56:54.206094 ignition[1012]: INFO : Ignition finished successfully
Oct 8 19:56:54.199148 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 8 19:56:54.199337 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:56:54.207180 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 8 19:56:54.207335 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 8 19:56:54.209754 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 8 19:56:54.209885 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 8 19:56:54.213309 systemd[1]: Stopped target network.target - Network.
Oct 8 19:56:54.215058 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 8 19:56:54.215128 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 8 19:56:54.217269 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 8 19:56:54.217330 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 8 19:56:54.219550 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 8 19:56:54.219631 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 8 19:56:54.221581 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 8 19:56:54.221637 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 8 19:56:54.223929 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 8 19:56:54.227034 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 8 19:56:54.230549 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 8 19:56:54.232517 systemd-networkd[782]: eth0: DHCPv6 lease lost
Oct 8 19:56:54.235788 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 8 19:56:54.235958 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 8 19:56:54.238390 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 8 19:56:54.238476 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:56:54.245572 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 8 19:56:54.246962 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 8 19:56:54.247041 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:56:54.248667 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:56:54.251005 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 8 19:56:54.251160 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 8 19:56:54.266909 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 8 19:56:54.267147 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:56:54.281933 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 8 19:56:54.282019 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:56:54.282277 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 8 19:56:54.282315 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:56:54.282796 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 8 19:56:54.282846 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:56:54.283523 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 8 19:56:54.283570 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:56:54.284343 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:56:54.284389 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:56:54.316649 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 8 19:56:54.317922 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 8 19:56:54.317983 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:56:54.320584 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 8 19:56:54.320632 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:56:54.322898 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 8 19:56:54.322946 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:56:54.325507 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 8 19:56:54.325557 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:56:54.326570 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:56:54.326624 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:56:54.330279 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 8 19:56:54.330385 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 8 19:56:54.341685 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 8 19:56:54.341832 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 8 19:56:54.824311 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 8 19:56:54.824507 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 8 19:56:54.826878 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 8 19:56:54.830019 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 8 19:56:54.830128 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 8 19:56:54.842634 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 8 19:56:54.851850 systemd[1]: Switching root.
Oct 8 19:56:54.891020 systemd-journald[193]: Journal stopped
Oct 8 19:56:56.381471 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Oct 8 19:56:56.381543 kernel: SELinux: policy capability network_peer_controls=1
Oct 8 19:56:56.381562 kernel: SELinux: policy capability open_perms=1
Oct 8 19:56:56.381578 kernel: SELinux: policy capability extended_socket_class=1
Oct 8 19:56:56.381593 kernel: SELinux: policy capability always_check_network=0
Oct 8 19:56:56.381604 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 8 19:56:56.381620 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 8 19:56:56.381635 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 8 19:56:56.381656 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 8 19:56:56.381667 kernel: audit: type=1403 audit(1728417415.459:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 8 19:56:56.381684 systemd[1]: Successfully loaded SELinux policy in 40.006ms.
Oct 8 19:56:56.381699 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.820ms.
Oct 8 19:56:56.381712 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:56:56.381724 systemd[1]: Detected virtualization kvm.
Oct 8 19:56:56.381737 systemd[1]: Detected architecture x86-64.
Oct 8 19:56:56.381749 systemd[1]: Detected first boot.
Oct 8 19:56:56.381760 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:56:56.381775 zram_generator::config[1056]: No configuration found.
Oct 8 19:56:56.381788 systemd[1]: Populated /etc with preset unit settings.
Oct 8 19:56:56.381800 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 8 19:56:56.381811 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 8 19:56:56.381823 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 8 19:56:56.381836 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 8 19:56:56.381848 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 8 19:56:56.381860 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 8 19:56:56.381877 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 8 19:56:56.381889 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 8 19:56:56.381901 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 8 19:56:56.381913 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 8 19:56:56.381925 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 8 19:56:56.381945 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:56:56.381957 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:56:56.381969 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 8 19:56:56.381980 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 8 19:56:56.381996 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 8 19:56:56.382009 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:56:56.382022 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 8 19:56:56.382034 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:56:56.382045 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 8 19:56:56.382056 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 8 19:56:56.382069 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:56:56.382080 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 8 19:56:56.382098 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:56:56.382110 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:56:56.382122 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:56:56.382133 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:56:56.382145 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 8 19:56:56.382157 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 8 19:56:56.382169 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:56:56.382181 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:56:56.382193 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:56:56.382207 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 8 19:56:56.382219 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 8 19:56:56.382230 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 8 19:56:56.382242 systemd[1]: Mounting media.mount - External Media Directory...
Oct 8 19:56:56.382254 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:56:56.382267 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 8 19:56:56.382279 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 8 19:56:56.382291 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 8 19:56:56.382303 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 8 19:56:56.382318 systemd[1]: Reached target machines.target - Containers.
Oct 8 19:56:56.382329 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 8 19:56:56.382341 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:56:56.382353 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:56:56.382365 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 8 19:56:56.382377 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:56:56.382388 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:56:56.382400 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:56:56.382416 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 8 19:56:56.382429 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:56:56.382440 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 8 19:56:56.382484 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 8 19:56:56.382499 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 8 19:56:56.382510 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 8 19:56:56.382522 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 8 19:56:56.382533 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:56:56.382545 kernel: loop: module loaded
Oct 8 19:56:56.382580 systemd-journald[1119]: Collecting audit messages is disabled.
Oct 8 19:56:56.382608 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:56:56.382620 systemd-journald[1119]: Journal started
Oct 8 19:56:56.382641 systemd-journald[1119]: Runtime Journal (/run/log/journal/e12c5ff2b8734adfbbd56f9246063509) is 6.0M, max 48.3M, 42.2M free.
Oct 8 19:56:56.084919 systemd[1]: Queued start job for default target multi-user.target.
Oct 8 19:56:56.100179 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 8 19:56:56.100672 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 8 19:56:56.401492 kernel: fuse: init (API version 7.39)
Oct 8 19:56:56.403468 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 8 19:56:56.409474 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 8 19:56:56.416494 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:56:56.418869 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 8 19:56:56.418924 systemd[1]: Stopped verity-setup.service.
Oct 8 19:56:56.424350 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:56:56.424401 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:56:56.426372 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 8 19:56:56.427969 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 8 19:56:56.429336 systemd[1]: Mounted media.mount - External Media Directory.
Oct 8 19:56:56.430527 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 8 19:56:56.431796 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 8 19:56:56.433034 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 8 19:56:56.434763 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:56:56.436346 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 8 19:56:56.436548 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 8 19:56:56.438148 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:56:56.438322 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:56:56.439771 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:56:56.439955 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:56:56.441447 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 8 19:56:56.441730 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 8 19:56:56.443081 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:56:56.443249 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:56:56.444790 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:56:56.446241 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 8 19:56:56.447773 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 8 19:56:56.461401 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 8 19:56:56.465472 kernel: ACPI: bus type drm_connector registered
Oct 8 19:56:56.471676 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 8 19:56:56.474119 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 8 19:56:56.475378 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 8 19:56:56.475415 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:56:56.477483 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 8 19:56:56.479865 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 8 19:56:56.482191 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 8 19:56:56.483314 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:56:56.496952 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 8 19:56:56.508330 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 8 19:56:56.509785 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:56:56.511264 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 8 19:56:56.512948 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:56:56.531318 systemd-journald[1119]: Time spent on flushing to /var/log/journal/e12c5ff2b8734adfbbd56f9246063509 is 18.814ms for 990 entries.
Oct 8 19:56:56.531318 systemd-journald[1119]: System Journal (/var/log/journal/e12c5ff2b8734adfbbd56f9246063509) is 8.0M, max 195.6M, 187.6M free.
Oct 8 19:56:56.615559 systemd-journald[1119]: Received client request to flush runtime journal.
Oct 8 19:56:56.615604 kernel: loop0: detected capacity change from 0 to 142488
Oct 8 19:56:56.528724 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:56:56.532707 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 8 19:56:56.536034 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:56:56.536203 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:56:56.540154 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 8 19:56:56.541673 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 8 19:56:56.543325 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 8 19:56:56.591378 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 8 19:56:56.593340 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 8 19:56:56.604786 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 8 19:56:56.606729 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 8 19:56:56.617980 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 8 19:56:56.619841 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 8 19:56:56.639827 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 8 19:56:56.641279 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:56:56.651381 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 8 19:56:56.652245 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 8 19:56:56.667495 kernel: loop1: detected capacity change from 0 to 140768
Oct 8 19:56:56.679053 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 8 19:56:56.689808 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:56:56.692141 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:56:56.697858 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 8 19:56:56.761933 udevadm[1193]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Oct 8 19:56:56.766857 kernel: loop2: detected capacity change from 0 to 210664
Oct 8 19:56:56.813421 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Oct 8 19:56:56.813441 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Oct 8 19:56:56.825541 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:56:56.827684 kernel: loop3: detected capacity change from 0 to 142488
Oct 8 19:56:56.849478 kernel: loop4: detected capacity change from 0 to 140768
Oct 8 19:56:56.858486 kernel: loop5: detected capacity change from 0 to 210664
Oct 8 19:56:56.864324 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Oct 8 19:56:56.865583 (sd-merge)[1196]: Merged extensions into '/usr'.
Oct 8 19:56:56.895585 systemd[1]: Reloading requested from client PID 1162 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 8 19:56:56.895605 systemd[1]: Reloading...
Oct 8 19:56:56.973491 zram_generator::config[1219]: No configuration found.
Oct 8 19:56:57.118123 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:56:57.153675 ldconfig[1150]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 8 19:56:57.174025 systemd[1]: Reloading finished in 277 ms.
Oct 8 19:56:57.245299 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 8 19:56:57.247200 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 8 19:56:57.261859 systemd[1]: Starting ensure-sysext.service...
Oct 8 19:56:57.264714 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 19:56:57.272330 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)...
Oct 8 19:56:57.272354 systemd[1]: Reloading...
Oct 8 19:56:57.308057 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 8 19:56:57.309060 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 8 19:56:57.310613 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 8 19:56:57.312160 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Oct 8 19:56:57.312349 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Oct 8 19:56:57.324182 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:56:57.324198 systemd-tmpfiles[1261]: Skipping /boot
Oct 8 19:56:57.337677 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:56:57.337692 systemd-tmpfiles[1261]: Skipping /boot
Oct 8 19:56:57.356120 zram_generator::config[1291]: No configuration found.
Oct 8 19:56:57.463282 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:56:57.512988 systemd[1]: Reloading finished in 240 ms.
Oct 8 19:56:57.532532 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:56:57.627404 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 19:56:57.867660 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 8 19:56:57.871173 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 8 19:56:57.876443 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:56:57.915620 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 8 19:56:57.921314 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:56:57.921547 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:56:57.922858 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:56:57.925361 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:56:57.927910 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:56:57.929068 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:56:57.931015 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 8 19:56:57.932091 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:56:57.933116 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:56:57.933308 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:56:57.934953 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:56:57.935139 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:56:57.939667 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:56:57.939904 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:56:57.941356 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:56:57.943619 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:56:57.945007 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:56:57.945184 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:56:57.947516 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:56:57.947779 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:56:57.951702 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:56:57.952894 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:56:57.953020 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 8 19:56:57.953790 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:56:57.954004 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:56:58.009176 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:56:58.009400 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:56:58.011469 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:56:58.011684 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:56:58.013730 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:56:58.013950 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:56:58.018039 systemd[1]: Finished ensure-sysext.service.
Oct 8 19:56:58.023255 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:56:58.023333 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:56:58.033646 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 8 19:56:58.066541 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 8 19:56:58.070548 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 8 19:56:58.090137 augenrules[1367]: No rules
Oct 8 19:56:58.090210 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 8 19:56:58.092278 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 19:56:58.112805 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 8 19:56:58.118762 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 8 19:56:58.263281 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 8 19:56:58.264954 systemd[1]: Reached target time-set.target - System Time Set.
Oct 8 19:56:58.276039 systemd-resolved[1340]: Positive Trust Anchors:
Oct 8 19:56:58.276056 systemd-resolved[1340]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:56:58.276094 systemd-resolved[1340]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 19:56:58.279746 systemd-resolved[1340]: Defaulting to hostname 'linux'.
Oct 8 19:56:58.281436 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:56:58.282729 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:56:58.332499 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 8 19:56:58.359787 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:56:58.362558 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 8 19:56:58.383784 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 8 19:56:58.387601 systemd-udevd[1378]: Using default interface naming scheme 'v255'.
Oct 8 19:56:58.435359 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:56:58.447707 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:56:58.489495 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1383)
Oct 8 19:56:58.491477 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1384)
Oct 8 19:56:58.494478 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1383)
Oct 8 19:56:58.495070 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 8 19:56:58.513805 systemd-networkd[1387]: lo: Link UP
Oct 8 19:56:58.513836 systemd-networkd[1387]: lo: Gained carrier
Oct 8 19:56:58.516216 systemd-networkd[1387]: Enumeration completed
Oct 8 19:56:58.516879 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:56:58.516885 systemd-networkd[1387]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:56:58.517956 systemd-networkd[1387]: eth0: Link UP
Oct 8 19:56:58.517961 systemd-networkd[1387]: eth0: Gained carrier
Oct 8 19:56:58.517976 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:56:58.542943 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:56:58.555690 systemd[1]: Reached target network.target - Network.
Oct 8 19:56:58.567713 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 8 19:56:58.568810 systemd-networkd[1387]: eth0: DHCPv4 address 10.0.0.67/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 19:56:58.572387 systemd-timesyncd[1357]: Network configuration changed, trying to establish connection.
Oct 8 19:56:59.139123 systemd-resolved[1340]: Clock change detected. Flushing caches.
Oct 8 19:56:59.139184 systemd-timesyncd[1357]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 8 19:56:59.139234 systemd-timesyncd[1357]: Initial clock synchronization to Tue 2024-10-08 19:56:59.139029 UTC.
Oct 8 19:56:59.155112 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Oct 8 19:56:59.160819 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 8 19:56:59.165094 kernel: ACPI: button: Power Button [PWRF]
Oct 8 19:56:59.204673 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Oct 8 19:56:59.205095 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 8 19:56:59.205302 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 8 19:56:59.206746 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 8 19:56:59.208948 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 8 19:56:59.215074 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Oct 8 19:56:59.231034 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 8 19:56:59.259070 kernel: mousedev: PS/2 mouse device common for all mice
Oct 8 19:56:59.260399 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:56:59.271738 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:56:59.272103 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:56:59.284810 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:56:59.363439 kernel: kvm_amd: TSC scaling supported
Oct 8 19:56:59.363546 kernel: kvm_amd: Nested Virtualization enabled
Oct 8 19:56:59.363565 kernel: kvm_amd: Nested Paging enabled
Oct 8 19:56:59.364593 kernel: kvm_amd: LBR virtualization supported
Oct 8 19:56:59.364638 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct 8 19:56:59.366069 kernel: kvm_amd: Virtual GIF supported
Oct 8 19:56:59.388077 kernel: EDAC MC: Ver: 3.0.0
Oct 8 19:56:59.398165 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:56:59.423004 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 8 19:56:59.438433 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 8 19:56:59.448856 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:56:59.490835 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 8 19:56:59.493212 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:56:59.494478 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:56:59.495807 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 8 19:56:59.497167 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 8 19:56:59.498734 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 8 19:56:59.500011 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 8 19:56:59.501343 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 8 19:56:59.502627 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 8 19:56:59.502665 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:56:59.503602 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:56:59.505214 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 8 19:56:59.508399 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 8 19:56:59.521095 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 8 19:56:59.524087 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 8 19:56:59.525971 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 8 19:56:59.527294 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:56:59.528340 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:56:59.529465 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 8 19:56:59.529495 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 8 19:56:59.530553 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 8 19:56:59.533066 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 8 19:56:59.537077 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:56:59.537476 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 8 19:56:59.542909 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 8 19:56:59.544077 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 8 19:56:59.548251 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 8 19:56:59.550416 jq[1434]: false
Oct 8 19:56:59.553201 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 8 19:56:59.559286 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 8 19:56:59.562764 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 8 19:56:59.572067 extend-filesystems[1435]: Found loop3
Oct 8 19:56:59.572067 extend-filesystems[1435]: Found loop4
Oct 8 19:56:59.572067 extend-filesystems[1435]: Found loop5
Oct 8 19:56:59.572067 extend-filesystems[1435]: Found sr0
Oct 8 19:56:59.572067 extend-filesystems[1435]: Found vda
Oct 8 19:56:59.572067 extend-filesystems[1435]: Found vda1
Oct 8 19:56:59.572067 extend-filesystems[1435]: Found vda2
Oct 8 19:56:59.572067 extend-filesystems[1435]: Found vda3
Oct 8 19:56:59.572067 extend-filesystems[1435]: Found usr
Oct 8 19:56:59.572067 extend-filesystems[1435]: Found vda4
Oct 8 19:56:59.572067 extend-filesystems[1435]: Found vda6
Oct 8 19:56:59.572067 extend-filesystems[1435]: Found vda7
Oct 8 19:56:59.572067 extend-filesystems[1435]: Found vda9
Oct 8 19:56:59.572067 extend-filesystems[1435]: Checking size of /dev/vda9
Oct 8 19:56:59.575723 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 8 19:56:59.587243 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 8 19:56:59.587933 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 8 19:56:59.590214 systemd[1]: Starting update-engine.service - Update Engine...
Oct 8 19:56:59.595193 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 8 19:56:59.597777 dbus-daemon[1433]: [system] SELinux support is enabled
Oct 8 19:56:59.598516 extend-filesystems[1435]: Resized partition /dev/vda9
Oct 8 19:56:59.600628 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 8 19:56:59.602640 extend-filesystems[1456]: resize2fs 1.47.1 (20-May-2024)
Oct 8 19:56:59.607754 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 8 19:56:59.612071 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1403)
Oct 8 19:56:59.620950 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Oct 8 19:56:59.621589 jq[1455]: true
Oct 8 19:56:59.621553 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 8 19:56:59.622513 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 8 19:56:59.623165 systemd[1]: motdgen.service: Deactivated successfully.
Oct 8 19:56:59.623473 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 8 19:56:59.628805 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 8 19:56:59.629687 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 8 19:56:59.637199 update_engine[1451]: I20241008 19:56:59.637100 1451 main.cc:92] Flatcar Update Engine starting
Oct 8 19:56:59.639984 update_engine[1451]: I20241008 19:56:59.639935 1451 update_check_scheduler.cc:74] Next update check in 11m12s
Oct 8 19:56:59.646675 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 8 19:56:59.652928 jq[1459]: true
Oct 8 19:56:59.674535 systemd-logind[1446]: Watching system buttons on /dev/input/event1 (Power Button)
Oct 8 19:56:59.674884 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 8 19:56:59.675373 systemd-logind[1446]: New seat seat0.
Oct 8 19:56:59.679960 tar[1458]: linux-amd64/helm
Oct 8 19:56:59.685472 systemd[1]: Started update-engine.service - Update Engine.
Oct 8 19:56:59.694304 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 8 19:56:59.700511 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 8 19:56:59.700669 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 8 19:56:59.702337 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 8 19:56:59.702483 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 8 19:56:59.711372 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 8 19:56:59.792561 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Oct 8 19:56:59.874607 locksmithd[1486]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 8 19:57:00.061115 sshd_keygen[1454]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 8 19:57:00.062250 extend-filesystems[1456]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 8 19:57:00.062250 extend-filesystems[1456]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 8 19:57:00.062250 extend-filesystems[1456]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Oct 8 19:57:00.069582 extend-filesystems[1435]: Resized filesystem in /dev/vda9
Oct 8 19:57:00.062858 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 8 19:57:00.063252 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 8 19:57:00.074278 bash[1487]: Updated "/home/core/.ssh/authorized_keys"
Oct 8 19:57:00.076184 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 8 19:57:00.078289 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 8 19:57:00.091886 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 8 19:57:00.131543 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 8 19:57:00.143240 systemd[1]: issuegen.service: Deactivated successfully.
Oct 8 19:57:00.143499 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 8 19:57:00.162380 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 8 19:57:00.210413 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 8 19:57:00.218469 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 8 19:57:00.221952 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 8 19:57:00.225022 systemd[1]: Reached target getty.target - Login Prompts.
Oct 8 19:57:00.307102 containerd[1460]: time="2024-10-08T19:57:00.306921447Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Oct 8 19:57:00.332232 containerd[1460]: time="2024-10-08T19:57:00.332178173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:57:00.334042 containerd[1460]: time="2024-10-08T19:57:00.334007583Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:57:00.334042 containerd[1460]: time="2024-10-08T19:57:00.334034574Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 8 19:57:00.334126 containerd[1460]: time="2024-10-08T19:57:00.334064781Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 8 19:57:00.334296 containerd[1460]: time="2024-10-08T19:57:00.334272310Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 8 19:57:00.334296 containerd[1460]: time="2024-10-08T19:57:00.334294441Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 8 19:57:00.334403 containerd[1460]: time="2024-10-08T19:57:00.334381334Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:57:00.334443 containerd[1460]: time="2024-10-08T19:57:00.334402544Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:57:00.334654 containerd[1460]: time="2024-10-08T19:57:00.334628808Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:57:00.334654 containerd[1460]: time="2024-10-08T19:57:00.334646391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 8 19:57:00.334721 containerd[1460]: time="2024-10-08T19:57:00.334658795Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:57:00.334721 containerd[1460]: time="2024-10-08T19:57:00.334667992Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 8 19:57:00.334815 containerd[1460]: time="2024-10-08T19:57:00.334791894Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:57:00.335137 containerd[1460]: time="2024-10-08T19:57:00.335117675Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:57:00.335262 containerd[1460]: time="2024-10-08T19:57:00.335244233Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:57:00.335296 containerd[1460]: time="2024-10-08T19:57:00.335259952Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 8 19:57:00.335391 containerd[1460]: time="2024-10-08T19:57:00.335377332Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 8 19:57:00.335463 containerd[1460]: time="2024-10-08T19:57:00.335448025Z" level=info msg="metadata content store policy set" policy=shared
Oct 8 19:57:00.378452 containerd[1460]: time="2024-10-08T19:57:00.378371912Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 8 19:57:00.378565 containerd[1460]: time="2024-10-08T19:57:00.378490645Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 8 19:57:00.378565 containerd[1460]: time="2024-10-08T19:57:00.378513788Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 8 19:57:00.378565 containerd[1460]: time="2024-10-08T19:57:00.378535519Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 8 19:57:00.378643 containerd[1460]: time="2024-10-08T19:57:00.378562049Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 8 19:57:00.378874 containerd[1460]: time="2024-10-08T19:57:00.378847194Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 8 19:57:00.380322 containerd[1460]: time="2024-10-08T19:57:00.380264512Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 8 19:57:00.380545 containerd[1460]: time="2024-10-08T19:57:00.380526683Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 8 19:57:00.380568 containerd[1460]: time="2024-10-08T19:57:00.380550869Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 8 19:57:00.380587 containerd[1460]: time="2024-10-08T19:57:00.380571828Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 8 19:57:00.380605 containerd[1460]: time="2024-10-08T19:57:00.380590112Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 8 19:57:00.380624 containerd[1460]: time="2024-10-08T19:57:00.380606583Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 8 19:57:00.380652 containerd[1460]: time="2024-10-08T19:57:00.380622002Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 8 19:57:00.380652 containerd[1460]: time="2024-10-08T19:57:00.380638914Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 8 19:57:00.380867 containerd[1460]: time="2024-10-08T19:57:00.380672096Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 8 19:57:00.380867 containerd[1460]: time="2024-10-08T19:57:00.380693316Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 8 19:57:00.380867 containerd[1460]: time="2024-10-08T19:57:00.380709697Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 8 19:57:00.380867 containerd[1460]: time="2024-10-08T19:57:00.380728221Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 8 19:57:00.380867 containerd[1460]: time="2024-10-08T19:57:00.380755943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 8 19:57:00.380867 containerd[1460]: time="2024-10-08T19:57:00.380772334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 8 19:57:00.380867 containerd[1460]: time="2024-10-08T19:57:00.380787963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 8 19:57:00.380867 containerd[1460]: time="2024-10-08T19:57:00.380803543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 8 19:57:00.380867 containerd[1460]: time="2024-10-08T19:57:00.380820955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 8 19:57:00.380867 containerd[1460]: time="2024-10-08T19:57:00.380837226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 8 19:57:00.380867 containerd[1460]: time="2024-10-08T19:57:00.380856291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 8 19:57:00.381070 containerd[1460]: time="2024-10-08T19:57:00.380873694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 8 19:57:00.381070 containerd[1460]: time="2024-10-08T19:57:00.380890055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 8 19:57:00.381070 containerd[1460]: time="2024-10-08T19:57:00.380908299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 8 19:57:00.381070 containerd[1460]: time="2024-10-08T19:57:00.380923688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 8 19:57:00.381070 containerd[1460]: time="2024-10-08T19:57:00.380939077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 8 19:57:00.381070 containerd[1460]: time="2024-10-08T19:57:00.380954005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 8 19:57:00.381070 containerd[1460]: time="2024-10-08T19:57:00.380975715Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 8 19:57:00.381070 containerd[1460]: time="2024-10-08T19:57:00.381000802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 8 19:57:00.381070 containerd[1460]: time="2024-10-08T19:57:00.381014899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 8 19:57:00.381070 containerd[1460]: time="2024-10-08T19:57:00.381028404Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 8 19:57:00.381236 containerd[1460]: time="2024-10-08T19:57:00.381095440Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 8 19:57:00.381236 containerd[1460]: time="2024-10-08T19:57:00.381116590Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 8 19:57:00.381236 containerd[1460]: time="2024-10-08T19:57:00.381130055Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 8 19:57:00.381236 containerd[1460]: time="2024-10-08T19:57:00.381143751Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Oct 8 19:57:00.381236 containerd[1460]: time="2024-10-08T19:57:00.381156394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 8 19:57:00.381236 containerd[1460]: time="2024-10-08T19:57:00.381171022Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 8 19:57:00.381236 containerd[1460]: time="2024-10-08T19:57:00.381186140Z" level=info msg="NRI interface is disabled by configuration."
Oct 8 19:57:00.381236 containerd[1460]: time="2024-10-08T19:57:00.381199746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 8 19:57:00.381607 containerd[1460]: time="2024-10-08T19:57:00.381539192Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 8 19:57:00.381607 containerd[1460]: time="2024-10-08T19:57:00.381615165Z" level=info msg="Connect containerd service"
Oct 8 19:57:00.381842 containerd[1460]: time="2024-10-08T19:57:00.381651733Z" level=info msg="using legacy CRI server"
Oct 8 19:57:00.381842 containerd[1460]: time="2024-10-08T19:57:00.381659488Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 8 19:57:00.381842 containerd[1460]: time="2024-10-08T19:57:00.381801404Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 8 19:57:00.382555 containerd[1460]: time="2024-10-08T19:57:00.382531283Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 8 19:57:00.382871 containerd[1460]: time="2024-10-08T19:57:00.382817710Z" level=info msg="Start subscribing containerd event"
Oct 8 19:57:00.382908 containerd[1460]: time="2024-10-08T19:57:00.382889064Z" level=info msg="Start recovering state"
Oct 8 19:57:00.382990 containerd[1460]: time="2024-10-08T19:57:00.382968332Z" level=info msg="Start event monitor"
Oct 8 19:57:00.383015 containerd[1460]: time="2024-10-08T19:57:00.383002627Z" level=info msg="Start snapshots
syncer" Oct 8 19:57:00.383034 containerd[1460]: time="2024-10-08T19:57:00.383018727Z" level=info msg="Start cni network conf syncer for default" Oct 8 19:57:00.383034 containerd[1460]: time="2024-10-08T19:57:00.383028084Z" level=info msg="Start streaming server" Oct 8 19:57:00.383302 containerd[1460]: time="2024-10-08T19:57:00.383277572Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 8 19:57:00.383379 containerd[1460]: time="2024-10-08T19:57:00.383359586Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 8 19:57:00.384447 containerd[1460]: time="2024-10-08T19:57:00.384202817Z" level=info msg="containerd successfully booted in 0.103881s" Oct 8 19:57:00.384352 systemd[1]: Started containerd.service - containerd container runtime. Oct 8 19:57:00.440275 systemd-networkd[1387]: eth0: Gained IPv6LL Oct 8 19:57:00.443982 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 8 19:57:00.446010 systemd[1]: Reached target network-online.target - Network is Online. Oct 8 19:57:00.454348 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 8 19:57:00.458676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:57:00.464329 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 8 19:57:00.486650 tar[1458]: linux-amd64/LICENSE Oct 8 19:57:00.486750 tar[1458]: linux-amd64/README.md Oct 8 19:57:00.492175 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 8 19:57:00.492458 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 8 19:57:00.495390 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 8 19:57:00.506412 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 8 19:57:00.513436 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Oct 8 19:57:01.567117 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:57:01.568966 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 8 19:57:01.570760 systemd[1]: Startup finished in 828ms (kernel) + 6.741s (initrd) + 5.585s (userspace) = 13.155s. Oct 8 19:57:01.573108 (kubelet)[1545]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:57:02.338852 kubelet[1545]: E1008 19:57:02.338745 1545 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:57:02.342923 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:57:02.343160 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:57:02.343478 systemd[1]: kubelet.service: Consumed 1.715s CPU time. Oct 8 19:57:03.886940 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 8 19:57:03.897537 systemd[1]: Started sshd@0-10.0.0.67:22-10.0.0.1:41980.service - OpenSSH per-connection server daemon (10.0.0.1:41980). Oct 8 19:57:03.948124 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 41980 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:57:03.950728 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:57:03.960509 systemd-logind[1446]: New session 1 of user core. Oct 8 19:57:03.961792 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 8 19:57:03.969398 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 8 19:57:03.984667 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Oct 8 19:57:04.001577 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 8 19:57:04.004920 (systemd)[1563]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:57:04.132531 systemd[1563]: Queued start job for default target default.target. Oct 8 19:57:04.142863 systemd[1563]: Created slice app.slice - User Application Slice. Oct 8 19:57:04.142895 systemd[1563]: Reached target paths.target - Paths. Oct 8 19:57:04.142914 systemd[1563]: Reached target timers.target - Timers. Oct 8 19:57:04.145017 systemd[1563]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 8 19:57:04.157886 systemd[1563]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 8 19:57:04.158095 systemd[1563]: Reached target sockets.target - Sockets. Oct 8 19:57:04.158121 systemd[1563]: Reached target basic.target - Basic System. Oct 8 19:57:04.158178 systemd[1563]: Reached target default.target - Main User Target. Oct 8 19:57:04.158225 systemd[1563]: Startup finished in 145ms. Oct 8 19:57:04.158949 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 8 19:57:04.174343 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 8 19:57:04.236071 systemd[1]: Started sshd@1-10.0.0.67:22-10.0.0.1:41986.service - OpenSSH per-connection server daemon (10.0.0.1:41986). Oct 8 19:57:04.280290 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 41986 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:57:04.282423 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:57:04.287568 systemd-logind[1446]: New session 2 of user core. Oct 8 19:57:04.297285 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 8 19:57:04.354384 sshd[1574]: pam_unix(sshd:session): session closed for user core Oct 8 19:57:04.362581 systemd[1]: sshd@1-10.0.0.67:22-10.0.0.1:41986.service: Deactivated successfully. 
Oct 8 19:57:04.364950 systemd[1]: session-2.scope: Deactivated successfully. Oct 8 19:57:04.367085 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit. Oct 8 19:57:04.379380 systemd[1]: Started sshd@2-10.0.0.67:22-10.0.0.1:41994.service - OpenSSH per-connection server daemon (10.0.0.1:41994). Oct 8 19:57:04.380509 systemd-logind[1446]: Removed session 2. Oct 8 19:57:04.414266 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 41994 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:57:04.415960 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:57:04.420988 systemd-logind[1446]: New session 3 of user core. Oct 8 19:57:04.434452 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 8 19:57:04.485780 sshd[1581]: pam_unix(sshd:session): session closed for user core Oct 8 19:57:04.493674 systemd[1]: sshd@2-10.0.0.67:22-10.0.0.1:41994.service: Deactivated successfully. Oct 8 19:57:04.495202 systemd[1]: session-3.scope: Deactivated successfully. Oct 8 19:57:04.496759 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit. Oct 8 19:57:04.497935 systemd[1]: Started sshd@3-10.0.0.67:22-10.0.0.1:42006.service - OpenSSH per-connection server daemon (10.0.0.1:42006). Oct 8 19:57:04.498684 systemd-logind[1446]: Removed session 3. Oct 8 19:57:04.534676 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 42006 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:57:04.536091 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:57:04.540206 systemd-logind[1446]: New session 4 of user core. Oct 8 19:57:04.550194 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 8 19:57:04.604381 sshd[1588]: pam_unix(sshd:session): session closed for user core Oct 8 19:57:04.614247 systemd[1]: sshd@3-10.0.0.67:22-10.0.0.1:42006.service: Deactivated successfully. 
Oct 8 19:57:04.616299 systemd[1]: session-4.scope: Deactivated successfully. Oct 8 19:57:04.617900 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit. Oct 8 19:57:04.628331 systemd[1]: Started sshd@4-10.0.0.67:22-10.0.0.1:42012.service - OpenSSH per-connection server daemon (10.0.0.1:42012). Oct 8 19:57:04.629566 systemd-logind[1446]: Removed session 4. Oct 8 19:57:04.662960 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 42012 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:57:04.664665 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:57:04.669491 systemd-logind[1446]: New session 5 of user core. Oct 8 19:57:04.679277 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 8 19:57:04.739132 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 8 19:57:04.739580 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:57:04.759775 sudo[1598]: pam_unix(sudo:session): session closed for user root Oct 8 19:57:04.761970 sshd[1595]: pam_unix(sshd:session): session closed for user core Oct 8 19:57:04.777260 systemd[1]: sshd@4-10.0.0.67:22-10.0.0.1:42012.service: Deactivated successfully. Oct 8 19:57:04.779784 systemd[1]: session-5.scope: Deactivated successfully. Oct 8 19:57:04.781960 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit. Oct 8 19:57:04.794440 systemd[1]: Started sshd@5-10.0.0.67:22-10.0.0.1:42022.service - OpenSSH per-connection server daemon (10.0.0.1:42022). Oct 8 19:57:04.795643 systemd-logind[1446]: Removed session 5. Oct 8 19:57:04.830606 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 42022 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:57:04.832693 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:57:04.836954 systemd-logind[1446]: New session 6 of user core. 
Oct 8 19:57:04.847329 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 8 19:57:04.903680 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 8 19:57:04.904021 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:57:04.908289 sudo[1607]: pam_unix(sudo:session): session closed for user root Oct 8 19:57:04.915340 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 8 19:57:04.915758 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:57:04.940292 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 8 19:57:04.942038 auditctl[1610]: No rules Oct 8 19:57:04.943275 systemd[1]: audit-rules.service: Deactivated successfully. Oct 8 19:57:04.943530 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 8 19:57:04.945345 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 19:57:04.976678 augenrules[1628]: No rules Oct 8 19:57:04.978595 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 19:57:04.979895 sudo[1606]: pam_unix(sudo:session): session closed for user root Oct 8 19:57:04.981722 sshd[1603]: pam_unix(sshd:session): session closed for user core Oct 8 19:57:04.993811 systemd[1]: sshd@5-10.0.0.67:22-10.0.0.1:42022.service: Deactivated successfully. Oct 8 19:57:04.995323 systemd[1]: session-6.scope: Deactivated successfully. Oct 8 19:57:04.995949 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit. Oct 8 19:57:05.004422 systemd[1]: Started sshd@6-10.0.0.67:22-10.0.0.1:42032.service - OpenSSH per-connection server daemon (10.0.0.1:42032). Oct 8 19:57:05.005259 systemd-logind[1446]: Removed session 6. 
Oct 8 19:57:05.040012 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 42032 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:57:05.041571 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:57:05.045417 systemd-logind[1446]: New session 7 of user core. Oct 8 19:57:05.055306 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 8 19:57:05.109215 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 8 19:57:05.109561 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:57:05.710325 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 8 19:57:05.710497 (dockerd)[1657]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 8 19:57:07.265378 dockerd[1657]: time="2024-10-08T19:57:07.265314278Z" level=info msg="Starting up" Oct 8 19:57:07.846232 dockerd[1657]: time="2024-10-08T19:57:07.846159349Z" level=info msg="Loading containers: start." Oct 8 19:57:08.579076 kernel: Initializing XFRM netlink socket Oct 8 19:57:08.678509 systemd-networkd[1387]: docker0: Link UP Oct 8 19:57:08.935821 dockerd[1657]: time="2024-10-08T19:57:08.935696220Z" level=info msg="Loading containers: done." Oct 8 19:57:08.956168 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck586571108-merged.mount: Deactivated successfully. 
Oct 8 19:57:09.204666 dockerd[1657]: time="2024-10-08T19:57:09.204539357Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 8 19:57:09.204785 dockerd[1657]: time="2024-10-08T19:57:09.204695339Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Oct 8 19:57:09.204869 dockerd[1657]: time="2024-10-08T19:57:09.204845060Z" level=info msg="Daemon has completed initialization" Oct 8 19:57:09.430475 dockerd[1657]: time="2024-10-08T19:57:09.430149657Z" level=info msg="API listen on /run/docker.sock" Oct 8 19:57:09.430717 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 8 19:57:10.516080 containerd[1460]: time="2024-10-08T19:57:10.516010784Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\"" Oct 8 19:57:12.493072 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 8 19:57:12.499430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:57:12.504492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1468024584.mount: Deactivated successfully. Oct 8 19:57:12.766878 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 19:57:12.772379 (kubelet)[1835]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:57:12.932790 kubelet[1835]: E1008 19:57:12.932653 1835 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:57:12.941267 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:57:12.941595 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:57:14.100157 containerd[1460]: time="2024-10-08T19:57:14.100071788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:57:14.101349 containerd[1460]: time="2024-10-08T19:57:14.101067154Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.5: active requests=0, bytes read=32754097" Oct 8 19:57:14.102389 containerd[1460]: time="2024-10-08T19:57:14.102314243Z" level=info msg="ImageCreate event name:\"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:57:14.106025 containerd[1460]: time="2024-10-08T19:57:14.105987361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:57:14.107499 containerd[1460]: time="2024-10-08T19:57:14.107451247Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.5\" with image id \"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.5\", repo digest 
\"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\", size \"32750897\" in 3.591363749s" Oct 8 19:57:14.107540 containerd[1460]: time="2024-10-08T19:57:14.107505058Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\" returns image reference \"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\"" Oct 8 19:57:14.131606 containerd[1460]: time="2024-10-08T19:57:14.131561592Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\"" Oct 8 19:57:16.563166 containerd[1460]: time="2024-10-08T19:57:16.563024914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:57:16.575771 containerd[1460]: time="2024-10-08T19:57:16.575661823Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.5: active requests=0, bytes read=29591652" Oct 8 19:57:16.579804 containerd[1460]: time="2024-10-08T19:57:16.579756321Z" level=info msg="ImageCreate event name:\"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:57:16.588455 containerd[1460]: time="2024-10-08T19:57:16.588278754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:57:16.589158 containerd[1460]: time="2024-10-08T19:57:16.589069477Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.5\" with image id \"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\", size \"31122208\" in 
2.457440478s" Oct 8 19:57:16.589158 containerd[1460]: time="2024-10-08T19:57:16.589136432Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\" returns image reference \"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\"" Oct 8 19:57:16.618149 containerd[1460]: time="2024-10-08T19:57:16.618085192Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\"" Oct 8 19:57:18.762852 containerd[1460]: time="2024-10-08T19:57:18.762764351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:57:18.765576 containerd[1460]: time="2024-10-08T19:57:18.765515620Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.5: active requests=0, bytes read=17779987" Oct 8 19:57:18.767624 containerd[1460]: time="2024-10-08T19:57:18.767583959Z" level=info msg="ImageCreate event name:\"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:57:18.771765 containerd[1460]: time="2024-10-08T19:57:18.771705889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:57:18.773025 containerd[1460]: time="2024-10-08T19:57:18.772984096Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.5\" with image id \"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\", size \"19310561\" in 2.154850514s" Oct 8 19:57:18.773092 containerd[1460]: time="2024-10-08T19:57:18.773027347Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\" returns image reference 
\"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\"" Oct 8 19:57:18.809242 containerd[1460]: time="2024-10-08T19:57:18.809199573Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\"" Oct 8 19:57:21.787528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3465444954.mount: Deactivated successfully. Oct 8 19:57:23.056439 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 8 19:57:23.066396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:57:23.239491 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:57:23.247364 (kubelet)[1926]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:57:23.542546 kubelet[1926]: E1008 19:57:23.542389 1926 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:57:23.546861 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:57:23.547085 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 8 19:57:24.020800 containerd[1460]: time="2024-10-08T19:57:24.020693294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:57:24.026941 containerd[1460]: time="2024-10-08T19:57:24.026851313Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.5: active requests=0, bytes read=29039362" Oct 8 19:57:24.028722 containerd[1460]: time="2024-10-08T19:57:24.028680463Z" level=info msg="ImageCreate event name:\"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:57:24.034743 containerd[1460]: time="2024-10-08T19:57:24.032195645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:57:24.034743 containerd[1460]: time="2024-10-08T19:57:24.033598276Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.5\" with image id \"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\", repo tag \"registry.k8s.io/kube-proxy:v1.30.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\", size \"29038381\" in 5.224352306s" Oct 8 19:57:24.034743 containerd[1460]: time="2024-10-08T19:57:24.033630706Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\" returns image reference \"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\"" Oct 8 19:57:24.062449 containerd[1460]: time="2024-10-08T19:57:24.062399268Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 8 19:57:24.947029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3749902315.mount: Deactivated successfully. 
Oct 8 19:57:26.323773 containerd[1460]: time="2024-10-08T19:57:26.323668735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:57:26.405847 containerd[1460]: time="2024-10-08T19:57:26.405714781Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Oct 8 19:57:26.416777 containerd[1460]: time="2024-10-08T19:57:26.416619661Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:57:26.422986 containerd[1460]: time="2024-10-08T19:57:26.422893387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:57:26.424222 containerd[1460]: time="2024-10-08T19:57:26.424144673Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.361692507s" Oct 8 19:57:26.424222 containerd[1460]: time="2024-10-08T19:57:26.424212461Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Oct 8 19:57:26.454613 containerd[1460]: time="2024-10-08T19:57:26.454567637Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 8 19:57:27.159112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2946222544.mount: Deactivated successfully. 
Oct 8 19:57:27.166353 containerd[1460]: time="2024-10-08T19:57:27.166270574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:57:27.167515 containerd[1460]: time="2024-10-08T19:57:27.167460887Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Oct 8 19:57:27.169098 containerd[1460]: time="2024-10-08T19:57:27.169058012Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:57:27.171961 containerd[1460]: time="2024-10-08T19:57:27.171873782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:57:27.172939 containerd[1460]: time="2024-10-08T19:57:27.172885530Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 718.272608ms" Oct 8 19:57:27.172939 containerd[1460]: time="2024-10-08T19:57:27.172934502Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Oct 8 19:57:27.199702 containerd[1460]: time="2024-10-08T19:57:27.199642549Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Oct 8 19:57:28.211706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3561534964.mount: Deactivated successfully. Oct 8 19:57:33.556463 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Oct 8 19:57:33.614255 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:57:33.877695 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:57:33.883212 (kubelet)[2057]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:57:35.074368 kubelet[2057]: E1008 19:57:35.074239 2057 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:57:35.078412 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:57:35.078651 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:57:35.988185 containerd[1460]: time="2024-10-08T19:57:35.988102070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:57:35.992514 containerd[1460]: time="2024-10-08T19:57:35.992418369Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Oct 8 19:57:35.994189 containerd[1460]: time="2024-10-08T19:57:35.994149959Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:57:35.997740 containerd[1460]: time="2024-10-08T19:57:35.997687073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:57:35.999175 containerd[1460]: time="2024-10-08T19:57:35.999132364Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" 
with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 8.799437247s" Oct 8 19:57:35.999239 containerd[1460]: time="2024-10-08T19:57:35.999176780Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Oct 8 19:57:38.762624 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:57:38.775282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:57:38.794731 systemd[1]: Reloading requested from client PID 2150 ('systemctl') (unit session-7.scope)... Oct 8 19:57:38.794747 systemd[1]: Reloading... Oct 8 19:57:38.893170 zram_generator::config[2190]: No configuration found. Oct 8 19:57:39.384872 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:57:39.466853 systemd[1]: Reloading finished in 671 ms. Oct 8 19:57:39.528028 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 8 19:57:39.528162 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 8 19:57:39.528456 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:57:39.531160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:57:39.691204 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 19:57:39.697400 (kubelet)[2238]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 19:57:39.739167 kubelet[2238]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:57:39.739167 kubelet[2238]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 19:57:39.739167 kubelet[2238]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:57:39.739584 kubelet[2238]: I1008 19:57:39.739284 2238 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 19:57:40.101983 kubelet[2238]: I1008 19:57:40.101932 2238 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Oct 8 19:57:40.101983 kubelet[2238]: I1008 19:57:40.101964 2238 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 19:57:40.102214 kubelet[2238]: I1008 19:57:40.102185 2238 server.go:927] "Client rotation is on, will bootstrap in background" Oct 8 19:57:40.116834 kubelet[2238]: I1008 19:57:40.116688 2238 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:57:40.117605 kubelet[2238]: E1008 19:57:40.117574 2238 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://10.0.0.67:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.67:6443: connect: connection refused Oct 8 19:57:40.133139 kubelet[2238]: I1008 19:57:40.133088 2238 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 8 19:57:40.134106 kubelet[2238]: I1008 19:57:40.134062 2238 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 19:57:40.134354 kubelet[2238]: I1008 19:57:40.134097 2238 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,
"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 8 19:57:40.134512 kubelet[2238]: I1008 19:57:40.134375 2238 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 19:57:40.134512 kubelet[2238]: I1008 19:57:40.134392 2238 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 19:57:40.134584 kubelet[2238]: I1008 19:57:40.134565 2238 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:57:40.135589 kubelet[2238]: I1008 19:57:40.135555 2238 kubelet.go:400] "Attempting to sync node with API server" Oct 8 19:57:40.135589 kubelet[2238]: I1008 19:57:40.135580 2238 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 19:57:40.135698 kubelet[2238]: I1008 19:57:40.135621 2238 kubelet.go:312] "Adding apiserver pod source" Oct 8 19:57:40.135698 kubelet[2238]: I1008 19:57:40.135669 2238 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 19:57:40.138707 kubelet[2238]: W1008 19:57:40.138648 2238 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 8 19:57:40.138707 kubelet[2238]: W1008 19:57:40.138669 2238 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.67:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 8 19:57:40.138790 kubelet[2238]: E1008 19:57:40.138729 2238 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.67:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 8 19:57:40.138790 kubelet[2238]: E1008 19:57:40.138730 2238 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 8 19:57:40.140885 kubelet[2238]: I1008 19:57:40.140848 2238 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 19:57:40.142353 kubelet[2238]: I1008 19:57:40.142325 2238 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 19:57:40.142420 kubelet[2238]: W1008 19:57:40.142410 2238 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 8 19:57:40.143330 kubelet[2238]: I1008 19:57:40.143185 2238 server.go:1264] "Started kubelet" Oct 8 19:57:40.143330 kubelet[2238]: I1008 19:57:40.143298 2238 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 19:57:40.143575 kubelet[2238]: I1008 19:57:40.143506 2238 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 19:57:40.144204 kubelet[2238]: I1008 19:57:40.143859 2238 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 19:57:40.145702 kubelet[2238]: I1008 19:57:40.145440 2238 server.go:455] "Adding debug handlers to kubelet server" Oct 8 19:57:40.147151 kubelet[2238]: I1008 19:57:40.146289 2238 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 19:57:40.147405 kubelet[2238]: I1008 19:57:40.147380 2238 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 19:57:40.147562 kubelet[2238]: I1008 19:57:40.147506 2238 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 8 19:57:40.147613 kubelet[2238]: I1008 19:57:40.147568 2238 reconciler.go:26] "Reconciler: start to sync state" Oct 8 
19:57:40.148607 kubelet[2238]: W1008 19:57:40.147900 2238 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 8 19:57:40.148607 kubelet[2238]: E1008 19:57:40.147940 2238 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 8 19:57:40.148607 kubelet[2238]: E1008 19:57:40.148247 2238 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="200ms" Oct 8 19:57:40.148990 kubelet[2238]: I1008 19:57:40.148969 2238 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 19:57:40.149527 kubelet[2238]: E1008 19:57:40.148767 2238 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.67:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.67:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fc9285bf6a6469 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 19:57:40.143154281 +0000 UTC m=+0.441308138,LastTimestamp:2024-10-08 19:57:40.143154281 +0000 UTC m=+0.441308138,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 8 19:57:40.149527 kubelet[2238]: E1008 19:57:40.149424 2238 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 19:57:40.150199 kubelet[2238]: I1008 19:57:40.150176 2238 factory.go:221] Registration of the containerd container factory successfully Oct 8 19:57:40.150199 kubelet[2238]: I1008 19:57:40.150195 2238 factory.go:221] Registration of the systemd container factory successfully Oct 8 19:57:40.165689 kubelet[2238]: I1008 19:57:40.165632 2238 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 19:57:40.165689 kubelet[2238]: I1008 19:57:40.165661 2238 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 19:57:40.165689 kubelet[2238]: I1008 19:57:40.165679 2238 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:57:40.167561 kubelet[2238]: I1008 19:57:40.167494 2238 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 19:57:40.169231 kubelet[2238]: I1008 19:57:40.169199 2238 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 8 19:57:40.169295 kubelet[2238]: I1008 19:57:40.169251 2238 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 19:57:40.169295 kubelet[2238]: I1008 19:57:40.169276 2238 kubelet.go:2337] "Starting kubelet main sync loop" Oct 8 19:57:40.169525 kubelet[2238]: E1008 19:57:40.169331 2238 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 19:57:40.169997 kubelet[2238]: W1008 19:57:40.169945 2238 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 8 19:57:40.170044 kubelet[2238]: E1008 19:57:40.170002 2238 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 8 19:57:40.249094 kubelet[2238]: I1008 19:57:40.249035 2238 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:57:40.249487 kubelet[2238]: E1008 19:57:40.249459 2238 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Oct 8 19:57:40.269608 kubelet[2238]: E1008 19:57:40.269565 2238 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 19:57:40.349472 kubelet[2238]: E1008 19:57:40.349398 2238 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection 
refused" interval="400ms" Oct 8 19:57:40.451002 kubelet[2238]: I1008 19:57:40.450903 2238 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:57:40.451382 kubelet[2238]: E1008 19:57:40.451333 2238 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Oct 8 19:57:40.470623 kubelet[2238]: E1008 19:57:40.470528 2238 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 19:57:40.751032 kubelet[2238]: E1008 19:57:40.750879 2238 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="800ms" Oct 8 19:57:40.853924 kubelet[2238]: I1008 19:57:40.853874 2238 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:57:40.854384 kubelet[2238]: E1008 19:57:40.854339 2238 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Oct 8 19:57:40.871495 kubelet[2238]: E1008 19:57:40.871451 2238 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 19:57:41.026945 kubelet[2238]: I1008 19:57:41.026778 2238 policy_none.go:49] "None policy: Start" Oct 8 19:57:41.027980 kubelet[2238]: I1008 19:57:41.027937 2238 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 19:57:41.027980 kubelet[2238]: I1008 19:57:41.027988 2238 state_mem.go:35] "Initializing new in-memory state store" Oct 8 19:57:41.042511 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Oct 8 19:57:41.064537 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 8 19:57:41.067992 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 8 19:57:41.082467 kubelet[2238]: I1008 19:57:41.082283 2238 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:57:41.082785 kubelet[2238]: I1008 19:57:41.082581 2238 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 19:57:41.082785 kubelet[2238]: I1008 19:57:41.082763 2238 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:57:41.084373 kubelet[2238]: E1008 19:57:41.084324 2238 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 8 19:57:41.405320 kubelet[2238]: W1008 19:57:41.405248 2238 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 8 19:57:41.405320 kubelet[2238]: E1008 19:57:41.405327 2238 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 8 19:57:41.460418 kubelet[2238]: W1008 19:57:41.460361 2238 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 8 19:57:41.460418 kubelet[2238]: E1008 19:57:41.460415 2238 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 8 19:57:41.552343 kubelet[2238]: E1008 19:57:41.552278 2238 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="1.6s" Oct 8 19:57:41.595845 kubelet[2238]: W1008 19:57:41.595784 2238 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.67:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 8 19:57:41.595845 kubelet[2238]: E1008 19:57:41.595849 2238 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.67:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 8 19:57:41.645936 kubelet[2238]: W1008 19:57:41.645850 2238 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 8 19:57:41.645936 kubelet[2238]: E1008 19:57:41.645934 2238 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 8 19:57:41.656361 kubelet[2238]: I1008 19:57:41.656244 2238 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:57:41.656636 kubelet[2238]: E1008 19:57:41.656598 2238 kubelet_node_status.go:96] 
"Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Oct 8 19:57:41.672012 kubelet[2238]: I1008 19:57:41.671950 2238 topology_manager.go:215] "Topology Admit Handler" podUID="2fcea4df269cc1e6513f9e3e768ded5a" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 8 19:57:41.673338 kubelet[2238]: I1008 19:57:41.673319 2238 topology_manager.go:215] "Topology Admit Handler" podUID="6c8e0f972be671a2ebac6f9e8395098d" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 8 19:57:41.674252 kubelet[2238]: I1008 19:57:41.674231 2238 topology_manager.go:215] "Topology Admit Handler" podUID="e5c757a7a09759fc423ca409747c56ae" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 8 19:57:41.680014 systemd[1]: Created slice kubepods-burstable-pod2fcea4df269cc1e6513f9e3e768ded5a.slice - libcontainer container kubepods-burstable-pod2fcea4df269cc1e6513f9e3e768ded5a.slice. Oct 8 19:57:41.691872 systemd[1]: Created slice kubepods-burstable-pod6c8e0f972be671a2ebac6f9e8395098d.slice - libcontainer container kubepods-burstable-pod6c8e0f972be671a2ebac6f9e8395098d.slice. Oct 8 19:57:41.701898 systemd[1]: Created slice kubepods-burstable-pode5c757a7a09759fc423ca409747c56ae.slice - libcontainer container kubepods-burstable-pode5c757a7a09759fc423ca409747c56ae.slice. 
Oct 8 19:57:41.755994 kubelet[2238]: I1008 19:57:41.755904 2238 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:57:41.755994 kubelet[2238]: I1008 19:57:41.755964 2238 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fcea4df269cc1e6513f9e3e768ded5a-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2fcea4df269cc1e6513f9e3e768ded5a\") " pod="kube-system/kube-scheduler-localhost" Oct 8 19:57:41.755994 kubelet[2238]: I1008 19:57:41.755990 2238 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:57:41.756519 kubelet[2238]: I1008 19:57:41.756038 2238 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:57:41.756519 kubelet[2238]: I1008 19:57:41.756090 2238 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " 
pod="kube-system/kube-controller-manager-localhost" Oct 8 19:57:41.756519 kubelet[2238]: I1008 19:57:41.756115 2238 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:57:41.756519 kubelet[2238]: I1008 19:57:41.756135 2238 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c8e0f972be671a2ebac6f9e8395098d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c8e0f972be671a2ebac6f9e8395098d\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:57:41.756519 kubelet[2238]: I1008 19:57:41.756155 2238 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6c8e0f972be671a2ebac6f9e8395098d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c8e0f972be671a2ebac6f9e8395098d\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:57:41.756634 kubelet[2238]: I1008 19:57:41.756180 2238 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6c8e0f972be671a2ebac6f9e8395098d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6c8e0f972be671a2ebac6f9e8395098d\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:57:41.989523 kubelet[2238]: E1008 19:57:41.989370 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:41.990207 containerd[1460]: time="2024-10-08T19:57:41.990139005Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2fcea4df269cc1e6513f9e3e768ded5a,Namespace:kube-system,Attempt:0,}" Oct 8 19:57:42.000477 kubelet[2238]: E1008 19:57:42.000443 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:42.001004 containerd[1460]: time="2024-10-08T19:57:42.000960881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6c8e0f972be671a2ebac6f9e8395098d,Namespace:kube-system,Attempt:0,}" Oct 8 19:57:42.004197 kubelet[2238]: E1008 19:57:42.004165 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:42.004747 containerd[1460]: time="2024-10-08T19:57:42.004702280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e5c757a7a09759fc423ca409747c56ae,Namespace:kube-system,Attempt:0,}" Oct 8 19:57:42.136749 kubelet[2238]: E1008 19:57:42.136694 2238 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.67:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.67:6443: connect: connection refused Oct 8 19:57:43.153783 kubelet[2238]: E1008 19:57:43.153720 2238 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="3.2s" Oct 8 19:57:43.258560 kubelet[2238]: I1008 19:57:43.258517 2238 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:57:43.258987 kubelet[2238]: E1008 19:57:43.258938 2238 
kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Oct 8 19:57:43.384109 kubelet[2238]: W1008 19:57:43.384015 2238 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.67:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 8 19:57:43.384109 kubelet[2238]: E1008 19:57:43.384116 2238 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.67:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 8 19:57:44.062013 kubelet[2238]: W1008 19:57:44.061945 2238 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 8 19:57:44.062013 kubelet[2238]: E1008 19:57:44.062002 2238 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 8 19:57:44.207425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount333865033.mount: Deactivated successfully. Oct 8 19:57:44.416357 update_engine[1451]: I20241008 19:57:44.416110 1451 update_attempter.cc:509] Updating boot flags... 
Oct 8 19:57:44.427323 kubelet[2238]: W1008 19:57:44.427267 2238 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 8 19:57:44.427718 kubelet[2238]: E1008 19:57:44.427331 2238 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 8 19:57:44.488102 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2283) Oct 8 19:57:44.549121 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2285) Oct 8 19:57:44.565967 kubelet[2238]: W1008 19:57:44.565866 2238 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 8 19:57:44.565967 kubelet[2238]: E1008 19:57:44.565909 2238 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Oct 8 19:57:44.601089 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2285) Oct 8 19:57:44.635884 containerd[1460]: time="2024-10-08T19:57:44.635835164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:57:44.711079 containerd[1460]: 
time="2024-10-08T19:57:44.710908178Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 8 19:57:44.731458 containerd[1460]: time="2024-10-08T19:57:44.731415294Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:57:44.750121 containerd[1460]: time="2024-10-08T19:57:44.750073980Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:57:44.779916 containerd[1460]: time="2024-10-08T19:57:44.779857993Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:57:44.804738 containerd[1460]: time="2024-10-08T19:57:44.804670676Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 8 19:57:44.834813 containerd[1460]: time="2024-10-08T19:57:44.834719573Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Oct 8 19:57:44.877516 containerd[1460]: time="2024-10-08T19:57:44.877444011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:57:44.878386 containerd[1460]: time="2024-10-08T19:57:44.878303744Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.877268991s"
Oct 8 19:57:44.920163 containerd[1460]: time="2024-10-08T19:57:44.920102056Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.929880513s"
Oct 8 19:57:44.949475 containerd[1460]: time="2024-10-08T19:57:44.949394947Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.944592105s"
Oct 8 19:57:45.222128 containerd[1460]: time="2024-10-08T19:57:45.221991074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:57:45.225026 containerd[1460]: time="2024-10-08T19:57:45.224661502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:57:45.225026 containerd[1460]: time="2024-10-08T19:57:45.224686439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:57:45.225026 containerd[1460]: time="2024-10-08T19:57:45.224817227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:57:45.226886 containerd[1460]: time="2024-10-08T19:57:45.226241130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:57:45.226886 containerd[1460]: time="2024-10-08T19:57:45.226863881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:57:45.226989 containerd[1460]: time="2024-10-08T19:57:45.226882266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:57:45.227124 containerd[1460]: time="2024-10-08T19:57:45.226966736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:57:45.235832 containerd[1460]: time="2024-10-08T19:57:45.235711981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:57:45.235832 containerd[1460]: time="2024-10-08T19:57:45.235770874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:57:45.235832 containerd[1460]: time="2024-10-08T19:57:45.235790190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:57:45.236100 containerd[1460]: time="2024-10-08T19:57:45.235866245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:57:45.272151 systemd[1]: run-containerd-runc-k8s.io-bb0b085c31e13e3fcba864c41be525c6dfaa89dfee8254a5908f8f61f9582c62-runc.mw7UKA.mount: Deactivated successfully.
Oct 8 19:57:45.286355 systemd[1]: Started cri-containerd-19bb4d60d5d61c2738618a7782dc03b709163e4510ea288510a5b21af1b7ee8e.scope - libcontainer container 19bb4d60d5d61c2738618a7782dc03b709163e4510ea288510a5b21af1b7ee8e.
Oct 8 19:57:45.288216 systemd[1]: Started cri-containerd-bb0b085c31e13e3fcba864c41be525c6dfaa89dfee8254a5908f8f61f9582c62.scope - libcontainer container bb0b085c31e13e3fcba864c41be525c6dfaa89dfee8254a5908f8f61f9582c62.
Oct 8 19:57:45.293068 systemd[1]: Started cri-containerd-0e3f056e388101b6c44d00bc9cbc4bfc86e2b9cc82331d7ada3253621d64e771.scope - libcontainer container 0e3f056e388101b6c44d00bc9cbc4bfc86e2b9cc82331d7ada3253621d64e771.
Oct 8 19:57:45.399950 containerd[1460]: time="2024-10-08T19:57:45.399725131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6c8e0f972be671a2ebac6f9e8395098d,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb0b085c31e13e3fcba864c41be525c6dfaa89dfee8254a5908f8f61f9582c62\""
Oct 8 19:57:45.401171 containerd[1460]: time="2024-10-08T19:57:45.401138643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2fcea4df269cc1e6513f9e3e768ded5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"19bb4d60d5d61c2738618a7782dc03b709163e4510ea288510a5b21af1b7ee8e\""
Oct 8 19:57:45.401277 kubelet[2238]: E1008 19:57:45.401243 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:57:45.402766 kubelet[2238]: E1008 19:57:45.402678 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:57:45.405769 containerd[1460]: time="2024-10-08T19:57:45.405732891Z" level=info msg="CreateContainer within sandbox \"bb0b085c31e13e3fcba864c41be525c6dfaa89dfee8254a5908f8f61f9582c62\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Oct 8 19:57:45.406765 containerd[1460]: time="2024-10-08T19:57:45.406729833Z" level=info msg="CreateContainer within sandbox \"19bb4d60d5d61c2738618a7782dc03b709163e4510ea288510a5b21af1b7ee8e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Oct 8 19:57:45.411382 containerd[1460]: time="2024-10-08T19:57:45.411338809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e5c757a7a09759fc423ca409747c56ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e3f056e388101b6c44d00bc9cbc4bfc86e2b9cc82331d7ada3253621d64e771\""
Oct 8 19:57:45.411967 kubelet[2238]: E1008 19:57:45.411941 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:57:45.414258 containerd[1460]: time="2024-10-08T19:57:45.414223223Z" level=info msg="CreateContainer within sandbox \"0e3f056e388101b6c44d00bc9cbc4bfc86e2b9cc82331d7ada3253621d64e771\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Oct 8 19:57:45.460739 containerd[1460]: time="2024-10-08T19:57:45.460656437Z" level=info msg="CreateContainer within sandbox \"19bb4d60d5d61c2738618a7782dc03b709163e4510ea288510a5b21af1b7ee8e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bf83284bfab062764d80ee1c228eeceac8e232564a5518e3f876275946462ac1\""
Oct 8 19:57:45.461876 containerd[1460]: time="2024-10-08T19:57:45.461848418Z" level=info msg="StartContainer for \"bf83284bfab062764d80ee1c228eeceac8e232564a5518e3f876275946462ac1\""
Oct 8 19:57:45.466832 containerd[1460]: time="2024-10-08T19:57:45.466803101Z" level=info msg="CreateContainer within sandbox \"bb0b085c31e13e3fcba864c41be525c6dfaa89dfee8254a5908f8f61f9582c62\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fbcec4531d13eb303d61f0a98cf2ec448f80a22317b04ce275d4a2580a9982e0\""
Oct 8 19:57:45.467491 containerd[1460]: time="2024-10-08T19:57:45.467410745Z" level=info msg="StartContainer for \"fbcec4531d13eb303d61f0a98cf2ec448f80a22317b04ce275d4a2580a9982e0\""
Oct 8 19:57:45.470940 containerd[1460]: time="2024-10-08T19:57:45.470902200Z" level=info msg="CreateContainer within sandbox \"0e3f056e388101b6c44d00bc9cbc4bfc86e2b9cc82331d7ada3253621d64e771\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"884e120574c7b04e55dcb5369e04ca691d26491e08df889e08685ab99bd5d722\""
Oct 8 19:57:45.473366 containerd[1460]: time="2024-10-08T19:57:45.473240928Z" level=info msg="StartContainer for \"884e120574c7b04e55dcb5369e04ca691d26491e08df889e08685ab99bd5d722\""
Oct 8 19:57:45.502245 systemd[1]: Started cri-containerd-bf83284bfab062764d80ee1c228eeceac8e232564a5518e3f876275946462ac1.scope - libcontainer container bf83284bfab062764d80ee1c228eeceac8e232564a5518e3f876275946462ac1.
Oct 8 19:57:45.506605 systemd[1]: Started cri-containerd-884e120574c7b04e55dcb5369e04ca691d26491e08df889e08685ab99bd5d722.scope - libcontainer container 884e120574c7b04e55dcb5369e04ca691d26491e08df889e08685ab99bd5d722.
Oct 8 19:57:45.508488 systemd[1]: Started cri-containerd-fbcec4531d13eb303d61f0a98cf2ec448f80a22317b04ce275d4a2580a9982e0.scope - libcontainer container fbcec4531d13eb303d61f0a98cf2ec448f80a22317b04ce275d4a2580a9982e0.
Oct 8 19:57:45.945549 containerd[1460]: time="2024-10-08T19:57:45.944582819Z" level=info msg="StartContainer for \"884e120574c7b04e55dcb5369e04ca691d26491e08df889e08685ab99bd5d722\" returns successfully"
Oct 8 19:57:45.945549 containerd[1460]: time="2024-10-08T19:57:45.944645116Z" level=info msg="StartContainer for \"fbcec4531d13eb303d61f0a98cf2ec448f80a22317b04ce275d4a2580a9982e0\" returns successfully"
Oct 8 19:57:45.945549 containerd[1460]: time="2024-10-08T19:57:45.944606673Z" level=info msg="StartContainer for \"bf83284bfab062764d80ee1c228eeceac8e232564a5518e3f876275946462ac1\" returns successfully"
Oct 8 19:57:46.217868 kubelet[2238]: E1008 19:57:46.216757 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:57:46.219849 kubelet[2238]: E1008 19:57:46.219571 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:57:46.221083 kubelet[2238]: E1008 19:57:46.220974 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:57:46.462473 kubelet[2238]: I1008 19:57:46.462431 2238 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Oct 8 19:57:46.978251 kubelet[2238]: E1008 19:57:46.978196 2238 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Oct 8 19:57:47.202697 kubelet[2238]: I1008 19:57:47.202641 2238 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Oct 8 19:57:47.222682 kubelet[2238]: E1008 19:57:47.222652 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:57:47.393990 kubelet[2238]: E1008 19:57:47.393709 2238 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17fc9285bf6a6469 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 19:57:40.143154281 +0000 UTC m=+0.441308138,LastTimestamp:2024-10-08 19:57:40.143154281 +0000 UTC m=+0.441308138,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 8 19:57:47.411316 kubelet[2238]: E1008 19:57:47.410935 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:47.511610 kubelet[2238]: E1008 19:57:47.511541 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:47.612263 kubelet[2238]: E1008 19:57:47.612206 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:47.683912 kubelet[2238]: E1008 19:57:47.683714 2238 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17fc9285bfc9dfc2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 19:57:40.149411778 +0000 UTC m=+0.447565635,LastTimestamp:2024-10-08 19:57:40.149411778 +0000 UTC m=+0.447565635,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 8 19:57:47.712813 kubelet[2238]: E1008 19:57:47.712764 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:47.754661 kubelet[2238]: E1008 19:57:47.754540 2238 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17fc9285c0b97c6f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 19:57:40.165114991 +0000 UTC m=+0.463268848,LastTimestamp:2024-10-08 19:57:40.165114991 +0000 UTC m=+0.463268848,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 8 19:57:47.813843 kubelet[2238]: E1008 19:57:47.813780 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:47.914426 kubelet[2238]: E1008 19:57:47.914375 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:48.015346 kubelet[2238]: E1008 19:57:48.015214 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:48.116140 kubelet[2238]: E1008 19:57:48.116099 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:48.190600 kubelet[2238]: E1008 19:57:48.190385 2238 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17fc9285c0b98c48 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 19:57:40.165119048 +0000 UTC m=+0.463272905,LastTimestamp:2024-10-08 19:57:40.165119048 +0000 UTC m=+0.463272905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 8 19:57:48.216479 kubelet[2238]: E1008 19:57:48.216419 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:48.317186 kubelet[2238]: E1008 19:57:48.317124 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:48.417773 kubelet[2238]: E1008 19:57:48.417716 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:48.518746 kubelet[2238]: E1008 19:57:48.518691 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:48.619108 kubelet[2238]: E1008 19:57:48.618811 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:48.719718 kubelet[2238]: E1008 19:57:48.719659 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:48.820164 kubelet[2238]: E1008 19:57:48.820115 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:48.924160 kubelet[2238]: E1008 19:57:48.921142 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:49.022292 kubelet[2238]: E1008 19:57:49.022235 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:49.123344 kubelet[2238]: E1008 19:57:49.123276 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:49.223578 kubelet[2238]: E1008 19:57:49.223446 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:49.323822 kubelet[2238]: E1008 19:57:49.323772 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:49.333690 kubelet[2238]: E1008 19:57:49.333671 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:57:49.417777 kubelet[2238]: E1008 19:57:49.417728 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:57:49.424587 kubelet[2238]: E1008 19:57:49.424540 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:49.525864 kubelet[2238]: E1008 19:57:49.525704 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:49.626875 kubelet[2238]: E1008 19:57:49.626817 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:49.726986 kubelet[2238]: E1008 19:57:49.726916 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:49.827375 kubelet[2238]: E1008 19:57:49.827310 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:49.927988 kubelet[2238]: E1008 19:57:49.927928 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:50.028724 kubelet[2238]: E1008 19:57:50.028669 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:50.129123 kubelet[2238]: E1008 19:57:50.128971 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:50.229459 kubelet[2238]: E1008 19:57:50.229411 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:50.329621 kubelet[2238]: E1008 19:57:50.329549 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:50.430201 kubelet[2238]: E1008 19:57:50.430077 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:50.530777 kubelet[2238]: E1008 19:57:50.530713 2238 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 8 19:57:51.132762 systemd[1]: Reloading requested from client PID 2538 ('systemctl') (unit session-7.scope)...
Oct 8 19:57:51.132780 systemd[1]: Reloading...
Oct 8 19:57:51.142546 kubelet[2238]: I1008 19:57:51.142513 2238 apiserver.go:52] "Watching apiserver"
Oct 8 19:57:51.147690 kubelet[2238]: I1008 19:57:51.147618 2238 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Oct 8 19:57:51.209122 zram_generator::config[2580]: No configuration found.
Oct 8 19:57:51.319197 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:57:51.410423 systemd[1]: Reloading finished in 277 ms.
Oct 8 19:57:51.457716 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:57:51.472933 systemd[1]: kubelet.service: Deactivated successfully.
Oct 8 19:57:51.473236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:57:51.473291 systemd[1]: kubelet.service: Consumed 1.039s CPU time, 116.1M memory peak, 0B memory swap peak.
Oct 8 19:57:51.486586 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:57:51.645303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:57:51.651004 (kubelet)[2622]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 8 19:57:51.705704 kubelet[2622]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:57:51.705704 kubelet[2622]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 8 19:57:51.705704 kubelet[2622]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:57:51.706852 kubelet[2622]: I1008 19:57:51.706536 2622 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 8 19:57:51.712250 kubelet[2622]: I1008 19:57:51.712207 2622 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Oct 8 19:57:51.712250 kubelet[2622]: I1008 19:57:51.712240 2622 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 8 19:57:51.712510 kubelet[2622]: I1008 19:57:51.712485 2622 server.go:927] "Client rotation is on, will bootstrap in background"
Oct 8 19:57:51.714944 kubelet[2622]: I1008 19:57:51.714908 2622 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Oct 8 19:57:51.716068 kubelet[2622]: I1008 19:57:51.716002 2622 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 8 19:57:51.723687 kubelet[2622]: I1008 19:57:51.723652 2622 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 8 19:57:51.723988 kubelet[2622]: I1008 19:57:51.723869 2622 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 8 19:57:51.724125 kubelet[2622]: I1008 19:57:51.723899 2622 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 8 19:57:51.724125 kubelet[2622]: I1008 19:57:51.724114 2622 topology_manager.go:138] "Creating topology manager with none policy"
Oct 8 19:57:51.724125 kubelet[2622]: I1008 19:57:51.724127 2622 container_manager_linux.go:301] "Creating device plugin manager"
Oct 8 19:57:51.725146 kubelet[2622]: I1008 19:57:51.725078 2622 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:57:51.725352 kubelet[2622]: I1008 19:57:51.725204 2622 kubelet.go:400] "Attempting to sync node with API server"
Oct 8 19:57:51.725352 kubelet[2622]: I1008 19:57:51.725217 2622 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 8 19:57:51.725352 kubelet[2622]: I1008 19:57:51.725236 2622 kubelet.go:312] "Adding apiserver pod source"
Oct 8 19:57:51.725456 kubelet[2622]: I1008 19:57:51.725389 2622 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 8 19:57:51.726535 kubelet[2622]: I1008 19:57:51.726439 2622 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Oct 8 19:57:51.727191 kubelet[2622]: I1008 19:57:51.726727 2622 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 8 19:57:51.728372 kubelet[2622]: I1008 19:57:51.728342 2622 server.go:1264] "Started kubelet"
Oct 8 19:57:51.732723 kubelet[2622]: I1008 19:57:51.730034 2622 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 8 19:57:51.733387 kubelet[2622]: I1008 19:57:51.733318 2622 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Oct 8 19:57:51.734582 kubelet[2622]: I1008 19:57:51.734504 2622 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 8 19:57:51.735227 kubelet[2622]: I1008 19:57:51.734883 2622 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 8 19:57:51.735537 kubelet[2622]: I1008 19:57:51.735510 2622 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 8 19:57:51.736663 kubelet[2622]: I1008 19:57:51.736633 2622 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Oct 8 19:57:51.736821 kubelet[2622]: I1008 19:57:51.736787 2622 reconciler.go:26] "Reconciler: start to sync state"
Oct 8 19:57:51.737752 kubelet[2622]: I1008 19:57:51.737719 2622 server.go:455] "Adding debug handlers to kubelet server"
Oct 8 19:57:51.741205 kubelet[2622]: I1008 19:57:51.741167 2622 factory.go:221] Registration of the systemd container factory successfully
Oct 8 19:57:51.741410 kubelet[2622]: I1008 19:57:51.741369 2622 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 8 19:57:51.744855 kubelet[2622]: I1008 19:57:51.743270 2622 factory.go:221] Registration of the containerd container factory successfully
Oct 8 19:57:51.745195 kubelet[2622]: E1008 19:57:51.745153 2622 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 8 19:57:51.746787 kubelet[2622]: I1008 19:57:51.746746 2622 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 8 19:57:51.748789 kubelet[2622]: I1008 19:57:51.748753 2622 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 8 19:57:51.748853 kubelet[2622]: I1008 19:57:51.748805 2622 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 8 19:57:51.748853 kubelet[2622]: I1008 19:57:51.748828 2622 kubelet.go:2337] "Starting kubelet main sync loop"
Oct 8 19:57:51.748915 kubelet[2622]: E1008 19:57:51.748879 2622 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 8 19:57:51.784846 kubelet[2622]: I1008 19:57:51.784816 2622 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 8 19:57:51.784846 kubelet[2622]: I1008 19:57:51.784835 2622 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 8 19:57:51.784846 kubelet[2622]: I1008 19:57:51.784855 2622 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:57:51.785075 kubelet[2622]: I1008 19:57:51.785008 2622 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 8 19:57:51.785075 kubelet[2622]: I1008 19:57:51.785021 2622 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 8 19:57:51.785075 kubelet[2622]: I1008 19:57:51.785043 2622 policy_none.go:49] "None policy: Start"
Oct 8 19:57:51.785892 kubelet[2622]: I1008 19:57:51.785542 2622 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 8 19:57:51.785892 kubelet[2622]: I1008 19:57:51.785579 2622 state_mem.go:35] "Initializing new in-memory state store"
Oct 8 19:57:51.785892 kubelet[2622]: I1008 19:57:51.785782 2622 state_mem.go:75] "Updated machine memory state"
Oct 8 19:57:51.790184 kubelet[2622]: I1008 19:57:51.790168 2622 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 8 19:57:51.790601 kubelet[2622]: I1008 19:57:51.790392 2622 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 8 19:57:51.790601 kubelet[2622]: I1008 19:57:51.790491 2622 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 8 19:57:51.841915 kubelet[2622]: I1008 19:57:51.841887 2622 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Oct 8 19:57:51.849064 kubelet[2622]: I1008 19:57:51.849005 2622 topology_manager.go:215] "Topology Admit Handler" podUID="6c8e0f972be671a2ebac6f9e8395098d" podNamespace="kube-system" podName="kube-apiserver-localhost"
Oct 8 19:57:51.849168 kubelet[2622]: I1008 19:57:51.849115 2622 topology_manager.go:215] "Topology Admit Handler" podUID="e5c757a7a09759fc423ca409747c56ae" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Oct 8 19:57:51.849213 kubelet[2622]: I1008 19:57:51.849174 2622 topology_manager.go:215] "Topology Admit Handler" podUID="2fcea4df269cc1e6513f9e3e768ded5a" podNamespace="kube-system" podName="kube-scheduler-localhost"
Oct 8 19:57:51.851898 kubelet[2622]: I1008 19:57:51.851865 2622 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Oct 8 19:57:51.852000 kubelet[2622]: I1008 19:57:51.851985 2622 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Oct 8 19:57:52.038879 kubelet[2622]: I1008 19:57:52.038167 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:57:52.038879 kubelet[2622]: I1008 19:57:52.038215 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c8e0f972be671a2ebac6f9e8395098d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c8e0f972be671a2ebac6f9e8395098d\") " pod="kube-system/kube-apiserver-localhost"
Oct 8 19:57:52.038879 kubelet[2622]: I1008 19:57:52.038243 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6c8e0f972be671a2ebac6f9e8395098d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c8e0f972be671a2ebac6f9e8395098d\") " pod="kube-system/kube-apiserver-localhost"
Oct 8 19:57:52.038879 kubelet[2622]: I1008 19:57:52.038263 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6c8e0f972be671a2ebac6f9e8395098d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6c8e0f972be671a2ebac6f9e8395098d\") " pod="kube-system/kube-apiserver-localhost"
Oct 8 19:57:52.038879 kubelet[2622]: I1008 19:57:52.038307 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:57:52.039148 kubelet[2622]: I1008 19:57:52.038323 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:57:52.039148 kubelet[2622]: I1008 19:57:52.038339 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:57:52.039148 kubelet[2622]: I1008 19:57:52.038420 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:57:52.039148 kubelet[2622]: I1008 19:57:52.038485 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fcea4df269cc1e6513f9e3e768ded5a-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2fcea4df269cc1e6513f9e3e768ded5a\") " pod="kube-system/kube-scheduler-localhost"
Oct 8 19:57:52.160899 kubelet[2622]: E1008 19:57:52.160847 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:57:52.161314 kubelet[2622]: E1008 19:57:52.161249 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:57:52.161666 kubelet[2622]: E1008 19:57:52.161629 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:57:52.162778 sudo[2655]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Oct 8 19:57:52.163220 sudo[2655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Oct 8 19:57:52.637731 sudo[2655]: pam_unix(sudo:session): session closed for user root
Oct 8 19:57:52.726273 kubelet[2622]: I1008 19:57:52.726217 2622 apiserver.go:52] "Watching apiserver"
Oct 8 19:57:52.737701 kubelet[2622]: I1008 19:57:52.737639 2622 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Oct 8 19:57:52.763213 kubelet[2622]: E1008 19:57:52.763176 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:57:52.764373 kubelet[2622]: E1008 19:57:52.764330 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:57:52.764578 kubelet[2622]: E1008 19:57:52.764545 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:57:52.910845 kubelet[2622]: I1008 19:57:52.910689 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.910659659 podStartE2EDuration="1.910659659s" podCreationTimestamp="2024-10-08 19:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:57:52.856443805 +0000 UTC m=+1.200756030" watchObservedRunningTime="2024-10-08 19:57:52.910659659 +0000 UTC m=+1.254971884"
Oct 8 19:57:52.911008 kubelet[2622]: I1008 19:57:52.910838 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.910831653 podStartE2EDuration="1.910831653s" podCreationTimestamp="2024-10-08 19:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:57:52.910362597 +0000 UTC m=+1.254674852" watchObservedRunningTime="2024-10-08 19:57:52.910831653 +0000 UTC m=+1.255143899"
Oct 8 19:57:52.990208 kubelet[2622]: I1008 19:57:52.990119 2622 pod_startup_latency_tracker.go:104] "Observed pod startup
duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.990094054 podStartE2EDuration="1.990094054s" podCreationTimestamp="2024-10-08 19:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:57:52.989666946 +0000 UTC m=+1.333979191" watchObservedRunningTime="2024-10-08 19:57:52.990094054 +0000 UTC m=+1.334406299" Oct 8 19:57:53.764575 kubelet[2622]: E1008 19:57:53.764519 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:54.172769 sudo[1639]: pam_unix(sudo:session): session closed for user root Oct 8 19:57:54.175879 sshd[1636]: pam_unix(sshd:session): session closed for user core Oct 8 19:57:54.181421 systemd[1]: sshd@6-10.0.0.67:22-10.0.0.1:42032.service: Deactivated successfully. Oct 8 19:57:54.183800 systemd[1]: session-7.scope: Deactivated successfully. Oct 8 19:57:54.184032 systemd[1]: session-7.scope: Consumed 5.655s CPU time, 196.2M memory peak, 0B memory swap peak. Oct 8 19:57:54.184707 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit. Oct 8 19:57:54.186474 systemd-logind[1446]: Removed session 7. 
Oct 8 19:57:56.996734 kubelet[2622]: E1008 19:57:56.996694 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:57.202033 kubelet[2622]: E1008 19:57:57.201999 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:57.771478 kubelet[2622]: E1008 19:57:57.770979 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:57.771478 kubelet[2622]: E1008 19:57:57.771094 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:58.210333 kubelet[2622]: E1008 19:57:58.210277 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:58.771796 kubelet[2622]: E1008 19:57:58.771752 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:57:58.772596 kubelet[2622]: E1008 19:57:58.772546 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:05.295847 kubelet[2622]: I1008 19:58:05.295805 2622 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 8 19:58:05.296398 containerd[1460]: time="2024-10-08T19:58:05.296210435Z" level=info msg="No cni config template is specified, wait for other system components to drop the 
config." Oct 8 19:58:05.296744 kubelet[2622]: I1008 19:58:05.296415 2622 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 8 19:58:06.068416 kubelet[2622]: I1008 19:58:06.068349 2622 topology_manager.go:215] "Topology Admit Handler" podUID="c1c48826-f756-4534-b82d-9ef8e21e0c03" podNamespace="kube-system" podName="kube-proxy-xrk82" Oct 8 19:58:06.072215 kubelet[2622]: I1008 19:58:06.072150 2622 topology_manager.go:215] "Topology Admit Handler" podUID="69548cae-9431-4eb0-b839-3a9fd62b74de" podNamespace="kube-system" podName="cilium-4dtzb" Oct 8 19:58:06.079540 systemd[1]: Created slice kubepods-besteffort-podc1c48826_f756_4534_b82d_9ef8e21e0c03.slice - libcontainer container kubepods-besteffort-podc1c48826_f756_4534_b82d_9ef8e21e0c03.slice. Oct 8 19:58:06.085339 kubelet[2622]: W1008 19:58:06.083923 2622 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Oct 8 19:58:06.085339 kubelet[2622]: E1008 19:58:06.083979 2622 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Oct 8 19:58:06.085339 kubelet[2622]: W1008 19:58:06.083923 2622 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Oct 8 19:58:06.085339 kubelet[2622]: E1008 19:58:06.084007 2622 reflector.go:150] 
object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Oct 8 19:58:06.098156 systemd[1]: Created slice kubepods-burstable-pod69548cae_9431_4eb0_b839_3a9fd62b74de.slice - libcontainer container kubepods-burstable-pod69548cae_9431_4eb0_b839_3a9fd62b74de.slice. Oct 8 19:58:06.120262 kubelet[2622]: I1008 19:58:06.120207 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-xtables-lock\") pod \"cilium-4dtzb\" (UID: \"69548cae-9431-4eb0-b839-3a9fd62b74de\") " pod="kube-system/cilium-4dtzb" Oct 8 19:58:06.120262 kubelet[2622]: I1008 19:58:06.120262 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c1c48826-f756-4534-b82d-9ef8e21e0c03-kube-proxy\") pod \"kube-proxy-xrk82\" (UID: \"c1c48826-f756-4534-b82d-9ef8e21e0c03\") " pod="kube-system/kube-proxy-xrk82" Oct 8 19:58:06.120479 kubelet[2622]: I1008 19:58:06.120284 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-cilium-run\") pod \"cilium-4dtzb\" (UID: \"69548cae-9431-4eb0-b839-3a9fd62b74de\") " pod="kube-system/cilium-4dtzb" Oct 8 19:58:06.120479 kubelet[2622]: I1008 19:58:06.120319 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-cni-path\") pod \"cilium-4dtzb\" (UID: \"69548cae-9431-4eb0-b839-3a9fd62b74de\") " pod="kube-system/cilium-4dtzb" Oct 8 
19:58:06.120479 kubelet[2622]: I1008 19:58:06.120341 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-etc-cni-netd\") pod \"cilium-4dtzb\" (UID: \"69548cae-9431-4eb0-b839-3a9fd62b74de\") " pod="kube-system/cilium-4dtzb" Oct 8 19:58:06.120479 kubelet[2622]: I1008 19:58:06.120361 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69548cae-9431-4eb0-b839-3a9fd62b74de-hubble-tls\") pod \"cilium-4dtzb\" (UID: \"69548cae-9431-4eb0-b839-3a9fd62b74de\") " pod="kube-system/cilium-4dtzb" Oct 8 19:58:06.120479 kubelet[2622]: I1008 19:58:06.120387 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pldkn\" (UniqueName: \"kubernetes.io/projected/c1c48826-f756-4534-b82d-9ef8e21e0c03-kube-api-access-pldkn\") pod \"kube-proxy-xrk82\" (UID: \"c1c48826-f756-4534-b82d-9ef8e21e0c03\") " pod="kube-system/kube-proxy-xrk82" Oct 8 19:58:06.120479 kubelet[2622]: I1008 19:58:06.120409 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-lib-modules\") pod \"cilium-4dtzb\" (UID: \"69548cae-9431-4eb0-b839-3a9fd62b74de\") " pod="kube-system/cilium-4dtzb" Oct 8 19:58:06.120648 kubelet[2622]: I1008 19:58:06.120429 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69548cae-9431-4eb0-b839-3a9fd62b74de-cilium-config-path\") pod \"cilium-4dtzb\" (UID: \"69548cae-9431-4eb0-b839-3a9fd62b74de\") " pod="kube-system/cilium-4dtzb" Oct 8 19:58:06.120648 kubelet[2622]: I1008 19:58:06.120451 2622 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-host-proc-sys-net\") pod \"cilium-4dtzb\" (UID: \"69548cae-9431-4eb0-b839-3a9fd62b74de\") " pod="kube-system/cilium-4dtzb" Oct 8 19:58:06.120648 kubelet[2622]: I1008 19:58:06.120476 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-bpf-maps\") pod \"cilium-4dtzb\" (UID: \"69548cae-9431-4eb0-b839-3a9fd62b74de\") " pod="kube-system/cilium-4dtzb" Oct 8 19:58:06.120648 kubelet[2622]: I1008 19:58:06.120496 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-hostproc\") pod \"cilium-4dtzb\" (UID: \"69548cae-9431-4eb0-b839-3a9fd62b74de\") " pod="kube-system/cilium-4dtzb" Oct 8 19:58:06.120648 kubelet[2622]: I1008 19:58:06.120518 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1c48826-f756-4534-b82d-9ef8e21e0c03-lib-modules\") pod \"kube-proxy-xrk82\" (UID: \"c1c48826-f756-4534-b82d-9ef8e21e0c03\") " pod="kube-system/kube-proxy-xrk82" Oct 8 19:58:06.120648 kubelet[2622]: I1008 19:58:06.120539 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1c48826-f756-4534-b82d-9ef8e21e0c03-xtables-lock\") pod \"kube-proxy-xrk82\" (UID: \"c1c48826-f756-4534-b82d-9ef8e21e0c03\") " pod="kube-system/kube-proxy-xrk82" Oct 8 19:58:06.120835 kubelet[2622]: I1008 19:58:06.120560 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-cilium-cgroup\") pod \"cilium-4dtzb\" (UID: \"69548cae-9431-4eb0-b839-3a9fd62b74de\") " pod="kube-system/cilium-4dtzb" Oct 8 19:58:06.120835 kubelet[2622]: I1008 19:58:06.120595 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69548cae-9431-4eb0-b839-3a9fd62b74de-clustermesh-secrets\") pod \"cilium-4dtzb\" (UID: \"69548cae-9431-4eb0-b839-3a9fd62b74de\") " pod="kube-system/cilium-4dtzb" Oct 8 19:58:06.120835 kubelet[2622]: I1008 19:58:06.120618 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-host-proc-sys-kernel\") pod \"cilium-4dtzb\" (UID: \"69548cae-9431-4eb0-b839-3a9fd62b74de\") " pod="kube-system/cilium-4dtzb" Oct 8 19:58:06.120835 kubelet[2622]: I1008 19:58:06.120644 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64cds\" (UniqueName: \"kubernetes.io/projected/69548cae-9431-4eb0-b839-3a9fd62b74de-kube-api-access-64cds\") pod \"cilium-4dtzb\" (UID: \"69548cae-9431-4eb0-b839-3a9fd62b74de\") " pod="kube-system/cilium-4dtzb" Oct 8 19:58:06.228129 kubelet[2622]: I1008 19:58:06.227595 2622 topology_manager.go:215] "Topology Admit Handler" podUID="a5f5798d-82dd-4643-9741-3d66121cb8b9" podNamespace="kube-system" podName="cilium-operator-599987898-7bq7c" Oct 8 19:58:06.254176 systemd[1]: Created slice kubepods-besteffort-poda5f5798d_82dd_4643_9741_3d66121cb8b9.slice - libcontainer container kubepods-besteffort-poda5f5798d_82dd_4643_9741_3d66121cb8b9.slice. 
Oct 8 19:58:06.322786 kubelet[2622]: I1008 19:58:06.322636 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a5f5798d-82dd-4643-9741-3d66121cb8b9-cilium-config-path\") pod \"cilium-operator-599987898-7bq7c\" (UID: \"a5f5798d-82dd-4643-9741-3d66121cb8b9\") " pod="kube-system/cilium-operator-599987898-7bq7c" Oct 8 19:58:06.322786 kubelet[2622]: I1008 19:58:06.322709 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8r6d\" (UniqueName: \"kubernetes.io/projected/a5f5798d-82dd-4643-9741-3d66121cb8b9-kube-api-access-v8r6d\") pod \"cilium-operator-599987898-7bq7c\" (UID: \"a5f5798d-82dd-4643-9741-3d66121cb8b9\") " pod="kube-system/cilium-operator-599987898-7bq7c" Oct 8 19:58:06.394766 kubelet[2622]: E1008 19:58:06.394714 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:06.395574 containerd[1460]: time="2024-10-08T19:58:06.395506317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xrk82,Uid:c1c48826-f756-4534-b82d-9ef8e21e0c03,Namespace:kube-system,Attempt:0,}" Oct 8 19:58:06.427808 containerd[1460]: time="2024-10-08T19:58:06.427661454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:58:06.427808 containerd[1460]: time="2024-10-08T19:58:06.427735202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:58:06.427808 containerd[1460]: time="2024-10-08T19:58:06.427749940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:06.428172 containerd[1460]: time="2024-10-08T19:58:06.427869405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:06.450213 systemd[1]: Started cri-containerd-98e475ebcf0af75dbbea70700ed80e303e3646e775ffdde69a22dfef648682a0.scope - libcontainer container 98e475ebcf0af75dbbea70700ed80e303e3646e775ffdde69a22dfef648682a0. Oct 8 19:58:06.478619 containerd[1460]: time="2024-10-08T19:58:06.478572252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xrk82,Uid:c1c48826-f756-4534-b82d-9ef8e21e0c03,Namespace:kube-system,Attempt:0,} returns sandbox id \"98e475ebcf0af75dbbea70700ed80e303e3646e775ffdde69a22dfef648682a0\"" Oct 8 19:58:06.479388 kubelet[2622]: E1008 19:58:06.479365 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:06.481692 containerd[1460]: time="2024-10-08T19:58:06.481648009Z" level=info msg="CreateContainer within sandbox \"98e475ebcf0af75dbbea70700ed80e303e3646e775ffdde69a22dfef648682a0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 8 19:58:06.500214 containerd[1460]: time="2024-10-08T19:58:06.500157967Z" level=info msg="CreateContainer within sandbox \"98e475ebcf0af75dbbea70700ed80e303e3646e775ffdde69a22dfef648682a0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5fe72bf49838a3ca83a57b6c600dfaabb12ffe541aee12592362983ab6047b7e\"" Oct 8 19:58:06.500959 containerd[1460]: time="2024-10-08T19:58:06.500915543Z" level=info msg="StartContainer for \"5fe72bf49838a3ca83a57b6c600dfaabb12ffe541aee12592362983ab6047b7e\"" Oct 8 19:58:06.534324 systemd[1]: Started cri-containerd-5fe72bf49838a3ca83a57b6c600dfaabb12ffe541aee12592362983ab6047b7e.scope - libcontainer container 
5fe72bf49838a3ca83a57b6c600dfaabb12ffe541aee12592362983ab6047b7e. Oct 8 19:58:06.569922 containerd[1460]: time="2024-10-08T19:58:06.569870655Z" level=info msg="StartContainer for \"5fe72bf49838a3ca83a57b6c600dfaabb12ffe541aee12592362983ab6047b7e\" returns successfully" Oct 8 19:58:06.785829 kubelet[2622]: E1008 19:58:06.785435 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:07.157617 kubelet[2622]: E1008 19:58:07.157512 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:07.158243 containerd[1460]: time="2024-10-08T19:58:07.158196741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-7bq7c,Uid:a5f5798d-82dd-4643-9741-3d66121cb8b9,Namespace:kube-system,Attempt:0,}" Oct 8 19:58:07.184890 containerd[1460]: time="2024-10-08T19:58:07.184681712Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:58:07.185382 containerd[1460]: time="2024-10-08T19:58:07.185137580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:58:07.185382 containerd[1460]: time="2024-10-08T19:58:07.185169540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:07.185382 containerd[1460]: time="2024-10-08T19:58:07.185307950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:07.208333 systemd[1]: Started cri-containerd-82dc90374726927888fefc3d318d03cb14eee3c7405fc75fea114149b144dc45.scope - libcontainer container 82dc90374726927888fefc3d318d03cb14eee3c7405fc75fea114149b144dc45. Oct 8 19:58:07.225230 kubelet[2622]: E1008 19:58:07.225178 2622 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Oct 8 19:58:07.225385 kubelet[2622]: E1008 19:58:07.225305 2622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69548cae-9431-4eb0-b839-3a9fd62b74de-clustermesh-secrets podName:69548cae-9431-4eb0-b839-3a9fd62b74de nodeName:}" failed. No retries permitted until 2024-10-08 19:58:07.725260811 +0000 UTC m=+16.069573036 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/69548cae-9431-4eb0-b839-3a9fd62b74de-clustermesh-secrets") pod "cilium-4dtzb" (UID: "69548cae-9431-4eb0-b839-3a9fd62b74de") : failed to sync secret cache: timed out waiting for the condition Oct 8 19:58:07.248297 containerd[1460]: time="2024-10-08T19:58:07.248234318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-7bq7c,Uid:a5f5798d-82dd-4643-9741-3d66121cb8b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"82dc90374726927888fefc3d318d03cb14eee3c7405fc75fea114149b144dc45\"" Oct 8 19:58:07.249013 kubelet[2622]: E1008 19:58:07.248988 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:07.250240 containerd[1460]: time="2024-10-08T19:58:07.250189706Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 8 19:58:07.903395 kubelet[2622]: E1008 19:58:07.903350 2622 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:07.904030 containerd[1460]: time="2024-10-08T19:58:07.903967577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4dtzb,Uid:69548cae-9431-4eb0-b839-3a9fd62b74de,Namespace:kube-system,Attempt:0,}" Oct 8 19:58:08.454216 containerd[1460]: time="2024-10-08T19:58:08.454103761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:58:08.454216 containerd[1460]: time="2024-10-08T19:58:08.454188470Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:58:08.454216 containerd[1460]: time="2024-10-08T19:58:08.454204089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:08.454454 containerd[1460]: time="2024-10-08T19:58:08.454342430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:08.477400 systemd[1]: Started cri-containerd-4f6a1565c093b4ff8830a86aec8d84d95a44229daa79f5c755b067244d96854a.scope - libcontainer container 4f6a1565c093b4ff8830a86aec8d84d95a44229daa79f5c755b067244d96854a. 
Oct 8 19:58:08.504961 containerd[1460]: time="2024-10-08T19:58:08.504911701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4dtzb,Uid:69548cae-9431-4eb0-b839-3a9fd62b74de,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f6a1565c093b4ff8830a86aec8d84d95a44229daa79f5c755b067244d96854a\"" Oct 8 19:58:08.505794 kubelet[2622]: E1008 19:58:08.505769 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:09.097765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2231093113.mount: Deactivated successfully. Oct 8 19:58:10.077326 containerd[1460]: time="2024-10-08T19:58:10.077247181Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:10.113187 containerd[1460]: time="2024-10-08T19:58:10.113101159Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907225" Oct 8 19:58:10.162199 containerd[1460]: time="2024-10-08T19:58:10.162117690Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:10.164081 containerd[1460]: time="2024-10-08T19:58:10.164022411Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.913784534s" Oct 8 19:58:10.164081 containerd[1460]: 
time="2024-10-08T19:58:10.164070521Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Oct 8 19:58:10.165793 containerd[1460]: time="2024-10-08T19:58:10.165754817Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 8 19:58:10.166810 containerd[1460]: time="2024-10-08T19:58:10.166765066Z" level=info msg="CreateContainer within sandbox \"82dc90374726927888fefc3d318d03cb14eee3c7405fc75fea114149b144dc45\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 8 19:58:10.438409 containerd[1460]: time="2024-10-08T19:58:10.438260803Z" level=info msg="CreateContainer within sandbox \"82dc90374726927888fefc3d318d03cb14eee3c7405fc75fea114149b144dc45\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"eb2b954e66cf0c6d4b0d1bc0f0a820756a12b36814dd66ca30da0ba62a8261dd\"" Oct 8 19:58:10.439112 containerd[1460]: time="2024-10-08T19:58:10.439067911Z" level=info msg="StartContainer for \"eb2b954e66cf0c6d4b0d1bc0f0a820756a12b36814dd66ca30da0ba62a8261dd\"" Oct 8 19:58:10.473223 systemd[1]: Started cri-containerd-eb2b954e66cf0c6d4b0d1bc0f0a820756a12b36814dd66ca30da0ba62a8261dd.scope - libcontainer container eb2b954e66cf0c6d4b0d1bc0f0a820756a12b36814dd66ca30da0ba62a8261dd. 
Oct 8 19:58:10.540399 containerd[1460]: time="2024-10-08T19:58:10.540345312Z" level=info msg="StartContainer for \"eb2b954e66cf0c6d4b0d1bc0f0a820756a12b36814dd66ca30da0ba62a8261dd\" returns successfully" Oct 8 19:58:10.800141 kubelet[2622]: E1008 19:58:10.799403 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:10.826735 kubelet[2622]: I1008 19:58:10.826647 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xrk82" podStartSLOduration=4.826617435 podStartE2EDuration="4.826617435s" podCreationTimestamp="2024-10-08 19:58:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:58:06.802134116 +0000 UTC m=+15.146446371" watchObservedRunningTime="2024-10-08 19:58:10.826617435 +0000 UTC m=+19.170929660" Oct 8 19:58:10.827001 kubelet[2622]: I1008 19:58:10.826955 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-7bq7c" podStartSLOduration=1.9115141329999998 podStartE2EDuration="4.826948258s" podCreationTimestamp="2024-10-08 19:58:06 +0000 UTC" firstStartedPulling="2024-10-08 19:58:07.249697891 +0000 UTC m=+15.594010116" lastFinishedPulling="2024-10-08 19:58:10.165132016 +0000 UTC m=+18.509444241" observedRunningTime="2024-10-08 19:58:10.82593923 +0000 UTC m=+19.170251455" watchObservedRunningTime="2024-10-08 19:58:10.826948258 +0000 UTC m=+19.171260473" Oct 8 19:58:11.868821 kubelet[2622]: E1008 19:58:11.868780 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:18.857253 systemd[1]: Started sshd@7-10.0.0.67:22-10.0.0.1:33266.service - OpenSSH per-connection server daemon (10.0.0.1:33266). 
Oct 8 19:58:18.895770 sshd[3039]: Accepted publickey for core from 10.0.0.1 port 33266 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:58:18.897926 sshd[3039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:58:18.904158 systemd-logind[1446]: New session 8 of user core. Oct 8 19:58:18.911208 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 8 19:58:19.063509 sshd[3039]: pam_unix(sshd:session): session closed for user core Oct 8 19:58:19.069023 systemd[1]: sshd@7-10.0.0.67:22-10.0.0.1:33266.service: Deactivated successfully. Oct 8 19:58:19.071478 systemd[1]: session-8.scope: Deactivated successfully. Oct 8 19:58:19.072170 systemd-logind[1446]: Session 8 logged out. Waiting for processes to exit. Oct 8 19:58:19.073557 systemd-logind[1446]: Removed session 8. Oct 8 19:58:20.513986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2967341250.mount: Deactivated successfully. Oct 8 19:58:24.075038 systemd[1]: Started sshd@8-10.0.0.67:22-10.0.0.1:52032.service - OpenSSH per-connection server daemon (10.0.0.1:52032). Oct 8 19:58:24.498768 sshd[3079]: Accepted publickey for core from 10.0.0.1 port 52032 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:58:24.500674 sshd[3079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:58:24.503563 containerd[1460]: time="2024-10-08T19:58:24.503503457Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:24.504975 containerd[1460]: time="2024-10-08T19:58:24.504922922Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735351" Oct 8 19:58:24.506323 systemd-logind[1446]: New session 9 of user core. 
Oct 8 19:58:24.507396 containerd[1460]: time="2024-10-08T19:58:24.507340330Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:58:24.509749 containerd[1460]: time="2024-10-08T19:58:24.509706992Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.34391278s" Oct 8 19:58:24.509827 containerd[1460]: time="2024-10-08T19:58:24.509759301Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Oct 8 19:58:24.513347 containerd[1460]: time="2024-10-08T19:58:24.513302382Z" level=info msg="CreateContainer within sandbox \"4f6a1565c093b4ff8830a86aec8d84d95a44229daa79f5c755b067244d96854a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 8 19:58:24.514334 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 8 19:58:24.529751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4099826466.mount: Deactivated successfully. 
Oct 8 19:58:24.532192 containerd[1460]: time="2024-10-08T19:58:24.532130645Z" level=info msg="CreateContainer within sandbox \"4f6a1565c093b4ff8830a86aec8d84d95a44229daa79f5c755b067244d96854a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a73e1f278dbb684054ae85fe6dae657702b6a685f1a89ed7d735079d0678ea00\"" Oct 8 19:58:24.532908 containerd[1460]: time="2024-10-08T19:58:24.532824257Z" level=info msg="StartContainer for \"a73e1f278dbb684054ae85fe6dae657702b6a685f1a89ed7d735079d0678ea00\"" Oct 8 19:58:24.567652 systemd[1]: Started cri-containerd-a73e1f278dbb684054ae85fe6dae657702b6a685f1a89ed7d735079d0678ea00.scope - libcontainer container a73e1f278dbb684054ae85fe6dae657702b6a685f1a89ed7d735079d0678ea00. Oct 8 19:58:24.621475 systemd[1]: cri-containerd-a73e1f278dbb684054ae85fe6dae657702b6a685f1a89ed7d735079d0678ea00.scope: Deactivated successfully. Oct 8 19:58:24.688900 sshd[3079]: pam_unix(sshd:session): session closed for user core Oct 8 19:58:24.692766 systemd[1]: sshd@8-10.0.0.67:22-10.0.0.1:52032.service: Deactivated successfully. Oct 8 19:58:24.694946 systemd[1]: session-9.scope: Deactivated successfully. Oct 8 19:58:24.695702 systemd-logind[1446]: Session 9 logged out. Waiting for processes to exit. Oct 8 19:58:24.696937 systemd-logind[1446]: Removed session 9. Oct 8 19:58:24.709309 containerd[1460]: time="2024-10-08T19:58:24.709248854Z" level=info msg="StartContainer for \"a73e1f278dbb684054ae85fe6dae657702b6a685f1a89ed7d735079d0678ea00\" returns successfully" Oct 8 19:58:24.894920 kubelet[2622]: E1008 19:58:24.894854 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:25.527110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a73e1f278dbb684054ae85fe6dae657702b6a685f1a89ed7d735079d0678ea00-rootfs.mount: Deactivated successfully. 
Oct 8 19:58:25.562731 containerd[1460]: time="2024-10-08T19:58:25.560576205Z" level=info msg="shim disconnected" id=a73e1f278dbb684054ae85fe6dae657702b6a685f1a89ed7d735079d0678ea00 namespace=k8s.io Oct 8 19:58:25.562731 containerd[1460]: time="2024-10-08T19:58:25.562712114Z" level=warning msg="cleaning up after shim disconnected" id=a73e1f278dbb684054ae85fe6dae657702b6a685f1a89ed7d735079d0678ea00 namespace=k8s.io Oct 8 19:58:25.562731 containerd[1460]: time="2024-10-08T19:58:25.562722413Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:58:25.898463 kubelet[2622]: E1008 19:58:25.898415 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:25.901956 containerd[1460]: time="2024-10-08T19:58:25.901861793Z" level=info msg="CreateContainer within sandbox \"4f6a1565c093b4ff8830a86aec8d84d95a44229daa79f5c755b067244d96854a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 8 19:58:25.924421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1070273878.mount: Deactivated successfully. Oct 8 19:58:25.932161 containerd[1460]: time="2024-10-08T19:58:25.932092757Z" level=info msg="CreateContainer within sandbox \"4f6a1565c093b4ff8830a86aec8d84d95a44229daa79f5c755b067244d96854a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"301ea633b40e733b0c1617cc292e8e7c72b426ae9ead9c68765dd58e76876cba\"" Oct 8 19:58:25.932934 containerd[1460]: time="2024-10-08T19:58:25.932889684Z" level=info msg="StartContainer for \"301ea633b40e733b0c1617cc292e8e7c72b426ae9ead9c68765dd58e76876cba\"" Oct 8 19:58:25.969822 systemd[1]: Started cri-containerd-301ea633b40e733b0c1617cc292e8e7c72b426ae9ead9c68765dd58e76876cba.scope - libcontainer container 301ea633b40e733b0c1617cc292e8e7c72b426ae9ead9c68765dd58e76876cba. 
Oct 8 19:58:26.006467 containerd[1460]: time="2024-10-08T19:58:26.005295381Z" level=info msg="StartContainer for \"301ea633b40e733b0c1617cc292e8e7c72b426ae9ead9c68765dd58e76876cba\" returns successfully" Oct 8 19:58:26.018348 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 8 19:58:26.018656 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 8 19:58:26.019015 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Oct 8 19:58:26.026452 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 19:58:26.026745 systemd[1]: cri-containerd-301ea633b40e733b0c1617cc292e8e7c72b426ae9ead9c68765dd58e76876cba.scope: Deactivated successfully. Oct 8 19:58:26.051549 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 19:58:26.056837 containerd[1460]: time="2024-10-08T19:58:26.056746122Z" level=info msg="shim disconnected" id=301ea633b40e733b0c1617cc292e8e7c72b426ae9ead9c68765dd58e76876cba namespace=k8s.io Oct 8 19:58:26.056837 containerd[1460]: time="2024-10-08T19:58:26.056830861Z" level=warning msg="cleaning up after shim disconnected" id=301ea633b40e733b0c1617cc292e8e7c72b426ae9ead9c68765dd58e76876cba namespace=k8s.io Oct 8 19:58:26.056837 containerd[1460]: time="2024-10-08T19:58:26.056844085Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:58:26.527811 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-301ea633b40e733b0c1617cc292e8e7c72b426ae9ead9c68765dd58e76876cba-rootfs.mount: Deactivated successfully. 
Oct 8 19:58:26.902657 kubelet[2622]: E1008 19:58:26.902623 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:26.904622 containerd[1460]: time="2024-10-08T19:58:26.904555127Z" level=info msg="CreateContainer within sandbox \"4f6a1565c093b4ff8830a86aec8d84d95a44229daa79f5c755b067244d96854a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 8 19:58:26.927601 containerd[1460]: time="2024-10-08T19:58:26.927543264Z" level=info msg="CreateContainer within sandbox \"4f6a1565c093b4ff8830a86aec8d84d95a44229daa79f5c755b067244d96854a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"27ead402208f5d1a45d04a22e9402a9b37d0c8f5b6b94ef4c5371c4f2d2f2978\"" Oct 8 19:58:26.928117 containerd[1460]: time="2024-10-08T19:58:26.928086503Z" level=info msg="StartContainer for \"27ead402208f5d1a45d04a22e9402a9b37d0c8f5b6b94ef4c5371c4f2d2f2978\"" Oct 8 19:58:26.964318 systemd[1]: Started cri-containerd-27ead402208f5d1a45d04a22e9402a9b37d0c8f5b6b94ef4c5371c4f2d2f2978.scope - libcontainer container 27ead402208f5d1a45d04a22e9402a9b37d0c8f5b6b94ef4c5371c4f2d2f2978. Oct 8 19:58:26.997819 systemd[1]: cri-containerd-27ead402208f5d1a45d04a22e9402a9b37d0c8f5b6b94ef4c5371c4f2d2f2978.scope: Deactivated successfully. 
Oct 8 19:58:27.025283 containerd[1460]: time="2024-10-08T19:58:27.025236025Z" level=info msg="StartContainer for \"27ead402208f5d1a45d04a22e9402a9b37d0c8f5b6b94ef4c5371c4f2d2f2978\" returns successfully" Oct 8 19:58:27.052189 containerd[1460]: time="2024-10-08T19:58:27.052110323Z" level=info msg="shim disconnected" id=27ead402208f5d1a45d04a22e9402a9b37d0c8f5b6b94ef4c5371c4f2d2f2978 namespace=k8s.io Oct 8 19:58:27.052189 containerd[1460]: time="2024-10-08T19:58:27.052186216Z" level=warning msg="cleaning up after shim disconnected" id=27ead402208f5d1a45d04a22e9402a9b37d0c8f5b6b94ef4c5371c4f2d2f2978 namespace=k8s.io Oct 8 19:58:27.052189 containerd[1460]: time="2024-10-08T19:58:27.052197016Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:58:27.527138 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27ead402208f5d1a45d04a22e9402a9b37d0c8f5b6b94ef4c5371c4f2d2f2978-rootfs.mount: Deactivated successfully. Oct 8 19:58:27.907098 kubelet[2622]: E1008 19:58:27.906904 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:27.909164 containerd[1460]: time="2024-10-08T19:58:27.909112079Z" level=info msg="CreateContainer within sandbox \"4f6a1565c093b4ff8830a86aec8d84d95a44229daa79f5c755b067244d96854a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 8 19:58:28.320234 containerd[1460]: time="2024-10-08T19:58:28.320164397Z" level=info msg="CreateContainer within sandbox \"4f6a1565c093b4ff8830a86aec8d84d95a44229daa79f5c755b067244d96854a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a94809f2f6b9a357a426d332a72610b7dab857d64171b90504c71fa7d5c22d11\"" Oct 8 19:58:28.320843 containerd[1460]: time="2024-10-08T19:58:28.320813185Z" level=info msg="StartContainer for \"a94809f2f6b9a357a426d332a72610b7dab857d64171b90504c71fa7d5c22d11\"" Oct 8 19:58:28.354334 
systemd[1]: Started cri-containerd-a94809f2f6b9a357a426d332a72610b7dab857d64171b90504c71fa7d5c22d11.scope - libcontainer container a94809f2f6b9a357a426d332a72610b7dab857d64171b90504c71fa7d5c22d11. Oct 8 19:58:28.382827 systemd[1]: cri-containerd-a94809f2f6b9a357a426d332a72610b7dab857d64171b90504c71fa7d5c22d11.scope: Deactivated successfully. Oct 8 19:58:28.386694 containerd[1460]: time="2024-10-08T19:58:28.386647650Z" level=info msg="StartContainer for \"a94809f2f6b9a357a426d332a72610b7dab857d64171b90504c71fa7d5c22d11\" returns successfully" Oct 8 19:58:28.415714 containerd[1460]: time="2024-10-08T19:58:28.415629942Z" level=info msg="shim disconnected" id=a94809f2f6b9a357a426d332a72610b7dab857d64171b90504c71fa7d5c22d11 namespace=k8s.io Oct 8 19:58:28.415714 containerd[1460]: time="2024-10-08T19:58:28.415705754Z" level=warning msg="cleaning up after shim disconnected" id=a94809f2f6b9a357a426d332a72610b7dab857d64171b90504c71fa7d5c22d11 namespace=k8s.io Oct 8 19:58:28.415714 containerd[1460]: time="2024-10-08T19:58:28.415720632Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:58:28.527296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a94809f2f6b9a357a426d332a72610b7dab857d64171b90504c71fa7d5c22d11-rootfs.mount: Deactivated successfully. Oct 8 19:58:28.910467 kubelet[2622]: E1008 19:58:28.910435 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:28.913423 containerd[1460]: time="2024-10-08T19:58:28.913238017Z" level=info msg="CreateContainer within sandbox \"4f6a1565c093b4ff8830a86aec8d84d95a44229daa79f5c755b067244d96854a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 8 19:58:29.351331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2087863774.mount: Deactivated successfully. 
Oct 8 19:58:29.390846 containerd[1460]: time="2024-10-08T19:58:29.390695550Z" level=info msg="CreateContainer within sandbox \"4f6a1565c093b4ff8830a86aec8d84d95a44229daa79f5c755b067244d96854a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"571bf4655abf813e486465818d602045642da4fa746481eb19b77f02970b1e63\"" Oct 8 19:58:29.391607 containerd[1460]: time="2024-10-08T19:58:29.391547158Z" level=info msg="StartContainer for \"571bf4655abf813e486465818d602045642da4fa746481eb19b77f02970b1e63\"" Oct 8 19:58:29.424423 systemd[1]: Started cri-containerd-571bf4655abf813e486465818d602045642da4fa746481eb19b77f02970b1e63.scope - libcontainer container 571bf4655abf813e486465818d602045642da4fa746481eb19b77f02970b1e63. Oct 8 19:58:29.554080 containerd[1460]: time="2024-10-08T19:58:29.553991200Z" level=info msg="StartContainer for \"571bf4655abf813e486465818d602045642da4fa746481eb19b77f02970b1e63\" returns successfully" Oct 8 19:58:29.674866 kubelet[2622]: I1008 19:58:29.674744 2622 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 8 19:58:29.717343 systemd[1]: Started sshd@9-10.0.0.67:22-10.0.0.1:52048.service - OpenSSH per-connection server daemon (10.0.0.1:52048). Oct 8 19:58:29.789133 sshd[3418]: Accepted publickey for core from 10.0.0.1 port 52048 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:58:29.790738 sshd[3418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:58:29.797290 systemd-logind[1446]: New session 10 of user core. Oct 8 19:58:29.803213 systemd[1]: Started session-10.scope - Session 10 of User core. 
Oct 8 19:58:29.918340 kubelet[2622]: E1008 19:58:29.918301 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:30.009057 sshd[3418]: pam_unix(sshd:session): session closed for user core Oct 8 19:58:30.013545 systemd[1]: sshd@9-10.0.0.67:22-10.0.0.1:52048.service: Deactivated successfully. Oct 8 19:58:30.016179 systemd[1]: session-10.scope: Deactivated successfully. Oct 8 19:58:30.016980 systemd-logind[1446]: Session 10 logged out. Waiting for processes to exit. Oct 8 19:58:30.018958 kubelet[2622]: I1008 19:58:30.018893 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4dtzb" podStartSLOduration=8.014304272 podStartE2EDuration="24.018868546s" podCreationTimestamp="2024-10-08 19:58:06 +0000 UTC" firstStartedPulling="2024-10-08 19:58:08.506360755 +0000 UTC m=+16.850672980" lastFinishedPulling="2024-10-08 19:58:24.510925029 +0000 UTC m=+32.855237254" observedRunningTime="2024-10-08 19:58:30.003502179 +0000 UTC m=+38.347814414" watchObservedRunningTime="2024-10-08 19:58:30.018868546 +0000 UTC m=+38.363180781" Oct 8 19:58:30.019238 systemd-logind[1446]: Removed session 10. Oct 8 19:58:30.019985 kubelet[2622]: I1008 19:58:30.019526 2622 topology_manager.go:215] "Topology Admit Handler" podUID="11140882-5f82-483c-885f-3f12ef478213" podNamespace="kube-system" podName="coredns-7db6d8ff4d-cgbc4" Oct 8 19:58:30.032168 systemd[1]: Created slice kubepods-burstable-pod11140882_5f82_483c_885f_3f12ef478213.slice - libcontainer container kubepods-burstable-pod11140882_5f82_483c_885f_3f12ef478213.slice. 
Oct 8 19:58:30.033480 kubelet[2622]: I1008 19:58:30.033439 2622 topology_manager.go:215] "Topology Admit Handler" podUID="e88cdf17-12cc-408b-b6c1-eb80202293d0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kjz82" Oct 8 19:58:30.041575 systemd[1]: Created slice kubepods-burstable-pode88cdf17_12cc_408b_b6c1_eb80202293d0.slice - libcontainer container kubepods-burstable-pode88cdf17_12cc_408b_b6c1_eb80202293d0.slice. Oct 8 19:58:30.179161 kubelet[2622]: I1008 19:58:30.179092 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e88cdf17-12cc-408b-b6c1-eb80202293d0-config-volume\") pod \"coredns-7db6d8ff4d-kjz82\" (UID: \"e88cdf17-12cc-408b-b6c1-eb80202293d0\") " pod="kube-system/coredns-7db6d8ff4d-kjz82" Oct 8 19:58:30.179161 kubelet[2622]: I1008 19:58:30.179154 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11140882-5f82-483c-885f-3f12ef478213-config-volume\") pod \"coredns-7db6d8ff4d-cgbc4\" (UID: \"11140882-5f82-483c-885f-3f12ef478213\") " pod="kube-system/coredns-7db6d8ff4d-cgbc4" Oct 8 19:58:30.179368 kubelet[2622]: I1008 19:58:30.179182 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn2ph\" (UniqueName: \"kubernetes.io/projected/e88cdf17-12cc-408b-b6c1-eb80202293d0-kube-api-access-fn2ph\") pod \"coredns-7db6d8ff4d-kjz82\" (UID: \"e88cdf17-12cc-408b-b6c1-eb80202293d0\") " pod="kube-system/coredns-7db6d8ff4d-kjz82" Oct 8 19:58:30.179368 kubelet[2622]: I1008 19:58:30.179213 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4j5z\" (UniqueName: \"kubernetes.io/projected/11140882-5f82-483c-885f-3f12ef478213-kube-api-access-g4j5z\") pod \"coredns-7db6d8ff4d-cgbc4\" (UID: \"11140882-5f82-483c-885f-3f12ef478213\") " 
pod="kube-system/coredns-7db6d8ff4d-cgbc4" Oct 8 19:58:30.337961 kubelet[2622]: E1008 19:58:30.337899 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:30.338890 containerd[1460]: time="2024-10-08T19:58:30.338838995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cgbc4,Uid:11140882-5f82-483c-885f-3f12ef478213,Namespace:kube-system,Attempt:0,}" Oct 8 19:58:30.346429 kubelet[2622]: E1008 19:58:30.346377 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:30.347032 containerd[1460]: time="2024-10-08T19:58:30.346971025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kjz82,Uid:e88cdf17-12cc-408b-b6c1-eb80202293d0,Namespace:kube-system,Attempt:0,}" Oct 8 19:58:30.921514 kubelet[2622]: E1008 19:58:30.921469 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:31.725088 systemd-networkd[1387]: cilium_host: Link UP Oct 8 19:58:31.725312 systemd-networkd[1387]: cilium_net: Link UP Oct 8 19:58:31.725318 systemd-networkd[1387]: cilium_net: Gained carrier Oct 8 19:58:31.725629 systemd-networkd[1387]: cilium_host: Gained carrier Oct 8 19:58:31.725884 systemd-networkd[1387]: cilium_host: Gained IPv6LL Oct 8 19:58:31.829489 systemd-networkd[1387]: cilium_vxlan: Link UP Oct 8 19:58:31.829498 systemd-networkd[1387]: cilium_vxlan: Gained carrier Oct 8 19:58:31.922566 kubelet[2622]: E1008 19:58:31.922516 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:31.976196 
systemd-networkd[1387]: cilium_net: Gained IPv6LL Oct 8 19:58:32.064088 kernel: NET: Registered PF_ALG protocol family Oct 8 19:58:32.771726 systemd-networkd[1387]: lxc_health: Link UP Oct 8 19:58:32.780942 systemd-networkd[1387]: lxc_health: Gained carrier Oct 8 19:58:33.023212 systemd-networkd[1387]: lxca76514ef1f5c: Link UP Oct 8 19:58:33.039247 kernel: eth0: renamed from tmp18f32 Oct 8 19:58:33.039962 systemd-networkd[1387]: lxcd6cd95c5706d: Link UP Oct 8 19:58:33.054218 systemd-networkd[1387]: lxca76514ef1f5c: Gained carrier Oct 8 19:58:33.056086 kernel: eth0: renamed from tmp3106a Oct 8 19:58:33.061304 systemd-networkd[1387]: lxcd6cd95c5706d: Gained carrier Oct 8 19:58:33.432269 systemd-networkd[1387]: cilium_vxlan: Gained IPv6LL Oct 8 19:58:33.907752 kubelet[2622]: E1008 19:58:33.907651 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:33.926182 kubelet[2622]: E1008 19:58:33.925868 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:33.946321 systemd-networkd[1387]: lxc_health: Gained IPv6LL Oct 8 19:58:34.521191 systemd-networkd[1387]: lxca76514ef1f5c: Gained IPv6LL Oct 8 19:58:34.905241 systemd-networkd[1387]: lxcd6cd95c5706d: Gained IPv6LL Oct 8 19:58:34.927730 kubelet[2622]: E1008 19:58:34.927689 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:35.036678 systemd[1]: Started sshd@10-10.0.0.67:22-10.0.0.1:56794.service - OpenSSH per-connection server daemon (10.0.0.1:56794). 
Oct 8 19:58:35.073684 sshd[3873]: Accepted publickey for core from 10.0.0.1 port 56794 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:58:35.075983 sshd[3873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:58:35.080475 systemd-logind[1446]: New session 11 of user core. Oct 8 19:58:35.091376 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 8 19:58:35.202562 sshd[3873]: pam_unix(sshd:session): session closed for user core Oct 8 19:58:35.207251 systemd[1]: sshd@10-10.0.0.67:22-10.0.0.1:56794.service: Deactivated successfully. Oct 8 19:58:35.209945 systemd[1]: session-11.scope: Deactivated successfully. Oct 8 19:58:35.210623 systemd-logind[1446]: Session 11 logged out. Waiting for processes to exit. Oct 8 19:58:35.211517 systemd-logind[1446]: Removed session 11. Oct 8 19:58:37.104080 containerd[1460]: time="2024-10-08T19:58:37.103917672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:58:37.104080 containerd[1460]: time="2024-10-08T19:58:37.104008017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:58:37.104080 containerd[1460]: time="2024-10-08T19:58:37.104024950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:37.104706 containerd[1460]: time="2024-10-08T19:58:37.104142438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:37.119040 systemd[1]: run-containerd-runc-k8s.io-18f320910cd308304f7e0c4e8b2385d0cc3a08603d190248c60c31607c4e4b78-runc.RnbVai.mount: Deactivated successfully. 
Oct 8 19:58:37.129179 systemd[1]: Started cri-containerd-18f320910cd308304f7e0c4e8b2385d0cc3a08603d190248c60c31607c4e4b78.scope - libcontainer container 18f320910cd308304f7e0c4e8b2385d0cc3a08603d190248c60c31607c4e4b78. Oct 8 19:58:37.140531 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:58:37.169420 containerd[1460]: time="2024-10-08T19:58:37.169375662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cgbc4,Uid:11140882-5f82-483c-885f-3f12ef478213,Namespace:kube-system,Attempt:0,} returns sandbox id \"18f320910cd308304f7e0c4e8b2385d0cc3a08603d190248c60c31607c4e4b78\"" Oct 8 19:58:37.170474 kubelet[2622]: E1008 19:58:37.170448 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:37.173094 containerd[1460]: time="2024-10-08T19:58:37.173064814Z" level=info msg="CreateContainer within sandbox \"18f320910cd308304f7e0c4e8b2385d0cc3a08603d190248c60c31607c4e4b78\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:58:37.218810 containerd[1460]: time="2024-10-08T19:58:37.218698668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:58:37.218810 containerd[1460]: time="2024-10-08T19:58:37.218766008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:58:37.218810 containerd[1460]: time="2024-10-08T19:58:37.218781588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:37.219104 containerd[1460]: time="2024-10-08T19:58:37.218885871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:58:37.246192 systemd[1]: Started cri-containerd-3106a1172cdd6083a4205cec2e4a1a62737e7c71d4b0d8bf1d730ebf1d7e5a3b.scope - libcontainer container 3106a1172cdd6083a4205cec2e4a1a62737e7c71d4b0d8bf1d730ebf1d7e5a3b. Oct 8 19:58:37.257570 systemd-resolved[1340]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:58:37.282445 containerd[1460]: time="2024-10-08T19:58:37.282404090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kjz82,Uid:e88cdf17-12cc-408b-b6c1-eb80202293d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"3106a1172cdd6083a4205cec2e4a1a62737e7c71d4b0d8bf1d730ebf1d7e5a3b\"" Oct 8 19:58:37.283621 kubelet[2622]: E1008 19:58:37.283344 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:37.286215 containerd[1460]: time="2024-10-08T19:58:37.286163238Z" level=info msg="CreateContainer within sandbox \"3106a1172cdd6083a4205cec2e4a1a62737e7c71d4b0d8bf1d730ebf1d7e5a3b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:58:37.729411 containerd[1460]: time="2024-10-08T19:58:37.729346115Z" level=info msg="CreateContainer within sandbox \"3106a1172cdd6083a4205cec2e4a1a62737e7c71d4b0d8bf1d730ebf1d7e5a3b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a1dbfb71ba9cf009f822f8afc253a713dc93e9abd4d6372a0eac85c244093117\"" Oct 8 19:58:37.729938 containerd[1460]: time="2024-10-08T19:58:37.729913585Z" level=info msg="StartContainer for \"a1dbfb71ba9cf009f822f8afc253a713dc93e9abd4d6372a0eac85c244093117\"" Oct 8 19:58:37.731713 containerd[1460]: time="2024-10-08T19:58:37.731642397Z" level=info msg="CreateContainer within sandbox \"18f320910cd308304f7e0c4e8b2385d0cc3a08603d190248c60c31607c4e4b78\" for &ContainerMetadata{Name:coredns,Attempt:0,} 
returns container id \"b522d010c247411b7f0dffc9f70ffdc8940ade942878e06e9141b29a0778b2ae\"" Oct 8 19:58:37.732293 containerd[1460]: time="2024-10-08T19:58:37.732257059Z" level=info msg="StartContainer for \"b522d010c247411b7f0dffc9f70ffdc8940ade942878e06e9141b29a0778b2ae\"" Oct 8 19:58:37.757324 systemd[1]: Started cri-containerd-a1dbfb71ba9cf009f822f8afc253a713dc93e9abd4d6372a0eac85c244093117.scope - libcontainer container a1dbfb71ba9cf009f822f8afc253a713dc93e9abd4d6372a0eac85c244093117. Oct 8 19:58:37.760820 systemd[1]: Started cri-containerd-b522d010c247411b7f0dffc9f70ffdc8940ade942878e06e9141b29a0778b2ae.scope - libcontainer container b522d010c247411b7f0dffc9f70ffdc8940ade942878e06e9141b29a0778b2ae. Oct 8 19:58:37.796290 containerd[1460]: time="2024-10-08T19:58:37.796232484Z" level=info msg="StartContainer for \"a1dbfb71ba9cf009f822f8afc253a713dc93e9abd4d6372a0eac85c244093117\" returns successfully" Oct 8 19:58:37.796449 containerd[1460]: time="2024-10-08T19:58:37.796254927Z" level=info msg="StartContainer for \"b522d010c247411b7f0dffc9f70ffdc8940ade942878e06e9141b29a0778b2ae\" returns successfully" Oct 8 19:58:37.938322 kubelet[2622]: E1008 19:58:37.938274 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:37.939138 kubelet[2622]: E1008 19:58:37.938810 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:37.949524 kubelet[2622]: I1008 19:58:37.949449 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-cgbc4" podStartSLOduration=31.949435632 podStartE2EDuration="31.949435632s" podCreationTimestamp="2024-10-08 19:58:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2024-10-08 19:58:37.949157443 +0000 UTC m=+46.293469668" watchObservedRunningTime="2024-10-08 19:58:37.949435632 +0000 UTC m=+46.293747858" Oct 8 19:58:37.975821 kubelet[2622]: I1008 19:58:37.975752 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-kjz82" podStartSLOduration=31.975728018 podStartE2EDuration="31.975728018s" podCreationTimestamp="2024-10-08 19:58:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:58:37.961740229 +0000 UTC m=+46.306052474" watchObservedRunningTime="2024-10-08 19:58:37.975728018 +0000 UTC m=+46.320040243" Oct 8 19:58:38.940465 kubelet[2622]: E1008 19:58:38.940426 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:38.940925 kubelet[2622]: E1008 19:58:38.940604 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:39.941626 kubelet[2622]: E1008 19:58:39.941589 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:39.941626 kubelet[2622]: E1008 19:58:39.941602 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:58:40.215656 systemd[1]: Started sshd@11-10.0.0.67:22-10.0.0.1:56800.service - OpenSSH per-connection server daemon (10.0.0.1:56800). 
Oct 8 19:58:40.258085 sshd[4063]: Accepted publickey for core from 10.0.0.1 port 56800 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:58:40.260252 sshd[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:58:40.264865 systemd-logind[1446]: New session 12 of user core. Oct 8 19:58:40.274271 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 8 19:58:40.411475 sshd[4063]: pam_unix(sshd:session): session closed for user core Oct 8 19:58:40.415547 systemd[1]: sshd@11-10.0.0.67:22-10.0.0.1:56800.service: Deactivated successfully. Oct 8 19:58:40.417782 systemd[1]: session-12.scope: Deactivated successfully. Oct 8 19:58:40.418486 systemd-logind[1446]: Session 12 logged out. Waiting for processes to exit. Oct 8 19:58:40.419676 systemd-logind[1446]: Removed session 12. Oct 8 19:58:45.423946 systemd[1]: Started sshd@12-10.0.0.67:22-10.0.0.1:45498.service - OpenSSH per-connection server daemon (10.0.0.1:45498). Oct 8 19:58:45.462715 sshd[4081]: Accepted publickey for core from 10.0.0.1 port 45498 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:58:45.464534 sshd[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:58:45.469305 systemd-logind[1446]: New session 13 of user core. Oct 8 19:58:45.475254 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 8 19:58:45.594071 sshd[4081]: pam_unix(sshd:session): session closed for user core Oct 8 19:58:45.604788 systemd[1]: sshd@12-10.0.0.67:22-10.0.0.1:45498.service: Deactivated successfully. Oct 8 19:58:45.606836 systemd[1]: session-13.scope: Deactivated successfully. Oct 8 19:58:45.608383 systemd-logind[1446]: Session 13 logged out. Waiting for processes to exit. Oct 8 19:58:45.613539 systemd[1]: Started sshd@13-10.0.0.67:22-10.0.0.1:45504.service - OpenSSH per-connection server daemon (10.0.0.1:45504). Oct 8 19:58:45.614641 systemd-logind[1446]: Removed session 13. 
Oct 8 19:58:45.652869 sshd[4096]: Accepted publickey for core from 10.0.0.1 port 45504 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:58:45.654465 sshd[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:58:45.658767 systemd-logind[1446]: New session 14 of user core.
Oct 8 19:58:45.663175 systemd[1]: Started session-14.scope - Session 14 of User core.
Oct 8 19:58:45.843032 sshd[4096]: pam_unix(sshd:session): session closed for user core
Oct 8 19:58:45.852240 systemd[1]: sshd@13-10.0.0.67:22-10.0.0.1:45504.service: Deactivated successfully.
Oct 8 19:58:45.856233 systemd[1]: session-14.scope: Deactivated successfully.
Oct 8 19:58:45.858197 systemd-logind[1446]: Session 14 logged out. Waiting for processes to exit.
Oct 8 19:58:45.867951 systemd[1]: Started sshd@14-10.0.0.67:22-10.0.0.1:45506.service - OpenSSH per-connection server daemon (10.0.0.1:45506).
Oct 8 19:58:45.869221 systemd-logind[1446]: Removed session 14.
Oct 8 19:58:45.902886 sshd[4108]: Accepted publickey for core from 10.0.0.1 port 45506 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:58:45.904331 sshd[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:58:45.908114 systemd-logind[1446]: New session 15 of user core.
Oct 8 19:58:45.920179 systemd[1]: Started session-15.scope - Session 15 of User core.
Oct 8 19:58:46.037405 sshd[4108]: pam_unix(sshd:session): session closed for user core
Oct 8 19:58:46.042543 systemd[1]: sshd@14-10.0.0.67:22-10.0.0.1:45506.service: Deactivated successfully.
Oct 8 19:58:46.044968 systemd[1]: session-15.scope: Deactivated successfully.
Oct 8 19:58:46.045862 systemd-logind[1446]: Session 15 logged out. Waiting for processes to exit.
Oct 8 19:58:46.046869 systemd-logind[1446]: Removed session 15.
Oct 8 19:58:51.049685 systemd[1]: Started sshd@15-10.0.0.67:22-10.0.0.1:48634.service - OpenSSH per-connection server daemon (10.0.0.1:48634).
Oct 8 19:58:51.091422 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 48634 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:58:51.093358 sshd[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:58:51.098212 systemd-logind[1446]: New session 16 of user core.
Oct 8 19:58:51.107279 systemd[1]: Started session-16.scope - Session 16 of User core.
Oct 8 19:58:51.222998 sshd[4123]: pam_unix(sshd:session): session closed for user core
Oct 8 19:58:51.227552 systemd[1]: sshd@15-10.0.0.67:22-10.0.0.1:48634.service: Deactivated successfully.
Oct 8 19:58:51.229548 systemd[1]: session-16.scope: Deactivated successfully.
Oct 8 19:58:51.230352 systemd-logind[1446]: Session 16 logged out. Waiting for processes to exit.
Oct 8 19:58:51.231386 systemd-logind[1446]: Removed session 16.
Oct 8 19:58:56.234935 systemd[1]: Started sshd@16-10.0.0.67:22-10.0.0.1:48638.service - OpenSSH per-connection server daemon (10.0.0.1:48638).
Oct 8 19:58:56.272076 sshd[4139]: Accepted publickey for core from 10.0.0.1 port 48638 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:58:56.273748 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:58:56.277761 systemd-logind[1446]: New session 17 of user core.
Oct 8 19:58:56.290245 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 8 19:58:56.403646 sshd[4139]: pam_unix(sshd:session): session closed for user core
Oct 8 19:58:56.414164 systemd[1]: sshd@16-10.0.0.67:22-10.0.0.1:48638.service: Deactivated successfully.
Oct 8 19:58:56.415901 systemd[1]: session-17.scope: Deactivated successfully.
Oct 8 19:58:56.417732 systemd-logind[1446]: Session 17 logged out. Waiting for processes to exit.
Oct 8 19:58:56.424327 systemd[1]: Started sshd@17-10.0.0.67:22-10.0.0.1:48648.service - OpenSSH per-connection server daemon (10.0.0.1:48648).
Oct 8 19:58:56.425262 systemd-logind[1446]: Removed session 17.
Oct 8 19:58:56.464623 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 48648 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:58:56.466457 sshd[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:58:56.470606 systemd-logind[1446]: New session 18 of user core.
Oct 8 19:58:56.481179 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 8 19:58:56.893372 sshd[4153]: pam_unix(sshd:session): session closed for user core
Oct 8 19:58:56.907323 systemd[1]: sshd@17-10.0.0.67:22-10.0.0.1:48648.service: Deactivated successfully.
Oct 8 19:58:56.909376 systemd[1]: session-18.scope: Deactivated successfully.
Oct 8 19:58:56.911520 systemd-logind[1446]: Session 18 logged out. Waiting for processes to exit.
Oct 8 19:58:56.918423 systemd[1]: Started sshd@18-10.0.0.67:22-10.0.0.1:48662.service - OpenSSH per-connection server daemon (10.0.0.1:48662).
Oct 8 19:58:56.919492 systemd-logind[1446]: Removed session 18.
Oct 8 19:58:56.959555 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 48662 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:58:56.961761 sshd[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:58:56.966230 systemd-logind[1446]: New session 19 of user core.
Oct 8 19:58:56.975178 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 8 19:58:58.539544 sshd[4165]: pam_unix(sshd:session): session closed for user core
Oct 8 19:58:58.548090 systemd[1]: sshd@18-10.0.0.67:22-10.0.0.1:48662.service: Deactivated successfully.
Oct 8 19:58:58.552001 systemd[1]: session-19.scope: Deactivated successfully.
Oct 8 19:58:58.556830 systemd-logind[1446]: Session 19 logged out. Waiting for processes to exit.
Oct 8 19:58:58.571501 systemd[1]: Started sshd@19-10.0.0.67:22-10.0.0.1:48666.service - OpenSSH per-connection server daemon (10.0.0.1:48666).
Oct 8 19:58:58.581377 systemd-logind[1446]: Removed session 19.
Oct 8 19:58:58.619033 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 48666 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:58:58.621551 sshd[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:58:58.626970 systemd-logind[1446]: New session 20 of user core.
Oct 8 19:58:58.643297 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 8 19:58:58.880118 sshd[4189]: pam_unix(sshd:session): session closed for user core
Oct 8 19:58:58.888337 systemd[1]: sshd@19-10.0.0.67:22-10.0.0.1:48666.service: Deactivated successfully.
Oct 8 19:58:58.890562 systemd[1]: session-20.scope: Deactivated successfully.
Oct 8 19:58:58.893502 systemd-logind[1446]: Session 20 logged out. Waiting for processes to exit.
Oct 8 19:58:58.901523 systemd[1]: Started sshd@20-10.0.0.67:22-10.0.0.1:48682.service - OpenSSH per-connection server daemon (10.0.0.1:48682).
Oct 8 19:58:58.903601 systemd-logind[1446]: Removed session 20.
Oct 8 19:58:58.936713 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 48682 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:58:58.938396 sshd[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:58:58.942828 systemd-logind[1446]: New session 21 of user core.
Oct 8 19:58:58.957203 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 8 19:58:59.084969 sshd[4201]: pam_unix(sshd:session): session closed for user core
Oct 8 19:58:59.089350 systemd[1]: sshd@20-10.0.0.67:22-10.0.0.1:48682.service: Deactivated successfully.
Oct 8 19:58:59.091988 systemd[1]: session-21.scope: Deactivated successfully.
Oct 8 19:58:59.092745 systemd-logind[1446]: Session 21 logged out. Waiting for processes to exit.
Oct 8 19:58:59.093745 systemd-logind[1446]: Removed session 21.
Oct 8 19:59:02.749908 kubelet[2622]: E1008 19:59:02.749857 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:59:04.096824 systemd[1]: Started sshd@21-10.0.0.67:22-10.0.0.1:37052.service - OpenSSH per-connection server daemon (10.0.0.1:37052).
Oct 8 19:59:04.134995 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 37052 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:59:04.136696 sshd[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:59:04.140471 systemd-logind[1446]: New session 22 of user core.
Oct 8 19:59:04.150187 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 8 19:59:04.275956 sshd[4216]: pam_unix(sshd:session): session closed for user core
Oct 8 19:59:04.281143 systemd[1]: sshd@21-10.0.0.67:22-10.0.0.1:37052.service: Deactivated successfully.
Oct 8 19:59:04.283850 systemd[1]: session-22.scope: Deactivated successfully.
Oct 8 19:59:04.284557 systemd-logind[1446]: Session 22 logged out. Waiting for processes to exit.
Oct 8 19:59:04.285758 systemd-logind[1446]: Removed session 22.
Oct 8 19:59:06.749894 kubelet[2622]: E1008 19:59:06.749829 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:59:08.749701 kubelet[2622]: E1008 19:59:08.749638 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:59:09.291330 systemd[1]: Started sshd@22-10.0.0.67:22-10.0.0.1:37060.service - OpenSSH per-connection server daemon (10.0.0.1:37060).
Oct 8 19:59:09.332624 sshd[4236]: Accepted publickey for core from 10.0.0.1 port 37060 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:59:09.334459 sshd[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:59:09.338625 systemd-logind[1446]: New session 23 of user core.
Oct 8 19:59:09.348226 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 8 19:59:09.468496 sshd[4236]: pam_unix(sshd:session): session closed for user core
Oct 8 19:59:09.473675 systemd[1]: sshd@22-10.0.0.67:22-10.0.0.1:37060.service: Deactivated successfully.
Oct 8 19:59:09.475996 systemd[1]: session-23.scope: Deactivated successfully.
Oct 8 19:59:09.476655 systemd-logind[1446]: Session 23 logged out. Waiting for processes to exit.
Oct 8 19:59:09.477546 systemd-logind[1446]: Removed session 23.
Oct 8 19:59:14.489885 systemd[1]: Started sshd@23-10.0.0.67:22-10.0.0.1:37834.service - OpenSSH per-connection server daemon (10.0.0.1:37834).
Oct 8 19:59:14.531893 sshd[4250]: Accepted publickey for core from 10.0.0.1 port 37834 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:59:14.533972 sshd[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:59:14.538400 systemd-logind[1446]: New session 24 of user core.
Oct 8 19:59:14.548195 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 8 19:59:14.657512 sshd[4250]: pam_unix(sshd:session): session closed for user core
Oct 8 19:59:14.661542 systemd[1]: sshd@23-10.0.0.67:22-10.0.0.1:37834.service: Deactivated successfully.
Oct 8 19:59:14.663437 systemd[1]: session-24.scope: Deactivated successfully.
Oct 8 19:59:14.664140 systemd-logind[1446]: Session 24 logged out. Waiting for processes to exit.
Oct 8 19:59:14.665105 systemd-logind[1446]: Removed session 24.
Oct 8 19:59:17.750434 kubelet[2622]: E1008 19:59:17.750371 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:59:19.669243 systemd[1]: Started sshd@24-10.0.0.67:22-10.0.0.1:37842.service - OpenSSH per-connection server daemon (10.0.0.1:37842).
Oct 8 19:59:19.710798 sshd[4264]: Accepted publickey for core from 10.0.0.1 port 37842 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:59:19.712558 sshd[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:59:19.716671 systemd-logind[1446]: New session 25 of user core.
Oct 8 19:59:19.731319 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 8 19:59:19.750092 kubelet[2622]: E1008 19:59:19.750024 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:59:19.837727 sshd[4264]: pam_unix(sshd:session): session closed for user core
Oct 8 19:59:19.852396 systemd[1]: sshd@24-10.0.0.67:22-10.0.0.1:37842.service: Deactivated successfully.
Oct 8 19:59:19.854134 systemd[1]: session-25.scope: Deactivated successfully.
Oct 8 19:59:19.855901 systemd-logind[1446]: Session 25 logged out. Waiting for processes to exit.
Oct 8 19:59:19.867677 systemd[1]: Started sshd@25-10.0.0.67:22-10.0.0.1:37850.service - OpenSSH per-connection server daemon (10.0.0.1:37850).
Oct 8 19:59:19.868824 systemd-logind[1446]: Removed session 25.
Oct 8 19:59:19.901508 sshd[4279]: Accepted publickey for core from 10.0.0.1 port 37850 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w
Oct 8 19:59:19.903414 sshd[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:59:19.908469 systemd-logind[1446]: New session 26 of user core.
Oct 8 19:59:19.919338 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 8 19:59:21.353935 containerd[1460]: time="2024-10-08T19:59:21.353849066Z" level=info msg="StopContainer for \"eb2b954e66cf0c6d4b0d1bc0f0a820756a12b36814dd66ca30da0ba62a8261dd\" with timeout 30 (s)"
Oct 8 19:59:21.355135 containerd[1460]: time="2024-10-08T19:59:21.355086012Z" level=info msg="Stop container \"eb2b954e66cf0c6d4b0d1bc0f0a820756a12b36814dd66ca30da0ba62a8261dd\" with signal terminated"
Oct 8 19:59:21.397948 systemd[1]: run-containerd-runc-k8s.io-571bf4655abf813e486465818d602045642da4fa746481eb19b77f02970b1e63-runc.xD1d8n.mount: Deactivated successfully.
Oct 8 19:59:21.401241 systemd[1]: cri-containerd-eb2b954e66cf0c6d4b0d1bc0f0a820756a12b36814dd66ca30da0ba62a8261dd.scope: Deactivated successfully.
Oct 8 19:59:21.420661 containerd[1460]: time="2024-10-08T19:59:21.420598667Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 8 19:59:21.426357 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb2b954e66cf0c6d4b0d1bc0f0a820756a12b36814dd66ca30da0ba62a8261dd-rootfs.mount: Deactivated successfully.
Oct 8 19:59:21.431257 containerd[1460]: time="2024-10-08T19:59:21.431211224Z" level=info msg="StopContainer for \"571bf4655abf813e486465818d602045642da4fa746481eb19b77f02970b1e63\" with timeout 2 (s)"
Oct 8 19:59:21.431601 containerd[1460]: time="2024-10-08T19:59:21.431581216Z" level=info msg="Stop container \"571bf4655abf813e486465818d602045642da4fa746481eb19b77f02970b1e63\" with signal terminated"
Oct 8 19:59:21.433750 containerd[1460]: time="2024-10-08T19:59:21.433691016Z" level=info msg="shim disconnected" id=eb2b954e66cf0c6d4b0d1bc0f0a820756a12b36814dd66ca30da0ba62a8261dd namespace=k8s.io
Oct 8 19:59:21.433750 containerd[1460]: time="2024-10-08T19:59:21.433740861Z" level=warning msg="cleaning up after shim disconnected" id=eb2b954e66cf0c6d4b0d1bc0f0a820756a12b36814dd66ca30da0ba62a8261dd namespace=k8s.io
Oct 8 19:59:21.433750 containerd[1460]: time="2024-10-08T19:59:21.433752153Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:59:21.438700 systemd-networkd[1387]: lxc_health: Link DOWN
Oct 8 19:59:21.438724 systemd-networkd[1387]: lxc_health: Lost carrier
Oct 8 19:59:21.456764 containerd[1460]: time="2024-10-08T19:59:21.456695214Z" level=info msg="StopContainer for \"eb2b954e66cf0c6d4b0d1bc0f0a820756a12b36814dd66ca30da0ba62a8261dd\" returns successfully"
Oct 8 19:59:21.458274 containerd[1460]: time="2024-10-08T19:59:21.458232199Z" level=info msg="StopPodSandbox for \"82dc90374726927888fefc3d318d03cb14eee3c7405fc75fea114149b144dc45\""
Oct 8 19:59:21.463590 containerd[1460]: time="2024-10-08T19:59:21.463527972Z" level=info msg="Container to stop \"eb2b954e66cf0c6d4b0d1bc0f0a820756a12b36814dd66ca30da0ba62a8261dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:59:21.466315 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-82dc90374726927888fefc3d318d03cb14eee3c7405fc75fea114149b144dc45-shm.mount: Deactivated successfully.
Oct 8 19:59:21.468536 systemd[1]: cri-containerd-571bf4655abf813e486465818d602045642da4fa746481eb19b77f02970b1e63.scope: Deactivated successfully.
Oct 8 19:59:21.468853 systemd[1]: cri-containerd-571bf4655abf813e486465818d602045642da4fa746481eb19b77f02970b1e63.scope: Consumed 7.178s CPU time.
Oct 8 19:59:21.473981 systemd[1]: cri-containerd-82dc90374726927888fefc3d318d03cb14eee3c7405fc75fea114149b144dc45.scope: Deactivated successfully.
Oct 8 19:59:21.495797 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-571bf4655abf813e486465818d602045642da4fa746481eb19b77f02970b1e63-rootfs.mount: Deactivated successfully.
Oct 8 19:59:21.501477 containerd[1460]: time="2024-10-08T19:59:21.501377584Z" level=info msg="shim disconnected" id=571bf4655abf813e486465818d602045642da4fa746481eb19b77f02970b1e63 namespace=k8s.io
Oct 8 19:59:21.501477 containerd[1460]: time="2024-10-08T19:59:21.501459610Z" level=warning msg="cleaning up after shim disconnected" id=571bf4655abf813e486465818d602045642da4fa746481eb19b77f02970b1e63 namespace=k8s.io
Oct 8 19:59:21.501477 containerd[1460]: time="2024-10-08T19:59:21.501471592Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:59:21.507852 containerd[1460]: time="2024-10-08T19:59:21.507785367Z" level=info msg="shim disconnected" id=82dc90374726927888fefc3d318d03cb14eee3c7405fc75fea114149b144dc45 namespace=k8s.io
Oct 8 19:59:21.508324 containerd[1460]: time="2024-10-08T19:59:21.508147794Z" level=warning msg="cleaning up after shim disconnected" id=82dc90374726927888fefc3d318d03cb14eee3c7405fc75fea114149b144dc45 namespace=k8s.io
Oct 8 19:59:21.508324 containerd[1460]: time="2024-10-08T19:59:21.508163654Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:59:21.524911 containerd[1460]: time="2024-10-08T19:59:21.524837826Z" level=info msg="StopContainer for \"571bf4655abf813e486465818d602045642da4fa746481eb19b77f02970b1e63\" returns successfully"
Oct 8 19:59:21.525635 containerd[1460]: time="2024-10-08T19:59:21.525588089Z" level=info msg="StopPodSandbox for \"4f6a1565c093b4ff8830a86aec8d84d95a44229daa79f5c755b067244d96854a\""
Oct 8 19:59:21.525635 containerd[1460]: time="2024-10-08T19:59:21.525636772Z" level=info msg="Container to stop \"27ead402208f5d1a45d04a22e9402a9b37d0c8f5b6b94ef4c5371c4f2d2f2978\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:59:21.525803 containerd[1460]: time="2024-10-08T19:59:21.525653264Z" level=info msg="Container to stop \"571bf4655abf813e486465818d602045642da4fa746481eb19b77f02970b1e63\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:59:21.525803 containerd[1460]: time="2024-10-08T19:59:21.525665867Z" level=info msg="Container to stop \"a73e1f278dbb684054ae85fe6dae657702b6a685f1a89ed7d735079d0678ea00\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:59:21.525803 containerd[1460]: time="2024-10-08T19:59:21.525677719Z" level=info msg="Container to stop \"301ea633b40e733b0c1617cc292e8e7c72b426ae9ead9c68765dd58e76876cba\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:59:21.525803 containerd[1460]: time="2024-10-08T19:59:21.525687608Z" level=info msg="Container to stop \"a94809f2f6b9a357a426d332a72610b7dab857d64171b90504c71fa7d5c22d11\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 8 19:59:21.532842 systemd[1]: cri-containerd-4f6a1565c093b4ff8830a86aec8d84d95a44229daa79f5c755b067244d96854a.scope: Deactivated successfully.
Oct 8 19:59:21.533426 containerd[1460]: time="2024-10-08T19:59:21.533151885Z" level=info msg="TearDown network for sandbox \"82dc90374726927888fefc3d318d03cb14eee3c7405fc75fea114149b144dc45\" successfully"
Oct 8 19:59:21.533426 containerd[1460]: time="2024-10-08T19:59:21.533196088Z" level=info msg="StopPodSandbox for \"82dc90374726927888fefc3d318d03cb14eee3c7405fc75fea114149b144dc45\" returns successfully"
Oct 8 19:59:21.565093 containerd[1460]: time="2024-10-08T19:59:21.565002980Z" level=info msg="shim disconnected" id=4f6a1565c093b4ff8830a86aec8d84d95a44229daa79f5c755b067244d96854a namespace=k8s.io
Oct 8 19:59:21.565093 containerd[1460]: time="2024-10-08T19:59:21.565088061Z" level=warning msg="cleaning up after shim disconnected" id=4f6a1565c093b4ff8830a86aec8d84d95a44229daa79f5c755b067244d96854a namespace=k8s.io
Oct 8 19:59:21.565093 containerd[1460]: time="2024-10-08T19:59:21.565097479Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:59:21.576275 kubelet[2622]: I1008 19:59:21.576227 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a5f5798d-82dd-4643-9741-3d66121cb8b9-cilium-config-path\") pod \"a5f5798d-82dd-4643-9741-3d66121cb8b9\" (UID: \"a5f5798d-82dd-4643-9741-3d66121cb8b9\") "
Oct 8 19:59:21.578019 kubelet[2622]: I1008 19:59:21.576917 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8r6d\" (UniqueName: \"kubernetes.io/projected/a5f5798d-82dd-4643-9741-3d66121cb8b9-kube-api-access-v8r6d\") pod \"a5f5798d-82dd-4643-9741-3d66121cb8b9\" (UID: \"a5f5798d-82dd-4643-9741-3d66121cb8b9\") "
Oct 8 19:59:21.580594 kubelet[2622]: I1008 19:59:21.580541 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5f5798d-82dd-4643-9741-3d66121cb8b9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a5f5798d-82dd-4643-9741-3d66121cb8b9" (UID: "a5f5798d-82dd-4643-9741-3d66121cb8b9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 8 19:59:21.581558 kubelet[2622]: I1008 19:59:21.581519 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5f5798d-82dd-4643-9741-3d66121cb8b9-kube-api-access-v8r6d" (OuterVolumeSpecName: "kube-api-access-v8r6d") pod "a5f5798d-82dd-4643-9741-3d66121cb8b9" (UID: "a5f5798d-82dd-4643-9741-3d66121cb8b9"). InnerVolumeSpecName "kube-api-access-v8r6d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 8 19:59:21.582394 containerd[1460]: time="2024-10-08T19:59:21.582346603Z" level=info msg="TearDown network for sandbox \"4f6a1565c093b4ff8830a86aec8d84d95a44229daa79f5c755b067244d96854a\" successfully"
Oct 8 19:59:21.582394 containerd[1460]: time="2024-10-08T19:59:21.582384875Z" level=info msg="StopPodSandbox for \"4f6a1565c093b4ff8830a86aec8d84d95a44229daa79f5c755b067244d96854a\" returns successfully"
Oct 8 19:59:21.679233 kubelet[2622]: I1008 19:59:21.679074 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-hostproc\") pod \"69548cae-9431-4eb0-b839-3a9fd62b74de\" (UID: \"69548cae-9431-4eb0-b839-3a9fd62b74de\") "
Oct 8 19:59:21.679233 kubelet[2622]: I1008 19:59:21.679129 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-cilium-run\") pod \"69548cae-9431-4eb0-b839-3a9fd62b74de\" (UID: \"69548cae-9431-4eb0-b839-3a9fd62b74de\") "
Oct 8 19:59:21.679233 kubelet[2622]: I1008 19:59:21.679149 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-etc-cni-netd\") pod \"69548cae-9431-4eb0-b839-3a9fd62b74de\" (UID: \"69548cae-9431-4eb0-b839-3a9fd62b74de\") "
Oct 8 19:59:21.679233 kubelet[2622]: I1008 19:59:21.679178 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69548cae-9431-4eb0-b839-3a9fd62b74de-hubble-tls\") pod \"69548cae-9431-4eb0-b839-3a9fd62b74de\" (UID: \"69548cae-9431-4eb0-b839-3a9fd62b74de\") "
Oct 8 19:59:21.679233 kubelet[2622]: I1008 19:59:21.679199 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-lib-modules\") pod \"69548cae-9431-4eb0-b839-3a9fd62b74de\" (UID: \"69548cae-9431-4eb0-b839-3a9fd62b74de\") "
Oct 8 19:59:21.679233 kubelet[2622]: I1008 19:59:21.679201 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-hostproc" (OuterVolumeSpecName: "hostproc") pod "69548cae-9431-4eb0-b839-3a9fd62b74de" (UID: "69548cae-9431-4eb0-b839-3a9fd62b74de"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:59:21.679540 kubelet[2622]: I1008 19:59:21.679223 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64cds\" (UniqueName: \"kubernetes.io/projected/69548cae-9431-4eb0-b839-3a9fd62b74de-kube-api-access-64cds\") pod \"69548cae-9431-4eb0-b839-3a9fd62b74de\" (UID: \"69548cae-9431-4eb0-b839-3a9fd62b74de\") "
Oct 8 19:59:21.679540 kubelet[2622]: I1008 19:59:21.679252 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-cni-path\") pod \"69548cae-9431-4eb0-b839-3a9fd62b74de\" (UID: \"69548cae-9431-4eb0-b839-3a9fd62b74de\") "
Oct 8 19:59:21.679540 kubelet[2622]: I1008 19:59:21.679245 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "69548cae-9431-4eb0-b839-3a9fd62b74de" (UID: "69548cae-9431-4eb0-b839-3a9fd62b74de"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:59:21.679540 kubelet[2622]: I1008 19:59:21.679273 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69548cae-9431-4eb0-b839-3a9fd62b74de-cilium-config-path\") pod \"69548cae-9431-4eb0-b839-3a9fd62b74de\" (UID: \"69548cae-9431-4eb0-b839-3a9fd62b74de\") "
Oct 8 19:59:21.679540 kubelet[2622]: I1008 19:59:21.679356 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-host-proc-sys-net\") pod \"69548cae-9431-4eb0-b839-3a9fd62b74de\" (UID: \"69548cae-9431-4eb0-b839-3a9fd62b74de\") "
Oct 8 19:59:21.679540 kubelet[2622]: I1008 19:59:21.679386 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-cilium-cgroup\") pod \"69548cae-9431-4eb0-b839-3a9fd62b74de\" (UID: \"69548cae-9431-4eb0-b839-3a9fd62b74de\") "
Oct 8 19:59:21.679763 kubelet[2622]: I1008 19:59:21.679411 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-xtables-lock\") pod \"69548cae-9431-4eb0-b839-3a9fd62b74de\" (UID: \"69548cae-9431-4eb0-b839-3a9fd62b74de\") "
Oct 8 19:59:21.679763 kubelet[2622]: I1008 19:59:21.679444 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69548cae-9431-4eb0-b839-3a9fd62b74de-clustermesh-secrets\") pod \"69548cae-9431-4eb0-b839-3a9fd62b74de\" (UID: \"69548cae-9431-4eb0-b839-3a9fd62b74de\") "
Oct 8 19:59:21.679763 kubelet[2622]: I1008 19:59:21.679464 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-host-proc-sys-kernel\") pod \"69548cae-9431-4eb0-b839-3a9fd62b74de\" (UID: \"69548cae-9431-4eb0-b839-3a9fd62b74de\") "
Oct 8 19:59:21.679763 kubelet[2622]: I1008 19:59:21.679490 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-bpf-maps\") pod \"69548cae-9431-4eb0-b839-3a9fd62b74de\" (UID: \"69548cae-9431-4eb0-b839-3a9fd62b74de\") "
Oct 8 19:59:21.679763 kubelet[2622]: I1008 19:59:21.679542 2622 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a5f5798d-82dd-4643-9741-3d66121cb8b9-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Oct 8 19:59:21.679763 kubelet[2622]: I1008 19:59:21.679563 2622 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-hostproc\") on node \"localhost\" DevicePath \"\""
Oct 8 19:59:21.679763 kubelet[2622]: I1008 19:59:21.679580 2622 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-v8r6d\" (UniqueName: \"kubernetes.io/projected/a5f5798d-82dd-4643-9741-3d66121cb8b9-kube-api-access-v8r6d\") on node \"localhost\" DevicePath \"\""
Oct 8 19:59:21.680003 kubelet[2622]: I1008 19:59:21.679597 2622 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-cilium-run\") on node \"localhost\" DevicePath \"\""
Oct 8 19:59:21.680003 kubelet[2622]: I1008 19:59:21.679627 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "69548cae-9431-4eb0-b839-3a9fd62b74de" (UID: "69548cae-9431-4eb0-b839-3a9fd62b74de"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:59:21.680003 kubelet[2622]: I1008 19:59:21.679655 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "69548cae-9431-4eb0-b839-3a9fd62b74de" (UID: "69548cae-9431-4eb0-b839-3a9fd62b74de"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:59:21.680003 kubelet[2622]: I1008 19:59:21.679683 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "69548cae-9431-4eb0-b839-3a9fd62b74de" (UID: "69548cae-9431-4eb0-b839-3a9fd62b74de"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:59:21.680003 kubelet[2622]: I1008 19:59:21.679706 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "69548cae-9431-4eb0-b839-3a9fd62b74de" (UID: "69548cae-9431-4eb0-b839-3a9fd62b74de"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:59:21.680564 kubelet[2622]: I1008 19:59:21.680204 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "69548cae-9431-4eb0-b839-3a9fd62b74de" (UID: "69548cae-9431-4eb0-b839-3a9fd62b74de"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:59:21.682884 kubelet[2622]: I1008 19:59:21.682834 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "69548cae-9431-4eb0-b839-3a9fd62b74de" (UID: "69548cae-9431-4eb0-b839-3a9fd62b74de"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:59:21.682960 kubelet[2622]: I1008 19:59:21.682911 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "69548cae-9431-4eb0-b839-3a9fd62b74de" (UID: "69548cae-9431-4eb0-b839-3a9fd62b74de"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:59:21.683028 kubelet[2622]: I1008 19:59:21.682939 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-cni-path" (OuterVolumeSpecName: "cni-path") pod "69548cae-9431-4eb0-b839-3a9fd62b74de" (UID: "69548cae-9431-4eb0-b839-3a9fd62b74de"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 8 19:59:21.683804 kubelet[2622]: I1008 19:59:21.683773 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69548cae-9431-4eb0-b839-3a9fd62b74de-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "69548cae-9431-4eb0-b839-3a9fd62b74de" (UID: "69548cae-9431-4eb0-b839-3a9fd62b74de"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 8 19:59:21.683946 kubelet[2622]: I1008 19:59:21.683787 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69548cae-9431-4eb0-b839-3a9fd62b74de-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "69548cae-9431-4eb0-b839-3a9fd62b74de" (UID: "69548cae-9431-4eb0-b839-3a9fd62b74de"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 8 19:59:21.684546 kubelet[2622]: I1008 19:59:21.684504 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69548cae-9431-4eb0-b839-3a9fd62b74de-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "69548cae-9431-4eb0-b839-3a9fd62b74de" (UID: "69548cae-9431-4eb0-b839-3a9fd62b74de"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 8 19:59:21.684759 kubelet[2622]: I1008 19:59:21.684724 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69548cae-9431-4eb0-b839-3a9fd62b74de-kube-api-access-64cds" (OuterVolumeSpecName: "kube-api-access-64cds") pod "69548cae-9431-4eb0-b839-3a9fd62b74de" (UID: "69548cae-9431-4eb0-b839-3a9fd62b74de"). InnerVolumeSpecName "kube-api-access-64cds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 8 19:59:21.759723 systemd[1]: Removed slice kubepods-burstable-pod69548cae_9431_4eb0_b839_3a9fd62b74de.slice - libcontainer container kubepods-burstable-pod69548cae_9431_4eb0_b839_3a9fd62b74de.slice.
Oct 8 19:59:21.759826 systemd[1]: kubepods-burstable-pod69548cae_9431_4eb0_b839_3a9fd62b74de.slice: Consumed 7.291s CPU time.
Oct 8 19:59:21.761473 systemd[1]: Removed slice kubepods-besteffort-poda5f5798d_82dd_4643_9741_3d66121cb8b9.slice - libcontainer container kubepods-besteffort-poda5f5798d_82dd_4643_9741_3d66121cb8b9.slice.
Oct 8 19:59:21.779900 kubelet[2622]: I1008 19:59:21.779827 2622 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-bpf-maps\") on node \"localhost\" DevicePath \"\"" Oct 8 19:59:21.779900 kubelet[2622]: I1008 19:59:21.779881 2622 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Oct 8 19:59:21.779900 kubelet[2622]: I1008 19:59:21.779895 2622 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69548cae-9431-4eb0-b839-3a9fd62b74de-hubble-tls\") on node \"localhost\" DevicePath \"\"" Oct 8 19:59:21.779900 kubelet[2622]: I1008 19:59:21.779906 2622 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 8 19:59:21.779900 kubelet[2622]: I1008 19:59:21.779917 2622 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-cni-path\") on node \"localhost\" DevicePath \"\"" Oct 8 19:59:21.780274 kubelet[2622]: I1008 19:59:21.779933 2622 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69548cae-9431-4eb0-b839-3a9fd62b74de-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 8 19:59:21.780274 kubelet[2622]: I1008 19:59:21.779944 2622 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Oct 8 19:59:21.780274 kubelet[2622]: I1008 19:59:21.779959 2622 reconciler_common.go:289] "Volume detached for volume 
\"kube-api-access-64cds\" (UniqueName: \"kubernetes.io/projected/69548cae-9431-4eb0-b839-3a9fd62b74de-kube-api-access-64cds\") on node \"localhost\" DevicePath \"\"" Oct 8 19:59:21.780274 kubelet[2622]: I1008 19:59:21.779971 2622 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-xtables-lock\") on node \"localhost\" DevicePath \"\"" Oct 8 19:59:21.780274 kubelet[2622]: I1008 19:59:21.779981 2622 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Oct 8 19:59:21.780274 kubelet[2622]: I1008 19:59:21.780004 2622 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69548cae-9431-4eb0-b839-3a9fd62b74de-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Oct 8 19:59:21.780274 kubelet[2622]: I1008 19:59:21.780015 2622 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69548cae-9431-4eb0-b839-3a9fd62b74de-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Oct 8 19:59:21.814837 kubelet[2622]: E1008 19:59:21.814793 2622 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 8 19:59:22.030636 kubelet[2622]: I1008 19:59:22.030293 2622 scope.go:117] "RemoveContainer" containerID="eb2b954e66cf0c6d4b0d1bc0f0a820756a12b36814dd66ca30da0ba62a8261dd" Oct 8 19:59:22.032751 containerd[1460]: time="2024-10-08T19:59:22.032530164Z" level=info msg="RemoveContainer for \"eb2b954e66cf0c6d4b0d1bc0f0a820756a12b36814dd66ca30da0ba62a8261dd\"" Oct 8 19:59:22.127111 containerd[1460]: time="2024-10-08T19:59:22.126876641Z" level=info msg="RemoveContainer for 
\"eb2b954e66cf0c6d4b0d1bc0f0a820756a12b36814dd66ca30da0ba62a8261dd\" returns successfully" Oct 8 19:59:22.127355 kubelet[2622]: I1008 19:59:22.127220 2622 scope.go:117] "RemoveContainer" containerID="eb2b954e66cf0c6d4b0d1bc0f0a820756a12b36814dd66ca30da0ba62a8261dd" Oct 8 19:59:22.130832 containerd[1460]: time="2024-10-08T19:59:22.130781285Z" level=error msg="ContainerStatus for \"eb2b954e66cf0c6d4b0d1bc0f0a820756a12b36814dd66ca30da0ba62a8261dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb2b954e66cf0c6d4b0d1bc0f0a820756a12b36814dd66ca30da0ba62a8261dd\": not found" Oct 8 19:59:22.131145 kubelet[2622]: E1008 19:59:22.131113 2622 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eb2b954e66cf0c6d4b0d1bc0f0a820756a12b36814dd66ca30da0ba62a8261dd\": not found" containerID="eb2b954e66cf0c6d4b0d1bc0f0a820756a12b36814dd66ca30da0ba62a8261dd" Oct 8 19:59:22.131229 kubelet[2622]: I1008 19:59:22.131150 2622 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eb2b954e66cf0c6d4b0d1bc0f0a820756a12b36814dd66ca30da0ba62a8261dd"} err="failed to get container status \"eb2b954e66cf0c6d4b0d1bc0f0a820756a12b36814dd66ca30da0ba62a8261dd\": rpc error: code = NotFound desc = an error occurred when try to find container \"eb2b954e66cf0c6d4b0d1bc0f0a820756a12b36814dd66ca30da0ba62a8261dd\": not found" Oct 8 19:59:22.131270 kubelet[2622]: I1008 19:59:22.131232 2622 scope.go:117] "RemoveContainer" containerID="571bf4655abf813e486465818d602045642da4fa746481eb19b77f02970b1e63" Oct 8 19:59:22.132453 containerd[1460]: time="2024-10-08T19:59:22.132377962Z" level=info msg="RemoveContainer for \"571bf4655abf813e486465818d602045642da4fa746481eb19b77f02970b1e63\"" Oct 8 19:59:22.152630 containerd[1460]: time="2024-10-08T19:59:22.152577111Z" level=info msg="RemoveContainer for 
\"571bf4655abf813e486465818d602045642da4fa746481eb19b77f02970b1e63\" returns successfully" Oct 8 19:59:22.152960 kubelet[2622]: I1008 19:59:22.152916 2622 scope.go:117] "RemoveContainer" containerID="a94809f2f6b9a357a426d332a72610b7dab857d64171b90504c71fa7d5c22d11" Oct 8 19:59:22.154228 containerd[1460]: time="2024-10-08T19:59:22.154182936Z" level=info msg="RemoveContainer for \"a94809f2f6b9a357a426d332a72610b7dab857d64171b90504c71fa7d5c22d11\"" Oct 8 19:59:22.164488 containerd[1460]: time="2024-10-08T19:59:22.164451506Z" level=info msg="RemoveContainer for \"a94809f2f6b9a357a426d332a72610b7dab857d64171b90504c71fa7d5c22d11\" returns successfully" Oct 8 19:59:22.164781 kubelet[2622]: I1008 19:59:22.164752 2622 scope.go:117] "RemoveContainer" containerID="27ead402208f5d1a45d04a22e9402a9b37d0c8f5b6b94ef4c5371c4f2d2f2978" Oct 8 19:59:22.166076 containerd[1460]: time="2024-10-08T19:59:22.166033376Z" level=info msg="RemoveContainer for \"27ead402208f5d1a45d04a22e9402a9b37d0c8f5b6b94ef4c5371c4f2d2f2978\"" Oct 8 19:59:22.191412 containerd[1460]: time="2024-10-08T19:59:22.191356219Z" level=info msg="RemoveContainer for \"27ead402208f5d1a45d04a22e9402a9b37d0c8f5b6b94ef4c5371c4f2d2f2978\" returns successfully" Oct 8 19:59:22.191713 kubelet[2622]: I1008 19:59:22.191684 2622 scope.go:117] "RemoveContainer" containerID="301ea633b40e733b0c1617cc292e8e7c72b426ae9ead9c68765dd58e76876cba" Oct 8 19:59:22.193215 containerd[1460]: time="2024-10-08T19:59:22.193167784Z" level=info msg="RemoveContainer for \"301ea633b40e733b0c1617cc292e8e7c72b426ae9ead9c68765dd58e76876cba\"" Oct 8 19:59:22.200967 containerd[1460]: time="2024-10-08T19:59:22.200900045Z" level=info msg="RemoveContainer for \"301ea633b40e733b0c1617cc292e8e7c72b426ae9ead9c68765dd58e76876cba\" returns successfully" Oct 8 19:59:22.201472 kubelet[2622]: I1008 19:59:22.201409 2622 scope.go:117] "RemoveContainer" containerID="a73e1f278dbb684054ae85fe6dae657702b6a685f1a89ed7d735079d0678ea00" Oct 8 19:59:22.202675 containerd[1460]: 
time="2024-10-08T19:59:22.202632670Z" level=info msg="RemoveContainer for \"a73e1f278dbb684054ae85fe6dae657702b6a685f1a89ed7d735079d0678ea00\"" Oct 8 19:59:22.251197 containerd[1460]: time="2024-10-08T19:59:22.251155943Z" level=info msg="RemoveContainer for \"a73e1f278dbb684054ae85fe6dae657702b6a685f1a89ed7d735079d0678ea00\" returns successfully" Oct 8 19:59:22.251416 kubelet[2622]: I1008 19:59:22.251383 2622 scope.go:117] "RemoveContainer" containerID="571bf4655abf813e486465818d602045642da4fa746481eb19b77f02970b1e63" Oct 8 19:59:22.251696 containerd[1460]: time="2024-10-08T19:59:22.251655270Z" level=error msg="ContainerStatus for \"571bf4655abf813e486465818d602045642da4fa746481eb19b77f02970b1e63\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"571bf4655abf813e486465818d602045642da4fa746481eb19b77f02970b1e63\": not found" Oct 8 19:59:22.251885 kubelet[2622]: E1008 19:59:22.251846 2622 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"571bf4655abf813e486465818d602045642da4fa746481eb19b77f02970b1e63\": not found" containerID="571bf4655abf813e486465818d602045642da4fa746481eb19b77f02970b1e63" Oct 8 19:59:22.251885 kubelet[2622]: I1008 19:59:22.251882 2622 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"571bf4655abf813e486465818d602045642da4fa746481eb19b77f02970b1e63"} err="failed to get container status \"571bf4655abf813e486465818d602045642da4fa746481eb19b77f02970b1e63\": rpc error: code = NotFound desc = an error occurred when try to find container \"571bf4655abf813e486465818d602045642da4fa746481eb19b77f02970b1e63\": not found" Oct 8 19:59:22.251885 kubelet[2622]: I1008 19:59:22.251913 2622 scope.go:117] "RemoveContainer" containerID="a94809f2f6b9a357a426d332a72610b7dab857d64171b90504c71fa7d5c22d11" Oct 8 19:59:22.252353 containerd[1460]: 
time="2024-10-08T19:59:22.252296076Z" level=error msg="ContainerStatus for \"a94809f2f6b9a357a426d332a72610b7dab857d64171b90504c71fa7d5c22d11\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a94809f2f6b9a357a426d332a72610b7dab857d64171b90504c71fa7d5c22d11\": not found" Oct 8 19:59:22.252511 kubelet[2622]: E1008 19:59:22.252481 2622 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a94809f2f6b9a357a426d332a72610b7dab857d64171b90504c71fa7d5c22d11\": not found" containerID="a94809f2f6b9a357a426d332a72610b7dab857d64171b90504c71fa7d5c22d11" Oct 8 19:59:22.252511 kubelet[2622]: I1008 19:59:22.252505 2622 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a94809f2f6b9a357a426d332a72610b7dab857d64171b90504c71fa7d5c22d11"} err="failed to get container status \"a94809f2f6b9a357a426d332a72610b7dab857d64171b90504c71fa7d5c22d11\": rpc error: code = NotFound desc = an error occurred when try to find container \"a94809f2f6b9a357a426d332a72610b7dab857d64171b90504c71fa7d5c22d11\": not found" Oct 8 19:59:22.252628 kubelet[2622]: I1008 19:59:22.252521 2622 scope.go:117] "RemoveContainer" containerID="27ead402208f5d1a45d04a22e9402a9b37d0c8f5b6b94ef4c5371c4f2d2f2978" Oct 8 19:59:22.252724 containerd[1460]: time="2024-10-08T19:59:22.252692337Z" level=error msg="ContainerStatus for \"27ead402208f5d1a45d04a22e9402a9b37d0c8f5b6b94ef4c5371c4f2d2f2978\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"27ead402208f5d1a45d04a22e9402a9b37d0c8f5b6b94ef4c5371c4f2d2f2978\": not found" Oct 8 19:59:22.252821 kubelet[2622]: E1008 19:59:22.252795 2622 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"27ead402208f5d1a45d04a22e9402a9b37d0c8f5b6b94ef4c5371c4f2d2f2978\": not found" containerID="27ead402208f5d1a45d04a22e9402a9b37d0c8f5b6b94ef4c5371c4f2d2f2978" Oct 8 19:59:22.252885 kubelet[2622]: I1008 19:59:22.252818 2622 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"27ead402208f5d1a45d04a22e9402a9b37d0c8f5b6b94ef4c5371c4f2d2f2978"} err="failed to get container status \"27ead402208f5d1a45d04a22e9402a9b37d0c8f5b6b94ef4c5371c4f2d2f2978\": rpc error: code = NotFound desc = an error occurred when try to find container \"27ead402208f5d1a45d04a22e9402a9b37d0c8f5b6b94ef4c5371c4f2d2f2978\": not found" Oct 8 19:59:22.252885 kubelet[2622]: I1008 19:59:22.252836 2622 scope.go:117] "RemoveContainer" containerID="301ea633b40e733b0c1617cc292e8e7c72b426ae9ead9c68765dd58e76876cba" Oct 8 19:59:22.253023 containerd[1460]: time="2024-10-08T19:59:22.252979672Z" level=error msg="ContainerStatus for \"301ea633b40e733b0c1617cc292e8e7c72b426ae9ead9c68765dd58e76876cba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"301ea633b40e733b0c1617cc292e8e7c72b426ae9ead9c68765dd58e76876cba\": not found" Oct 8 19:59:22.253227 kubelet[2622]: E1008 19:59:22.253211 2622 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"301ea633b40e733b0c1617cc292e8e7c72b426ae9ead9c68765dd58e76876cba\": not found" containerID="301ea633b40e733b0c1617cc292e8e7c72b426ae9ead9c68765dd58e76876cba" Oct 8 19:59:22.253268 kubelet[2622]: I1008 19:59:22.253229 2622 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"301ea633b40e733b0c1617cc292e8e7c72b426ae9ead9c68765dd58e76876cba"} err="failed to get container status \"301ea633b40e733b0c1617cc292e8e7c72b426ae9ead9c68765dd58e76876cba\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"301ea633b40e733b0c1617cc292e8e7c72b426ae9ead9c68765dd58e76876cba\": not found" Oct 8 19:59:22.253268 kubelet[2622]: I1008 19:59:22.253242 2622 scope.go:117] "RemoveContainer" containerID="a73e1f278dbb684054ae85fe6dae657702b6a685f1a89ed7d735079d0678ea00" Oct 8 19:59:22.253420 containerd[1460]: time="2024-10-08T19:59:22.253393055Z" level=error msg="ContainerStatus for \"a73e1f278dbb684054ae85fe6dae657702b6a685f1a89ed7d735079d0678ea00\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a73e1f278dbb684054ae85fe6dae657702b6a685f1a89ed7d735079d0678ea00\": not found" Oct 8 19:59:22.253507 kubelet[2622]: E1008 19:59:22.253489 2622 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a73e1f278dbb684054ae85fe6dae657702b6a685f1a89ed7d735079d0678ea00\": not found" containerID="a73e1f278dbb684054ae85fe6dae657702b6a685f1a89ed7d735079d0678ea00" Oct 8 19:59:22.253538 kubelet[2622]: I1008 19:59:22.253507 2622 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a73e1f278dbb684054ae85fe6dae657702b6a685f1a89ed7d735079d0678ea00"} err="failed to get container status \"a73e1f278dbb684054ae85fe6dae657702b6a685f1a89ed7d735079d0678ea00\": rpc error: code = NotFound desc = an error occurred when try to find container \"a73e1f278dbb684054ae85fe6dae657702b6a685f1a89ed7d735079d0678ea00\": not found" Oct 8 19:59:22.390643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f6a1565c093b4ff8830a86aec8d84d95a44229daa79f5c755b067244d96854a-rootfs.mount: Deactivated successfully. Oct 8 19:59:22.390747 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f6a1565c093b4ff8830a86aec8d84d95a44229daa79f5c755b067244d96854a-shm.mount: Deactivated successfully. 
Oct 8 19:59:22.390822 systemd[1]: var-lib-kubelet-pods-69548cae\x2d9431\x2d4eb0\x2db839\x2d3a9fd62b74de-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 8 19:59:22.390915 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82dc90374726927888fefc3d318d03cb14eee3c7405fc75fea114149b144dc45-rootfs.mount: Deactivated successfully. Oct 8 19:59:22.391028 systemd[1]: var-lib-kubelet-pods-a5f5798d\x2d82dd\x2d4643\x2d9741\x2d3d66121cb8b9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv8r6d.mount: Deactivated successfully. Oct 8 19:59:22.391126 systemd[1]: var-lib-kubelet-pods-69548cae\x2d9431\x2d4eb0\x2db839\x2d3a9fd62b74de-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d64cds.mount: Deactivated successfully. Oct 8 19:59:22.391199 systemd[1]: var-lib-kubelet-pods-69548cae\x2d9431\x2d4eb0\x2db839\x2d3a9fd62b74de-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 8 19:59:23.316595 sshd[4279]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:23.330368 systemd[1]: sshd@25-10.0.0.67:22-10.0.0.1:37850.service: Deactivated successfully. Oct 8 19:59:23.332483 systemd[1]: session-26.scope: Deactivated successfully. Oct 8 19:59:23.334166 systemd-logind[1446]: Session 26 logged out. Waiting for processes to exit. Oct 8 19:59:23.344363 systemd[1]: Started sshd@26-10.0.0.67:22-10.0.0.1:58904.service - OpenSSH per-connection server daemon (10.0.0.1:58904). Oct 8 19:59:23.345394 systemd-logind[1446]: Removed session 26. Oct 8 19:59:23.382113 sshd[4442]: Accepted publickey for core from 10.0.0.1 port 58904 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:59:23.383736 sshd[4442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:23.387819 systemd-logind[1446]: New session 27 of user core. Oct 8 19:59:23.399185 systemd[1]: Started session-27.scope - Session 27 of User core. 
Oct 8 19:59:23.628555 kubelet[2622]: I1008 19:59:23.628394 2622 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-10-08T19:59:23Z","lastTransitionTime":"2024-10-08T19:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Oct 8 19:59:23.752524 kubelet[2622]: I1008 19:59:23.752483 2622 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69548cae-9431-4eb0-b839-3a9fd62b74de" path="/var/lib/kubelet/pods/69548cae-9431-4eb0-b839-3a9fd62b74de/volumes" Oct 8 19:59:23.753435 kubelet[2622]: I1008 19:59:23.753402 2622 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5f5798d-82dd-4643-9741-3d66121cb8b9" path="/var/lib/kubelet/pods/a5f5798d-82dd-4643-9741-3d66121cb8b9/volumes" Oct 8 19:59:23.934611 sshd[4442]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:23.948857 systemd[1]: sshd@26-10.0.0.67:22-10.0.0.1:58904.service: Deactivated successfully. 
Oct 8 19:59:23.951522 kubelet[2622]: I1008 19:59:23.951471 2622 topology_manager.go:215] "Topology Admit Handler" podUID="c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4" podNamespace="kube-system" podName="cilium-64cgz" Oct 8 19:59:23.951669 kubelet[2622]: E1008 19:59:23.951559 2622 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a5f5798d-82dd-4643-9741-3d66121cb8b9" containerName="cilium-operator" Oct 8 19:59:23.951669 kubelet[2622]: E1008 19:59:23.951573 2622 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69548cae-9431-4eb0-b839-3a9fd62b74de" containerName="mount-cgroup" Oct 8 19:59:23.951669 kubelet[2622]: E1008 19:59:23.951581 2622 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69548cae-9431-4eb0-b839-3a9fd62b74de" containerName="apply-sysctl-overwrites" Oct 8 19:59:23.951669 kubelet[2622]: E1008 19:59:23.951588 2622 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69548cae-9431-4eb0-b839-3a9fd62b74de" containerName="clean-cilium-state" Oct 8 19:59:23.951669 kubelet[2622]: E1008 19:59:23.951595 2622 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69548cae-9431-4eb0-b839-3a9fd62b74de" containerName="cilium-agent" Oct 8 19:59:23.951669 kubelet[2622]: E1008 19:59:23.951602 2622 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69548cae-9431-4eb0-b839-3a9fd62b74de" containerName="mount-bpf-fs" Oct 8 19:59:23.951669 kubelet[2622]: I1008 19:59:23.951633 2622 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5f5798d-82dd-4643-9741-3d66121cb8b9" containerName="cilium-operator" Oct 8 19:59:23.951669 kubelet[2622]: I1008 19:59:23.951640 2622 memory_manager.go:354] "RemoveStaleState removing state" podUID="69548cae-9431-4eb0-b839-3a9fd62b74de" containerName="cilium-agent" Oct 8 19:59:23.953297 systemd[1]: session-27.scope: Deactivated successfully. Oct 8 19:59:23.956313 systemd-logind[1446]: Session 27 logged out. Waiting for processes to exit. 
Oct 8 19:59:23.967375 systemd[1]: Started sshd@27-10.0.0.67:22-10.0.0.1:58910.service - OpenSSH per-connection server daemon (10.0.0.1:58910). Oct 8 19:59:23.967920 systemd-logind[1446]: Removed session 27. Oct 8 19:59:23.975312 systemd[1]: Created slice kubepods-burstable-podc28fd2c2_0ce1_4e08_a0ce_2bbaed996bb4.slice - libcontainer container kubepods-burstable-podc28fd2c2_0ce1_4e08_a0ce_2bbaed996bb4.slice. Oct 8 19:59:23.991146 kubelet[2622]: I1008 19:59:23.991114 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4-bpf-maps\") pod \"cilium-64cgz\" (UID: \"c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4\") " pod="kube-system/cilium-64cgz" Oct 8 19:59:23.991474 kubelet[2622]: I1008 19:59:23.991310 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4-etc-cni-netd\") pod \"cilium-64cgz\" (UID: \"c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4\") " pod="kube-system/cilium-64cgz" Oct 8 19:59:23.991474 kubelet[2622]: I1008 19:59:23.991349 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4-lib-modules\") pod \"cilium-64cgz\" (UID: \"c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4\") " pod="kube-system/cilium-64cgz" Oct 8 19:59:23.991474 kubelet[2622]: I1008 19:59:23.991362 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4-host-proc-sys-kernel\") pod \"cilium-64cgz\" (UID: \"c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4\") " pod="kube-system/cilium-64cgz" Oct 8 19:59:23.991474 kubelet[2622]: I1008 19:59:23.991376 2622 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7m8h\" (UniqueName: \"kubernetes.io/projected/c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4-kube-api-access-s7m8h\") pod \"cilium-64cgz\" (UID: \"c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4\") " pod="kube-system/cilium-64cgz" Oct 8 19:59:23.991474 kubelet[2622]: I1008 19:59:23.991393 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4-xtables-lock\") pod \"cilium-64cgz\" (UID: \"c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4\") " pod="kube-system/cilium-64cgz" Oct 8 19:59:23.991627 kubelet[2622]: I1008 19:59:23.991425 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4-clustermesh-secrets\") pod \"cilium-64cgz\" (UID: \"c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4\") " pod="kube-system/cilium-64cgz" Oct 8 19:59:23.991627 kubelet[2622]: I1008 19:59:23.991440 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4-host-proc-sys-net\") pod \"cilium-64cgz\" (UID: \"c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4\") " pod="kube-system/cilium-64cgz" Oct 8 19:59:23.991627 kubelet[2622]: I1008 19:59:23.991455 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4-hubble-tls\") pod \"cilium-64cgz\" (UID: \"c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4\") " pod="kube-system/cilium-64cgz" Oct 8 19:59:23.991627 kubelet[2622]: I1008 19:59:23.991541 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4-cilium-run\") pod \"cilium-64cgz\" (UID: \"c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4\") " pod="kube-system/cilium-64cgz" Oct 8 19:59:23.991627 kubelet[2622]: I1008 19:59:23.991590 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4-cni-path\") pod \"cilium-64cgz\" (UID: \"c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4\") " pod="kube-system/cilium-64cgz" Oct 8 19:59:23.991627 kubelet[2622]: I1008 19:59:23.991616 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4-cilium-config-path\") pod \"cilium-64cgz\" (UID: \"c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4\") " pod="kube-system/cilium-64cgz" Oct 8 19:59:23.991763 kubelet[2622]: I1008 19:59:23.991654 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4-hostproc\") pod \"cilium-64cgz\" (UID: \"c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4\") " pod="kube-system/cilium-64cgz" Oct 8 19:59:23.991763 kubelet[2622]: I1008 19:59:23.991678 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4-cilium-ipsec-secrets\") pod \"cilium-64cgz\" (UID: \"c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4\") " pod="kube-system/cilium-64cgz" Oct 8 19:59:23.991763 kubelet[2622]: I1008 19:59:23.991695 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4-cilium-cgroup\") pod \"cilium-64cgz\" (UID: 
\"c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4\") " pod="kube-system/cilium-64cgz" Oct 8 19:59:24.003391 sshd[4455]: Accepted publickey for core from 10.0.0.1 port 58910 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:59:24.004890 sshd[4455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:24.008444 systemd-logind[1446]: New session 28 of user core. Oct 8 19:59:24.020169 systemd[1]: Started session-28.scope - Session 28 of User core. Oct 8 19:59:24.070288 sshd[4455]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:24.088135 systemd[1]: sshd@27-10.0.0.67:22-10.0.0.1:58910.service: Deactivated successfully. Oct 8 19:59:24.090248 systemd[1]: session-28.scope: Deactivated successfully. Oct 8 19:59:24.091771 systemd-logind[1446]: Session 28 logged out. Waiting for processes to exit. Oct 8 19:59:24.106432 systemd[1]: Started sshd@28-10.0.0.67:22-10.0.0.1:58916.service - OpenSSH per-connection server daemon (10.0.0.1:58916). Oct 8 19:59:24.114749 systemd-logind[1446]: Removed session 28. Oct 8 19:59:24.142148 sshd[4466]: Accepted publickey for core from 10.0.0.1 port 58916 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 19:59:24.143562 sshd[4466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:59:24.147185 systemd-logind[1446]: New session 29 of user core. Oct 8 19:59:24.155326 systemd[1]: Started session-29.scope - Session 29 of User core. 
Oct 8 19:59:24.279102 kubelet[2622]: E1008 19:59:24.278946 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:24.279797 containerd[1460]: time="2024-10-08T19:59:24.279544759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-64cgz,Uid:c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4,Namespace:kube-system,Attempt:0,}" Oct 8 19:59:24.301188 containerd[1460]: time="2024-10-08T19:59:24.301095001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:59:24.301188 containerd[1460]: time="2024-10-08T19:59:24.301153823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:59:24.301188 containerd[1460]: time="2024-10-08T19:59:24.301168942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:59:24.301376 containerd[1460]: time="2024-10-08T19:59:24.301259875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:59:24.325185 systemd[1]: Started cri-containerd-39810729f163cd5242528ec8763e3951ba806fc841a5a3ec488723d7bacd85d6.scope - libcontainer container 39810729f163cd5242528ec8763e3951ba806fc841a5a3ec488723d7bacd85d6. 
Oct 8 19:59:24.346167 containerd[1460]: time="2024-10-08T19:59:24.346119363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-64cgz,Uid:c28fd2c2-0ce1-4e08-a0ce-2bbaed996bb4,Namespace:kube-system,Attempt:0,} returns sandbox id \"39810729f163cd5242528ec8763e3951ba806fc841a5a3ec488723d7bacd85d6\"" Oct 8 19:59:24.347147 kubelet[2622]: E1008 19:59:24.346980 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:24.349909 containerd[1460]: time="2024-10-08T19:59:24.349865193Z" level=info msg="CreateContainer within sandbox \"39810729f163cd5242528ec8763e3951ba806fc841a5a3ec488723d7bacd85d6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 8 19:59:24.365155 containerd[1460]: time="2024-10-08T19:59:24.365109304Z" level=info msg="CreateContainer within sandbox \"39810729f163cd5242528ec8763e3951ba806fc841a5a3ec488723d7bacd85d6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"af7d8016eadb6f4ef54fe1e5f2a150ebde61595bdf7a951279a858f6b2bed207\"" Oct 8 19:59:24.365631 containerd[1460]: time="2024-10-08T19:59:24.365608420Z" level=info msg="StartContainer for \"af7d8016eadb6f4ef54fe1e5f2a150ebde61595bdf7a951279a858f6b2bed207\"" Oct 8 19:59:24.391178 systemd[1]: Started cri-containerd-af7d8016eadb6f4ef54fe1e5f2a150ebde61595bdf7a951279a858f6b2bed207.scope - libcontainer container af7d8016eadb6f4ef54fe1e5f2a150ebde61595bdf7a951279a858f6b2bed207. Oct 8 19:59:24.418035 containerd[1460]: time="2024-10-08T19:59:24.417909142Z" level=info msg="StartContainer for \"af7d8016eadb6f4ef54fe1e5f2a150ebde61595bdf7a951279a858f6b2bed207\" returns successfully" Oct 8 19:59:24.426120 systemd[1]: cri-containerd-af7d8016eadb6f4ef54fe1e5f2a150ebde61595bdf7a951279a858f6b2bed207.scope: Deactivated successfully. 
Oct 8 19:59:24.460919 containerd[1460]: time="2024-10-08T19:59:24.460854555Z" level=info msg="shim disconnected" id=af7d8016eadb6f4ef54fe1e5f2a150ebde61595bdf7a951279a858f6b2bed207 namespace=k8s.io Oct 8 19:59:24.460919 containerd[1460]: time="2024-10-08T19:59:24.460917604Z" level=warning msg="cleaning up after shim disconnected" id=af7d8016eadb6f4ef54fe1e5f2a150ebde61595bdf7a951279a858f6b2bed207 namespace=k8s.io Oct 8 19:59:24.460919 containerd[1460]: time="2024-10-08T19:59:24.460926320Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:59:25.064511 kubelet[2622]: E1008 19:59:25.064473 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:25.067475 containerd[1460]: time="2024-10-08T19:59:25.067415778Z" level=info msg="CreateContainer within sandbox \"39810729f163cd5242528ec8763e3951ba806fc841a5a3ec488723d7bacd85d6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 8 19:59:25.081760 containerd[1460]: time="2024-10-08T19:59:25.081707959Z" level=info msg="CreateContainer within sandbox \"39810729f163cd5242528ec8763e3951ba806fc841a5a3ec488723d7bacd85d6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2a28ec5381871e5e115695066b1866101c3f1d3532bf57aa52da153742837d0d\"" Oct 8 19:59:25.082281 containerd[1460]: time="2024-10-08T19:59:25.082254706Z" level=info msg="StartContainer for \"2a28ec5381871e5e115695066b1866101c3f1d3532bf57aa52da153742837d0d\"" Oct 8 19:59:25.112226 systemd[1]: Started cri-containerd-2a28ec5381871e5e115695066b1866101c3f1d3532bf57aa52da153742837d0d.scope - libcontainer container 2a28ec5381871e5e115695066b1866101c3f1d3532bf57aa52da153742837d0d. 
Oct 8 19:59:25.138853 containerd[1460]: time="2024-10-08T19:59:25.138811081Z" level=info msg="StartContainer for \"2a28ec5381871e5e115695066b1866101c3f1d3532bf57aa52da153742837d0d\" returns successfully" Oct 8 19:59:25.145534 systemd[1]: cri-containerd-2a28ec5381871e5e115695066b1866101c3f1d3532bf57aa52da153742837d0d.scope: Deactivated successfully. Oct 8 19:59:25.164983 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a28ec5381871e5e115695066b1866101c3f1d3532bf57aa52da153742837d0d-rootfs.mount: Deactivated successfully. Oct 8 19:59:25.172263 containerd[1460]: time="2024-10-08T19:59:25.172191622Z" level=info msg="shim disconnected" id=2a28ec5381871e5e115695066b1866101c3f1d3532bf57aa52da153742837d0d namespace=k8s.io Oct 8 19:59:25.172263 containerd[1460]: time="2024-10-08T19:59:25.172258088Z" level=warning msg="cleaning up after shim disconnected" id=2a28ec5381871e5e115695066b1866101c3f1d3532bf57aa52da153742837d0d namespace=k8s.io Oct 8 19:59:25.172263 containerd[1460]: time="2024-10-08T19:59:25.172270402Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:59:26.067639 kubelet[2622]: E1008 19:59:26.067601 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:26.070242 containerd[1460]: time="2024-10-08T19:59:26.070125443Z" level=info msg="CreateContainer within sandbox \"39810729f163cd5242528ec8763e3951ba806fc841a5a3ec488723d7bacd85d6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 8 19:59:26.104744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1849955879.mount: Deactivated successfully. 
Oct 8 19:59:26.106773 containerd[1460]: time="2024-10-08T19:59:26.106742803Z" level=info msg="CreateContainer within sandbox \"39810729f163cd5242528ec8763e3951ba806fc841a5a3ec488723d7bacd85d6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5be54f5ef77daa40158f61ebf66addeceda929b63e25d590911815742a018554\"" Oct 8 19:59:26.107390 containerd[1460]: time="2024-10-08T19:59:26.107359972Z" level=info msg="StartContainer for \"5be54f5ef77daa40158f61ebf66addeceda929b63e25d590911815742a018554\"" Oct 8 19:59:26.141311 systemd[1]: Started cri-containerd-5be54f5ef77daa40158f61ebf66addeceda929b63e25d590911815742a018554.scope - libcontainer container 5be54f5ef77daa40158f61ebf66addeceda929b63e25d590911815742a018554. Oct 8 19:59:26.169838 containerd[1460]: time="2024-10-08T19:59:26.169788210Z" level=info msg="StartContainer for \"5be54f5ef77daa40158f61ebf66addeceda929b63e25d590911815742a018554\" returns successfully" Oct 8 19:59:26.169940 systemd[1]: cri-containerd-5be54f5ef77daa40158f61ebf66addeceda929b63e25d590911815742a018554.scope: Deactivated successfully. Oct 8 19:59:26.192965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5be54f5ef77daa40158f61ebf66addeceda929b63e25d590911815742a018554-rootfs.mount: Deactivated successfully. 
Oct 8 19:59:26.198680 containerd[1460]: time="2024-10-08T19:59:26.198609741Z" level=info msg="shim disconnected" id=5be54f5ef77daa40158f61ebf66addeceda929b63e25d590911815742a018554 namespace=k8s.io Oct 8 19:59:26.198680 containerd[1460]: time="2024-10-08T19:59:26.198677620Z" level=warning msg="cleaning up after shim disconnected" id=5be54f5ef77daa40158f61ebf66addeceda929b63e25d590911815742a018554 namespace=k8s.io Oct 8 19:59:26.198680 containerd[1460]: time="2024-10-08T19:59:26.198688170Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:59:26.816379 kubelet[2622]: E1008 19:59:26.816331 2622 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 8 19:59:27.071306 kubelet[2622]: E1008 19:59:27.071040 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:27.074032 containerd[1460]: time="2024-10-08T19:59:27.073985823Z" level=info msg="CreateContainer within sandbox \"39810729f163cd5242528ec8763e3951ba806fc841a5a3ec488723d7bacd85d6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 8 19:59:27.088153 containerd[1460]: time="2024-10-08T19:59:27.088107270Z" level=info msg="CreateContainer within sandbox \"39810729f163cd5242528ec8763e3951ba806fc841a5a3ec488723d7bacd85d6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"43bac58ae1e4faa3718a85eefd9071a94a44e553d01f821fdf79c0bd21fd24ba\"" Oct 8 19:59:27.088621 containerd[1460]: time="2024-10-08T19:59:27.088594453Z" level=info msg="StartContainer for \"43bac58ae1e4faa3718a85eefd9071a94a44e553d01f821fdf79c0bd21fd24ba\"" Oct 8 19:59:27.121336 systemd[1]: Started cri-containerd-43bac58ae1e4faa3718a85eefd9071a94a44e553d01f821fdf79c0bd21fd24ba.scope - libcontainer container 43bac58ae1e4faa3718a85eefd9071a94a44e553d01f821fdf79c0bd21fd24ba.
Oct 8 19:59:27.147405 systemd[1]: cri-containerd-43bac58ae1e4faa3718a85eefd9071a94a44e553d01f821fdf79c0bd21fd24ba.scope: Deactivated successfully. Oct 8 19:59:27.150156 containerd[1460]: time="2024-10-08T19:59:27.150105421Z" level=info msg="StartContainer for \"43bac58ae1e4faa3718a85eefd9071a94a44e553d01f821fdf79c0bd21fd24ba\" returns successfully" Oct 8 19:59:27.168269 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43bac58ae1e4faa3718a85eefd9071a94a44e553d01f821fdf79c0bd21fd24ba-rootfs.mount: Deactivated successfully. Oct 8 19:59:27.172691 containerd[1460]: time="2024-10-08T19:59:27.172630646Z" level=info msg="shim disconnected" id=43bac58ae1e4faa3718a85eefd9071a94a44e553d01f821fdf79c0bd21fd24ba namespace=k8s.io Oct 8 19:59:27.172789 containerd[1460]: time="2024-10-08T19:59:27.172693595Z" level=warning msg="cleaning up after shim disconnected" id=43bac58ae1e4faa3718a85eefd9071a94a44e553d01f821fdf79c0bd21fd24ba namespace=k8s.io Oct 8 19:59:27.172789 containerd[1460]: time="2024-10-08T19:59:27.172702943Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:59:28.076234 kubelet[2622]: E1008 19:59:28.076197 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:28.078804 containerd[1460]: time="2024-10-08T19:59:28.078759801Z" level=info msg="CreateContainer within sandbox \"39810729f163cd5242528ec8763e3951ba806fc841a5a3ec488723d7bacd85d6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 8 19:59:28.097778 containerd[1460]: time="2024-10-08T19:59:28.097726439Z" level=info msg="CreateContainer within sandbox \"39810729f163cd5242528ec8763e3951ba806fc841a5a3ec488723d7bacd85d6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b544ed7e7f45e07e33a10fdbbf33be168e0c44f74a088e41f604d228da6b3431\"" Oct 8 19:59:28.098331 containerd[1460]: time="2024-10-08T19:59:28.098303221Z" level=info msg="StartContainer for \"b544ed7e7f45e07e33a10fdbbf33be168e0c44f74a088e41f604d228da6b3431\""
Oct 8 19:59:28.135279 systemd[1]: Started cri-containerd-b544ed7e7f45e07e33a10fdbbf33be168e0c44f74a088e41f604d228da6b3431.scope - libcontainer container b544ed7e7f45e07e33a10fdbbf33be168e0c44f74a088e41f604d228da6b3431. Oct 8 19:59:28.176233 containerd[1460]: time="2024-10-08T19:59:28.176168047Z" level=info msg="StartContainer for \"b544ed7e7f45e07e33a10fdbbf33be168e0c44f74a088e41f604d228da6b3431\" returns successfully" Oct 8 19:59:28.636085 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Oct 8 19:59:29.081226 kubelet[2622]: E1008 19:59:29.081195 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:29.128465 kubelet[2622]: I1008 19:59:29.128395 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-64cgz" podStartSLOduration=6.128374739 podStartE2EDuration="6.128374739s" podCreationTimestamp="2024-10-08 19:59:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:59:29.128286041 +0000 UTC m=+97.472598266" watchObservedRunningTime="2024-10-08 19:59:29.128374739 +0000 UTC m=+97.472686964" Oct 8 19:59:30.280502 kubelet[2622]: E1008 19:59:30.280451 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:31.884395 systemd-networkd[1387]: lxc_health: Link UP Oct 8 19:59:31.890429 systemd-networkd[1387]: lxc_health: Gained carrier Oct 8 19:59:32.283465 kubelet[2622]: E1008 19:59:32.281678 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:59:33.088513 kubelet[2622]: E1008 19:59:33.088475 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:33.528283 systemd-networkd[1387]: lxc_health: Gained IPv6LL Oct 8 19:59:34.090975 kubelet[2622]: E1008 19:59:34.090931 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:59:38.953425 sshd[4466]: pam_unix(sshd:session): session closed for user core Oct 8 19:59:38.957959 systemd[1]: sshd@28-10.0.0.67:22-10.0.0.1:58916.service: Deactivated successfully. Oct 8 19:59:38.960230 systemd[1]: session-29.scope: Deactivated successfully. Oct 8 19:59:38.960878 systemd-logind[1446]: Session 29 logged out. Waiting for processes to exit. Oct 8 19:59:38.961990 systemd-logind[1446]: Removed session 29.