Oct 8 20:00:12.898978 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:24:27 -00 2024 Oct 8 20:00:12.899000 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 8 20:00:12.899011 kernel: BIOS-provided physical RAM map: Oct 8 20:00:12.899018 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 8 20:00:12.899024 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Oct 8 20:00:12.899030 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Oct 8 20:00:12.899037 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Oct 8 20:00:12.899043 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Oct 8 20:00:12.899050 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Oct 8 20:00:12.899056 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Oct 8 20:00:12.899064 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Oct 8 20:00:12.899071 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Oct 8 20:00:12.899077 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Oct 8 20:00:12.899083 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Oct 8 20:00:12.899091 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Oct 8 20:00:12.899098 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Oct 8 20:00:12.899110 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Oct 8 20:00:12.899118 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Oct 8 20:00:12.899127 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Oct 8 20:00:12.899134 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 8 20:00:12.899141 kernel: NX (Execute Disable) protection: active Oct 8 20:00:12.899147 kernel: APIC: Static calls initialized Oct 8 20:00:12.899154 kernel: efi: EFI v2.7 by EDK II Oct 8 20:00:12.899161 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118 Oct 8 20:00:12.899167 kernel: SMBIOS 2.8 present. 
Oct 8 20:00:12.899174 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Oct 8 20:00:12.899181 kernel: Hypervisor detected: KVM Oct 8 20:00:12.899190 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 8 20:00:12.899197 kernel: kvm-clock: using sched offset of 3954373716 cycles Oct 8 20:00:12.899203 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 8 20:00:12.899211 kernel: tsc: Detected 2794.748 MHz processor Oct 8 20:00:12.899218 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 8 20:00:12.899225 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 8 20:00:12.899232 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Oct 8 20:00:12.899239 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Oct 8 20:00:12.899246 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 8 20:00:12.899254 kernel: Using GB pages for direct mapping Oct 8 20:00:12.899261 kernel: Secure boot disabled Oct 8 20:00:12.899268 kernel: ACPI: Early table checksum verification disabled Oct 8 20:00:12.899275 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Oct 8 20:00:12.899285 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Oct 8 20:00:12.899293 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:00:12.899300 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:00:12.899309 kernel: ACPI: FACS 0x000000009CBDD000 000040 Oct 8 20:00:12.899316 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:00:12.899323 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:00:12.899331 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:00:12.899338 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 20:00:12.899345 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Oct 8 20:00:12.899352 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Oct 8 20:00:12.899361 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Oct 8 20:00:12.899368 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Oct 8 20:00:12.899376 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Oct 8 20:00:12.899383 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Oct 8 20:00:12.899390 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Oct 8 20:00:12.899397 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Oct 8 20:00:12.899404 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Oct 8 20:00:12.899411 kernel: No NUMA configuration found Oct 8 20:00:12.899418 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Oct 8 20:00:12.899428 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Oct 8 20:00:12.899449 kernel: Zone ranges: Oct 8 20:00:12.899458 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 8 20:00:12.899467 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Oct 8 20:00:12.899474 kernel: Normal empty Oct 8 20:00:12.899483 kernel: Movable zone start for each node Oct 8 20:00:12.899490 kernel: Early memory node ranges Oct 8 20:00:12.899497 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Oct 8 20:00:12.899504 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Oct 8 20:00:12.899512 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Oct 8 20:00:12.899521 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Oct 8 20:00:12.899528 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Oct 8 20:00:12.899535 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Oct 8 20:00:12.899542 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Oct 8 20:00:12.899549 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 8 20:00:12.899556 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Oct 8 20:00:12.899564 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Oct 8 20:00:12.899571 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 8 20:00:12.899578 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Oct 8 20:00:12.899590 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Oct 8 20:00:12.899601 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Oct 8 20:00:12.899609 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 8 20:00:12.899616 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 8 20:00:12.899624 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 8 20:00:12.899631 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 8 20:00:12.899638 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 8 20:00:12.899645 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 8 20:00:12.899652 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 8 20:00:12.899662 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 8 20:00:12.899669 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 8 20:00:12.899676 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 8 20:00:12.899683 kernel: TSC deadline timer available Oct 8 20:00:12.899690 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Oct 8 20:00:12.899697 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 8 20:00:12.899704 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 8 20:00:12.899711 kernel: kvm-guest: setup PV sched yield Oct 8 20:00:12.899718 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Oct 8 20:00:12.899726 kernel: Booting paravirtualized kernel on KVM Oct 8 20:00:12.899735 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 8 20:00:12.899743 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Oct 8 20:00:12.899754 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288 Oct 8 20:00:12.899761 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152 Oct 8 20:00:12.899769 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 8 20:00:12.899776 kernel: kvm-guest: PV spinlocks enabled Oct 8 20:00:12.899783 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 8 20:00:12.899791 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 8 20:00:12.899801 kernel: Unknown kernel command line 
parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 8 20:00:12.899808 kernel: random: crng init done Oct 8 20:00:12.899815 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 8 20:00:12.899823 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 8 20:00:12.899830 kernel: Fallback order for Node 0: 0 Oct 8 20:00:12.899837 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Oct 8 20:00:12.899844 kernel: Policy zone: DMA32 Oct 8 20:00:12.899852 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 8 20:00:12.899859 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2305K rwdata, 22716K rodata, 42828K init, 2360K bss, 171124K reserved, 0K cma-reserved) Oct 8 20:00:12.899869 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 8 20:00:12.899876 kernel: ftrace: allocating 37784 entries in 148 pages Oct 8 20:00:12.899883 kernel: ftrace: allocated 148 pages with 3 groups Oct 8 20:00:12.899890 kernel: Dynamic Preempt: voluntary Oct 8 20:00:12.899905 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 8 20:00:12.899915 kernel: rcu: RCU event tracing is enabled. Oct 8 20:00:12.899923 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 8 20:00:12.899931 kernel: Trampoline variant of Tasks RCU enabled. Oct 8 20:00:12.899938 kernel: Rude variant of Tasks RCU enabled. Oct 8 20:00:12.899959 kernel: Tracing variant of Tasks RCU enabled. Oct 8 20:00:12.899966 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 8 20:00:12.899974 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 8 20:00:12.899984 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 8 20:00:12.899992 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 8 20:00:12.899999 kernel: Console: colour dummy device 80x25 Oct 8 20:00:12.900007 kernel: printk: console [ttyS0] enabled Oct 8 20:00:12.900014 kernel: ACPI: Core revision 20230628 Oct 8 20:00:12.900024 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 8 20:00:12.900032 kernel: APIC: Switch to symmetric I/O mode setup Oct 8 20:00:12.900039 kernel: x2apic enabled Oct 8 20:00:12.900047 kernel: APIC: Switched APIC routing to: physical x2apic Oct 8 20:00:12.900054 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Oct 8 20:00:12.900062 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Oct 8 20:00:12.900069 kernel: kvm-guest: setup PV IPIs Oct 8 20:00:12.900077 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 8 20:00:12.900084 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 8 20:00:12.900094 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Oct 8 20:00:12.900101 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 8 20:00:12.900109 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 8 20:00:12.900116 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 8 20:00:12.900124 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 8 20:00:12.900131 kernel: Spectre V2 : Mitigation: Retpolines Oct 8 20:00:12.900139 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 8 20:00:12.900146 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 8 20:00:12.900154 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 8 20:00:12.900163 kernel: RETBleed: Mitigation: untrained return thunk Oct 8 20:00:12.900171 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 8 20:00:12.900179 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 8 20:00:12.900186 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 8 20:00:12.900194 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 8 20:00:12.900202 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 8 20:00:12.900210 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 8 20:00:12.900217 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 8 20:00:12.900227 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 8 20:00:12.900234 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 8 20:00:12.900242 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 8 20:00:12.900250 kernel: Freeing SMP alternatives memory: 32K Oct 8 20:00:12.900257 kernel: pid_max: default: 32768 minimum: 301 Oct 8 20:00:12.900265 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 8 20:00:12.900272 kernel: landlock: Up and running. Oct 8 20:00:12.900279 kernel: SELinux: Initializing. Oct 8 20:00:12.900287 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 8 20:00:12.900297 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 8 20:00:12.900304 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 8 20:00:12.900312 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 8 20:00:12.900320 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 8 20:00:12.900327 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 8 20:00:12.900335 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 8 20:00:12.900342 kernel: ... version: 0 Oct 8 20:00:12.900349 kernel: ... bit width: 48 Oct 8 20:00:12.900357 kernel: ... generic registers: 6 Oct 8 20:00:12.900367 kernel: ... value mask: 0000ffffffffffff Oct 8 20:00:12.900374 kernel: ... max period: 00007fffffffffff Oct 8 20:00:12.900381 kernel: ... fixed-purpose events: 0 Oct 8 20:00:12.900389 kernel: ... event mask: 000000000000003f Oct 8 20:00:12.900411 kernel: signal: max sigframe size: 1776 Oct 8 20:00:12.900426 kernel: rcu: Hierarchical SRCU implementation. Oct 8 20:00:12.900445 kernel: rcu: Max phase no-delay instances is 400. 
Oct 8 20:00:12.900459 kernel: smp: Bringing up secondary CPUs ... Oct 8 20:00:12.900468 kernel: smpboot: x86: Booting SMP configuration: Oct 8 20:00:12.900477 kernel: .... node #0, CPUs: #1 #2 #3 Oct 8 20:00:12.900485 kernel: smp: Brought up 1 node, 4 CPUs Oct 8 20:00:12.900492 kernel: smpboot: Max logical packages: 1 Oct 8 20:00:12.900500 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Oct 8 20:00:12.900508 kernel: devtmpfs: initialized Oct 8 20:00:12.900516 kernel: x86/mm: Memory block size: 128MB Oct 8 20:00:12.900523 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Oct 8 20:00:12.900531 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Oct 8 20:00:12.902814 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Oct 8 20:00:12.902826 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Oct 8 20:00:12.902834 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Oct 8 20:00:12.902842 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 8 20:00:12.902849 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 8 20:00:12.902857 kernel: pinctrl core: initialized pinctrl subsystem Oct 8 20:00:12.902864 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 8 20:00:12.902872 kernel: audit: initializing netlink subsys (disabled) Oct 8 20:00:12.902879 kernel: audit: type=2000 audit(1728417612.627:1): state=initialized audit_enabled=0 res=1 Oct 8 20:00:12.903962 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 8 20:00:12.903974 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 8 20:00:12.903981 kernel: cpuidle: using governor menu Oct 8 20:00:12.903989 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 8 20:00:12.903996 kernel: dca service started, version 1.12.1 Oct 8 20:00:12.904004 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Oct 8 20:00:12.904011 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Oct 8 20:00:12.904019 kernel: PCI: Using configuration type 1 for base access Oct 8 20:00:12.904026 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 8 20:00:12.904034 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 8 20:00:12.904044 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 8 20:00:12.904051 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 8 20:00:12.904059 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 8 20:00:12.904067 kernel: ACPI: Added _OSI(Module Device) Oct 8 20:00:12.904074 kernel: ACPI: Added _OSI(Processor Device) Oct 8 20:00:12.904082 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 8 20:00:12.904089 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 8 20:00:12.904096 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 8 20:00:12.904104 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Oct 8 20:00:12.904113 kernel: ACPI: Interpreter enabled Oct 8 20:00:12.904121 kernel: ACPI: PM: (supports S0 S3 S5) Oct 8 20:00:12.904128 kernel: ACPI: Using IOAPIC for interrupt routing Oct 8 20:00:12.904136 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 8 20:00:12.904144 kernel: PCI: Using E820 reservations for host bridge windows Oct 8 20:00:12.904151 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 8 20:00:12.904158 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 8 20:00:12.904338 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 8 20:00:12.904490 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 8 20:00:12.904614 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 8 20:00:12.904624 kernel: PCI host bridge to bus 0000:00 Oct 8 20:00:12.904747 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 8 20:00:12.904858 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 8 20:00:12.904986 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 8 20:00:12.905097 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Oct 8 20:00:12.905210 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 8 20:00:12.905319 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Oct 8 20:00:12.905429 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 8 20:00:12.905573 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Oct 8 20:00:12.905705 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Oct 8 20:00:12.905827 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Oct 8 20:00:12.905965 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Oct 8 20:00:12.906106 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Oct 8 20:00:12.906226 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Oct 8 20:00:12.906347 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 8 20:00:12.906493 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Oct 8 20:00:12.906615 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Oct 8 20:00:12.906735 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Oct 8 20:00:12.906860 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Oct 8 20:00:12.907006 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Oct 8 20:00:12.907131 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Oct 8 20:00:12.907253 kernel: pci 0000:00:03.0: reg 0x14: 
[mem 0xc1042000-0xc1042fff] Oct 8 20:00:12.907372 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Oct 8 20:00:12.907519 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Oct 8 20:00:12.907651 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Oct 8 20:00:12.907777 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Oct 8 20:00:12.907907 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Oct 8 20:00:12.908062 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Oct 8 20:00:12.908192 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Oct 8 20:00:12.908310 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 8 20:00:12.908446 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Oct 8 20:00:12.908568 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Oct 8 20:00:12.908700 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Oct 8 20:00:12.908831 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Oct 8 20:00:12.908965 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Oct 8 20:00:12.908976 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 8 20:00:12.908984 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 8 20:00:12.908992 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 8 20:00:12.908999 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 8 20:00:12.909011 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 8 20:00:12.909018 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 8 20:00:12.909026 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 8 20:00:12.909033 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 8 20:00:12.909041 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 8 20:00:12.909049 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 8 20:00:12.909056 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 8 20:00:12.909064 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 8 20:00:12.909071 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 8 20:00:12.909081 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 8 20:00:12.909089 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 8 20:00:12.909097 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 8 20:00:12.909104 kernel: iommu: Default domain type: Translated Oct 8 20:00:12.909112 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 8 20:00:12.909120 kernel: efivars: Registered efivars operations Oct 8 20:00:12.909127 kernel: PCI: Using ACPI for IRQ routing Oct 8 20:00:12.909136 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 8 20:00:12.909143 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Oct 8 20:00:12.909153 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Oct 8 20:00:12.909161 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Oct 8 20:00:12.909168 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Oct 8 20:00:12.909290 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 8 20:00:12.909410 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 8 20:00:12.909541 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 8 20:00:12.909552 kernel: vgaarb: loaded Oct 8 20:00:12.909559 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 
8, 0 Oct 8 20:00:12.909567 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 8 20:00:12.909579 kernel: clocksource: Switched to clocksource kvm-clock Oct 8 20:00:12.909586 kernel: VFS: Disk quotas dquot_6.6.0 Oct 8 20:00:12.909594 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 8 20:00:12.909602 kernel: pnp: PnP ACPI init Oct 8 20:00:12.909733 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Oct 8 20:00:12.909744 kernel: pnp: PnP ACPI: found 6 devices Oct 8 20:00:12.909752 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 8 20:00:12.909760 kernel: NET: Registered PF_INET protocol family Oct 8 20:00:12.909771 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 8 20:00:12.909779 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 8 20:00:12.909787 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 8 20:00:12.909794 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 8 20:00:12.909802 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 8 20:00:12.909810 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 8 20:00:12.909818 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 8 20:00:12.909825 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 8 20:00:12.909833 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 8 20:00:12.909843 kernel: NET: Registered PF_XDP protocol family Oct 8 20:00:12.909977 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Oct 8 20:00:12.910099 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Oct 8 20:00:12.910219 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 8 20:00:12.910337 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 8 20:00:12.910458 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 8 20:00:12.910570 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Oct 8 20:00:12.910680 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Oct 8 20:00:12.910795 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Oct 8 20:00:12.910805 kernel: PCI: CLS 0 bytes, default 64 Oct 8 20:00:12.910812 kernel: Initialise system trusted keyrings Oct 8 20:00:12.910820 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 8 20:00:12.910828 kernel: Key type asymmetric registered Oct 8 20:00:12.910835 kernel: Asymmetric key parser 'x509' registered Oct 8 20:00:12.910843 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Oct 8 20:00:12.910851 kernel: io scheduler mq-deadline registered Oct 8 20:00:12.910858 kernel: io scheduler kyber registered Oct 8 20:00:12.910869 kernel: io scheduler bfq registered Oct 8 20:00:12.910877 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 8 20:00:12.910885 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 8 20:00:12.910893 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 8 20:00:12.910901 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 8 20:00:12.910909 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 8 20:00:12.910916 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 8 20:00:12.910924 kernel: i8042: PNP: PS/2 
Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 8 20:00:12.910932 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 8 20:00:12.910954 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 8 20:00:12.911085 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 8 20:00:12.911201 kernel: rtc_cmos 00:04: registered as rtc0 Oct 8 20:00:12.911211 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 8 20:00:12.911322 kernel: rtc_cmos 00:04: setting system clock to 2024-10-08T20:00:12 UTC (1728417612) Oct 8 20:00:12.911444 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 8 20:00:12.911455 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 8 20:00:12.911466 kernel: efifb: probing for efifb Oct 8 20:00:12.911474 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Oct 8 20:00:12.911482 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Oct 8 20:00:12.911489 kernel: efifb: scrolling: redraw Oct 8 20:00:12.911497 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Oct 8 20:00:12.911505 kernel: Console: switching to colour frame buffer device 100x37 Oct 8 20:00:12.911530 kernel: fb0: EFI VGA frame buffer device Oct 8 20:00:12.911541 kernel: pstore: Using crash dump compression: deflate Oct 8 20:00:12.911549 kernel: pstore: Registered efi_pstore as persistent store backend Oct 8 20:00:12.911559 kernel: NET: Registered PF_INET6 protocol family Oct 8 20:00:12.911567 kernel: Segment Routing with IPv6 Oct 8 20:00:12.911575 kernel: In-situ OAM (IOAM) with IPv6 Oct 8 20:00:12.911583 kernel: NET: Registered PF_PACKET protocol family Oct 8 20:00:12.911591 kernel: Key type dns_resolver registered Oct 8 20:00:12.911599 kernel: IPI shorthand broadcast: enabled Oct 8 20:00:12.911607 kernel: sched_clock: Marking stable (571002364, 117857858)->(738035971, -49175749) Oct 8 20:00:12.911615 kernel: registered taskstats version 1 Oct 8 20:00:12.911623 kernel: Loading compiled-in X.509 certificates Oct 8 20:00:12.911631 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 14ce23fc5070d0471461f1dd6e298a5588e7ba8f' Oct 8 20:00:12.911641 kernel: Key type .fscrypt registered Oct 8 20:00:12.911649 kernel: Key type fscrypt-provisioning registered Oct 8 20:00:12.911657 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 8 20:00:12.911665 kernel: ima: Allocated hash algorithm: sha1 Oct 8 20:00:12.911673 kernel: ima: No architecture policies found Oct 8 20:00:12.911681 kernel: clk: Disabling unused clocks Oct 8 20:00:12.911689 kernel: Freeing unused kernel image (initmem) memory: 42828K Oct 8 20:00:12.911697 kernel: Write protecting the kernel read-only data: 36864k Oct 8 20:00:12.911707 kernel: Freeing unused kernel image (rodata/data gap) memory: 1860K Oct 8 20:00:12.911715 kernel: Run /init as init process Oct 8 20:00:12.911724 kernel: with arguments: Oct 8 20:00:12.911731 kernel: /init Oct 8 20:00:12.911740 kernel: with environment: Oct 8 20:00:12.911747 kernel: HOME=/ Oct 8 20:00:12.911755 kernel: TERM=linux Oct 8 20:00:12.911763 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 8 20:00:12.911773 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 20:00:12.911785 systemd[1]: Detected virtualization kvm. Oct 8 20:00:12.911794 systemd[1]: Detected architecture x86-64. Oct 8 20:00:12.911802 systemd[1]: Running in initrd. Oct 8 20:00:12.911815 systemd[1]: No hostname configured, using default hostname. Oct 8 20:00:12.911825 systemd[1]: Hostname set to . Oct 8 20:00:12.911834 systemd[1]: Initializing machine ID from VM UUID. Oct 8 20:00:12.911842 systemd[1]: Queued start job for default target initrd.target. Oct 8 20:00:12.911851 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 20:00:12.911859 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 20:00:12.911868 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 8 20:00:12.911877 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 8 20:00:12.911885 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 8 20:00:12.911896 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 8 20:00:12.911907 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 8 20:00:12.911915 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 8 20:00:12.911924 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 20:00:12.911932 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 8 20:00:12.911941 systemd[1]: Reached target paths.target - Path Units. Oct 8 20:00:12.911961 systemd[1]: Reached target slices.target - Slice Units. Oct 8 20:00:12.911972 systemd[1]: Reached target swap.target - Swaps. Oct 8 20:00:12.911980 systemd[1]: Reached target timers.target - Timer Units. Oct 8 20:00:12.911989 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 20:00:12.911997 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 20:00:12.912006 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 8 20:00:12.912014 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Oct 8 20:00:12.912023 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 8 20:00:12.912031 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 8 20:00:12.912042 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 20:00:12.912051 systemd[1]: Reached target sockets.target - Socket Units. Oct 8 20:00:12.912060 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 8 20:00:12.912068 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 8 20:00:12.912077 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 8 20:00:12.912085 systemd[1]: Starting systemd-fsck-usr.service... Oct 8 20:00:12.912093 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 8 20:00:12.912102 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 8 20:00:12.912110 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:00:12.912121 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 8 20:00:12.912130 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 20:00:12.912138 systemd[1]: Finished systemd-fsck-usr.service. Oct 8 20:00:12.912147 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 8 20:00:12.912158 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:00:12.912185 systemd-journald[192]: Collecting audit messages is disabled. Oct 8 20:00:12.912206 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 20:00:12.912215 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 8 20:00:12.912226 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 8 20:00:12.912235 systemd-journald[192]: Journal started Oct 8 20:00:12.912254 systemd-journald[192]: Runtime Journal (/run/log/journal/74b65fc2d2ea41a39d34f49493a9f52b) is 6.0M, max 48.3M, 42.2M free. Oct 8 20:00:12.895510 systemd-modules-load[193]: Inserted module 'overlay' Oct 8 20:00:12.916264 systemd[1]: Started systemd-journald.service - Journal Service. Oct 8 20:00:12.914418 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 20:00:12.930593 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 8 20:00:12.928526 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 8 20:00:12.933736 kernel: Bridge firewalling registered Oct 8 20:00:12.930763 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:00:12.931262 systemd-modules-load[193]: Inserted module 'br_netfilter' Oct 8 20:00:12.933995 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 8 20:00:12.938689 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 8 20:00:12.940225 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 20:00:12.942280 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 8 20:00:12.952690 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Oct 8 20:00:12.956146 dracut-cmdline[218]: dracut-dracut-053 Oct 8 20:00:12.958827 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 8 20:00:12.961103 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 8 20:00:12.992501 systemd-resolved[234]: Positive Trust Anchors: Oct 8 20:00:12.992514 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 8 20:00:12.992547 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 8 20:00:12.994968 systemd-resolved[234]: Defaulting to hostname 'linux'. Oct 8 20:00:12.995968 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 8 20:00:13.002457 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 8 20:00:13.033975 kernel: SCSI subsystem initialized Oct 8 20:00:13.043968 kernel: Loading iSCSI transport class v2.0-870. Oct 8 20:00:13.053970 kernel: iscsi: registered transport (tcp) Oct 8 20:00:13.074973 kernel: iscsi: registered transport (qla4xxx) Oct 8 20:00:13.074997 kernel: QLogic iSCSI HBA Driver Oct 8 20:00:13.118626 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 8 20:00:13.126049 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 8 20:00:13.149232 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 8 20:00:13.149269 kernel: device-mapper: uevent: version 1.0.3 Oct 8 20:00:13.150274 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 8 20:00:13.189970 kernel: raid6: avx2x4 gen() 29825 MB/s Oct 8 20:00:13.206966 kernel: raid6: avx2x2 gen() 30726 MB/s Oct 8 20:00:13.224054 kernel: raid6: avx2x1 gen() 25189 MB/s Oct 8 20:00:13.224073 kernel: raid6: using algorithm avx2x2 gen() 30726 MB/s Oct 8 20:00:13.242082 kernel: raid6: .... xor() 19491 MB/s, rmw enabled Oct 8 20:00:13.242134 kernel: raid6: using avx2x2 recovery algorithm Oct 8 20:00:13.262980 kernel: xor: automatically using best checksumming function avx Oct 8 20:00:13.418987 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 8 20:00:13.432692 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 8 20:00:13.448235 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 20:00:13.464167 systemd-udevd[412]: Using default interface naming scheme 'v255'. Oct 8 20:00:13.468483 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 20:00:13.477172 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Oct 8 20:00:13.497327 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Oct 8 20:00:13.534932 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 8 20:00:13.547204 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 20:00:13.621688 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 20:00:13.632227 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 8 20:00:13.644738 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 8 20:00:13.648149 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 20:00:13.651007 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 20:00:13.654070 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 20:00:13.660988 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 8 20:00:13.666165 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Oct 8 20:00:13.667282 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 8 20:00:13.669726 kernel: cryptd: max_cpu_qlen set to 1000 Oct 8 20:00:13.674096 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 8 20:00:13.674146 kernel: GPT:9289727 != 19775487 Oct 8 20:00:13.674176 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 8 20:00:13.674206 kernel: GPT:9289727 != 19775487 Oct 8 20:00:13.674246 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 8 20:00:13.674261 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 8 20:00:13.686488 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 8 20:00:13.701998 kernel: AVX2 version of gcm_enc/dec engaged. Oct 8 20:00:13.702052 kernel: AES CTR mode by8 optimization enabled Oct 8 20:00:13.702067 kernel: libata version 3.00 loaded. Oct 8 20:00:13.711965 kernel: BTRFS: device fsid a8680da2-059a-4648-a8e8-f62925ab33ec devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (476) Oct 8 20:00:13.717285 kernel: ahci 0000:00:1f.2: version 3.0 Oct 8 20:00:13.717544 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 8 20:00:13.721718 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Oct 8 20:00:13.721889 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 8 20:00:13.722049 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (461) Oct 8 20:00:13.722060 kernel: scsi host0: ahci Oct 8 20:00:13.721700 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Oct 8 20:00:13.724737 kernel: scsi host1: ahci Oct 8 20:00:13.724909 kernel: scsi host2: ahci Oct 8 20:00:13.725077 kernel: scsi host3: ahci Oct 8 20:00:13.725976 kernel: scsi host4: ahci Oct 8 20:00:13.726192 kernel: scsi host5: ahci Oct 8 20:00:13.727341 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Oct 8 20:00:13.727367 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Oct 8 20:00:13.728782 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Oct 8 20:00:13.731060 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Oct 8 20:00:13.731082 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Oct 8 20:00:13.731093 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Oct 8 20:00:13.738227 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 8 20:00:13.743608 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 8 20:00:13.745062 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 8 20:00:13.757663 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 8 20:00:13.773094 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 8 20:00:13.774307 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 8 20:00:13.774365 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:00:13.776007 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 20:00:13.786339 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 8 20:00:13.786361 disk-uuid[551]: Primary Header is updated. Oct 8 20:00:13.786361 disk-uuid[551]: Secondary Entries is updated. Oct 8 20:00:13.786361 disk-uuid[551]: Secondary Header is updated. Oct 8 20:00:13.778537 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 20:00:13.792581 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 8 20:00:13.778592 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:00:13.780901 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:00:13.789318 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:00:13.814420 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:00:13.841244 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 20:00:13.857801 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Oct 8 20:00:14.049980 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 8 20:00:14.050056 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 8 20:00:14.050977 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 8 20:00:14.051983 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 8 20:00:14.053458 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 8 20:00:14.053478 kernel: ata3.00: applying bridge limits Oct 8 20:00:14.054972 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 8 20:00:14.056000 kernel: ata3.00: configured for UDMA/100 Oct 8 20:00:14.056076 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 8 20:00:14.057714 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 8 20:00:14.114995 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 8 20:00:14.115321 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 8 20:00:14.129008 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 8 20:00:14.793048 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 8 20:00:14.793121 disk-uuid[552]: The operation has completed successfully. Oct 8 20:00:14.823374 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 8 20:00:14.823540 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 8 20:00:14.852255 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 8 20:00:14.858332 sh[593]: Success Oct 8 20:00:14.870973 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Oct 8 20:00:14.907912 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 8 20:00:14.921806 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 8 20:00:14.926526 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 8 20:00:14.939188 kernel: BTRFS info (device dm-0): first mount of filesystem a8680da2-059a-4648-a8e8-f62925ab33ec Oct 8 20:00:14.939230 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:00:14.939241 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 8 20:00:14.940201 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 8 20:00:14.941965 kernel: BTRFS info (device dm-0): using free space tree Oct 8 20:00:14.945859 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 8 20:00:14.948584 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 8 20:00:14.961081 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 8 20:00:14.963698 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 8 20:00:14.971363 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:00:14.971405 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:00:14.971419 kernel: BTRFS info (device vda6): using free space tree Oct 8 20:00:14.973984 kernel: BTRFS info (device vda6): auto enabling async discard Oct 8 20:00:14.983511 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 8 20:00:14.985168 kernel: BTRFS info (device vda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:00:14.994349 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Oct 8 20:00:15.003116 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 8 20:00:15.058923 ignition[685]: Ignition 2.19.0 Oct 8 20:00:15.058937 ignition[685]: Stage: fetch-offline Oct 8 20:00:15.058995 ignition[685]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:00:15.059007 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 20:00:15.059118 ignition[685]: parsed url from cmdline: "" Oct 8 20:00:15.059123 ignition[685]: no config URL provided Oct 8 20:00:15.059130 ignition[685]: reading system config file "/usr/lib/ignition/user.ign" Oct 8 20:00:15.059142 ignition[685]: no config at "/usr/lib/ignition/user.ign" Oct 8 20:00:15.059173 ignition[685]: op(1): [started] loading QEMU firmware config module Oct 8 20:00:15.059180 ignition[685]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 8 20:00:15.072992 ignition[685]: op(1): [finished] loading QEMU firmware config module Oct 8 20:00:15.073019 ignition[685]: QEMU firmware config was not found. Ignoring... Oct 8 20:00:15.086235 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 20:00:15.103187 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 8 20:00:15.122223 ignition[685]: parsing config with SHA512: 32f94e674c8e69c5e2f91dbfffa7c61446ce55b44675ebe439f4119da772cefe41d82a638542463914808e489a5774c10a14340130bf69ffaf7579d3b7711fbf Oct 8 20:00:15.126201 unknown[685]: fetched base config from "system" Oct 8 20:00:15.126220 unknown[685]: fetched user config from "qemu" Oct 8 20:00:15.126651 ignition[685]: fetch-offline: fetch-offline passed Oct 8 20:00:15.126717 ignition[685]: Ignition finished successfully Oct 8 20:00:15.129197 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 20:00:15.129671 systemd-networkd[782]: lo: Link UP Oct 8 20:00:15.129677 systemd-networkd[782]: lo: Gained carrier Oct 8 20:00:15.131695 systemd-networkd[782]: Enumeration completed Oct 8 20:00:15.131787 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 8 20:00:15.132203 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:00:15.132208 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 20:00:15.133434 systemd-networkd[782]: eth0: Link UP Oct 8 20:00:15.133438 systemd-networkd[782]: eth0: Gained carrier Oct 8 20:00:15.133444 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:00:15.134449 systemd[1]: Reached target network.target - Network. Oct 8 20:00:15.135584 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 8 20:00:15.141165 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 8 20:00:15.151079 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.96/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 8 20:00:15.155765 ignition[785]: Ignition 2.19.0 Oct 8 20:00:15.155772 ignition[785]: Stage: kargs Oct 8 20:00:15.155925 ignition[785]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:00:15.155935 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 20:00:15.160567 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Oct 8 20:00:15.156794 ignition[785]: kargs: kargs passed Oct 8 20:00:15.156836 ignition[785]: Ignition finished successfully Oct 8 20:00:15.181139 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 8 20:00:15.193887 ignition[795]: Ignition 2.19.0 Oct 8 20:00:15.193901 ignition[795]: Stage: disks Oct 8 20:00:15.194137 ignition[795]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:00:15.194154 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 20:00:15.195328 ignition[795]: disks: disks passed Oct 8 20:00:15.197727 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 8 20:00:15.195396 ignition[795]: Ignition finished successfully Oct 8 20:00:15.199171 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 8 20:00:15.200921 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 8 20:00:15.203189 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 8 20:00:15.204425 systemd[1]: Reached target sysinit.target - System Initialization. Oct 8 20:00:15.206347 systemd[1]: Reached target basic.target - Basic System. Oct 8 20:00:15.215125 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 8 20:00:15.241751 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks Oct 8 20:00:15.268364 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 8 20:00:15.276068 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 8 20:00:15.364969 kernel: EXT4-fs (vda9): mounted filesystem 1df90f14-3ad0-4280-9b7d-a34f65d70e4d r/w with ordered data mode. Quota mode: none. Oct 8 20:00:15.365137 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 8 20:00:15.367430 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 8 20:00:15.382127 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 8 20:00:15.384616 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 8 20:00:15.385053 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 8 20:00:15.385105 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 8 20:00:15.385133 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 20:00:15.397415 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 8 20:00:15.400424 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 8 20:00:15.402735 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (813) Oct 8 20:00:15.404987 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:00:15.405029 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:00:15.405044 kernel: BTRFS info (device vda6): using free space tree Oct 8 20:00:15.407978 kernel: BTRFS info (device vda6): auto enabling async discard Oct 8 20:00:15.410353 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 8 20:00:15.441747 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Oct 8 20:00:15.446161 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Oct 8 20:00:15.451678 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Oct 8 20:00:15.456996 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Oct 8 20:00:15.543051 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 8 20:00:15.553149 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 8 20:00:15.556173 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 8 20:00:15.563980 kernel: BTRFS info (device vda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:00:15.585148 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 8 20:00:15.587658 ignition[926]: INFO : Ignition 2.19.0 Oct 8 20:00:15.587658 ignition[926]: INFO : Stage: mount Oct 8 20:00:15.587658 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:00:15.587658 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 20:00:15.592908 ignition[926]: INFO : mount: mount passed Oct 8 20:00:15.592908 ignition[926]: INFO : Ignition finished successfully Oct 8 20:00:15.590297 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 8 20:00:15.598074 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 8 20:00:15.938554 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 8 20:00:15.951149 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 8 20:00:15.958816 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (940) Oct 8 20:00:15.958857 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:00:15.958869 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:00:15.960309 kernel: BTRFS info (device vda6): using free space tree Oct 8 20:00:15.962980 kernel: BTRFS info (device vda6): auto enabling async discard Oct 8 20:00:15.964494 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 8 20:00:15.986129 ignition[957]: INFO : Ignition 2.19.0 Oct 8 20:00:15.986129 ignition[957]: INFO : Stage: files Oct 8 20:00:15.988289 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:00:15.988289 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 20:00:15.988289 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Oct 8 20:00:15.988289 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 8 20:00:15.988289 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 8 20:00:15.995923 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 8 20:00:15.995923 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 8 20:00:15.995923 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 8 20:00:15.995923 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 8 20:00:15.995923 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Oct 8 20:00:15.991371 unknown[957]: wrote ssh authorized keys file for user: core Oct 8 20:00:16.077370 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 8 20:00:16.190501 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 8 20:00:16.190501 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 8 20:00:16.195047 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Oct 8 20:00:16.571666 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 8 20:00:16.699022 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 8 20:00:16.701238 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Oct 8 20:00:16.703279 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Oct 8 20:00:16.705031 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 8 20:00:16.706877 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 8 20:00:16.708629 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 20:00:16.710412 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 20:00:16.712300 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 20:00:16.714243 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 20:00:16.716301 ignition[957]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 20:00:16.718411 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 20:00:16.720381 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 8 20:00:16.723215 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 8 20:00:16.725897 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 8 20:00:16.728295 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Oct 8 20:00:17.032596 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Oct 8 20:00:17.073107 systemd-networkd[782]: eth0: Gained IPv6LL Oct 8 20:00:17.408665 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 8 20:00:17.408665 ignition[957]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Oct 8 20:00:17.412698 ignition[957]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 20:00:17.415051 ignition[957]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 20:00:17.415051 ignition[957]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Oct 8 20:00:17.415051 ignition[957]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Oct 8 20:00:17.415051 ignition[957]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 8 20:00:17.415051 ignition[957]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 8 20:00:17.415051 ignition[957]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Oct 8 20:00:17.415051 ignition[957]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Oct 8 20:00:17.450962 ignition[957]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 8 20:00:17.459250 ignition[957]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 8 20:00:17.461219 ignition[957]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Oct 8 20:00:17.461219 ignition[957]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Oct 8 20:00:17.461219 ignition[957]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Oct 8 20:00:17.461219 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 8 20:00:17.461219 ignition[957]: INFO : files: 
createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 8 20:00:17.461219 ignition[957]: INFO : files: files passed Oct 8 20:00:17.461219 ignition[957]: INFO : Ignition finished successfully Oct 8 20:00:17.464044 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 8 20:00:17.478303 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 8 20:00:17.481691 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 8 20:00:17.484673 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 8 20:00:17.484845 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 8 20:00:17.497833 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory Oct 8 20:00:17.502164 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:00:17.502164 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:00:17.505668 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:00:17.509505 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 20:00:17.512756 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 8 20:00:17.530325 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 8 20:00:17.561701 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 8 20:00:17.561843 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 8 20:00:17.564422 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 8 20:00:17.566614 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 8 20:00:17.568585 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 8 20:00:17.578118 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 8 20:00:17.592745 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 20:00:17.605119 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 8 20:00:17.615885 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 8 20:00:17.617174 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 20:00:17.619426 systemd[1]: Stopped target timers.target - Timer Units. Oct 8 20:00:17.621436 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 8 20:00:17.621545 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 20:00:17.623772 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 8 20:00:17.625494 systemd[1]: Stopped target basic.target - Basic System. Oct 8 20:00:17.627528 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 8 20:00:17.629628 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 20:00:17.631678 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 8 20:00:17.633830 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 8 20:00:17.636012 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Oct 8 20:00:17.638293 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 8 20:00:17.640291 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 8 20:00:17.642517 systemd[1]: Stopped target swap.target - Swaps. Oct 8 20:00:17.644432 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 8 20:00:17.644570 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 8 20:00:17.646686 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 8 20:00:17.648423 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 20:00:17.650575 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 8 20:00:17.650680 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 20:00:17.652852 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 8 20:00:17.652972 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 8 20:00:17.655206 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 8 20:00:17.655322 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 20:00:17.657330 systemd[1]: Stopped target paths.target - Path Units. Oct 8 20:00:17.659098 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 8 20:00:17.664015 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 20:00:17.666505 systemd[1]: Stopped target slices.target - Slice Units. Oct 8 20:00:17.668976 systemd[1]: Stopped target sockets.target - Socket Units. Oct 8 20:00:17.671240 systemd[1]: iscsid.socket: Deactivated successfully. Oct 8 20:00:17.671358 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 20:00:17.673822 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 8 20:00:17.673911 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 20:00:17.676757 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 8 20:00:17.676866 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 20:00:17.679342 systemd[1]: ignition-files.service: Deactivated successfully. Oct 8 20:00:17.679458 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 8 20:00:17.688106 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 8 20:00:17.691048 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 8 20:00:17.692522 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 8 20:00:17.701815 ignition[1011]: INFO : Ignition 2.19.0 Oct 8 20:00:17.701815 ignition[1011]: INFO : Stage: umount Oct 8 20:00:17.701815 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:00:17.701815 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 20:00:17.692732 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 20:00:17.708054 ignition[1011]: INFO : umount: umount passed Oct 8 20:00:17.708054 ignition[1011]: INFO : Ignition finished successfully Oct 8 20:00:17.695253 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 8 20:00:17.695408 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 8 20:00:17.709523 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 8 20:00:17.709641 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Oct 8 20:00:17.712687 systemd[1]: Stopped target network.target - Network. Oct 8 20:00:17.714234 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 8 20:00:17.714290 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 8 20:00:17.716630 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 8 20:00:17.716678 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 8 20:00:17.718892 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 8 20:00:17.718941 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 8 20:00:17.721092 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 8 20:00:17.721154 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 8 20:00:17.723397 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 8 20:00:17.726782 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 8 20:00:17.730644 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 8 20:00:17.731330 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 8 20:00:17.731460 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 8 20:00:17.733019 systemd-networkd[782]: eth0: DHCPv6 lease lost Oct 8 20:00:17.735809 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 8 20:00:17.735967 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 8 20:00:17.739360 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 8 20:00:17.739523 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 8 20:00:17.743975 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 8 20:00:17.744049 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 8 20:00:17.754313 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 8 20:00:17.755700 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 8 20:00:17.755807 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 20:00:17.758488 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 8 20:00:17.758556 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:00:17.761443 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 8 20:00:17.761554 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 8 20:00:17.765702 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 8 20:00:17.765801 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 8 20:00:17.768178 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 20:00:17.778670 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 8 20:00:17.778869 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 8 20:00:17.799034 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 8 20:00:17.799248 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 20:00:17.801713 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 8 20:00:17.801762 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 8 20:00:17.803865 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 8 20:00:17.803909 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Oct 8 20:00:17.805935 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 8 20:00:17.806024 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 8 20:00:17.808507 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 8 20:00:17.808571 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 8 20:00:17.810293 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 8 20:00:17.810364 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:00:17.821114 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 8 20:00:17.823559 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 8 20:00:17.823620 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 20:00:17.825908 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 8 20:00:17.825972 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 8 20:00:17.828459 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 8 20:00:17.828516 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 20:00:17.829797 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 20:00:17.829845 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:00:17.832412 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 8 20:00:17.832534 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 8 20:00:18.086847 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 8 20:00:18.087014 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 8 20:00:18.089199 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 8 20:00:18.090997 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 8 20:00:18.091064 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 8 20:00:18.100321 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 8 20:00:18.110573 systemd[1]: Switching root. Oct 8 20:00:18.146501 systemd-journald[192]: Journal stopped Oct 8 20:00:19.682774 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Oct 8 20:00:19.682843 kernel: SELinux: policy capability network_peer_controls=1 Oct 8 20:00:19.682865 kernel: SELinux: policy capability open_perms=1 Oct 8 20:00:19.682883 kernel: SELinux: policy capability extended_socket_class=1 Oct 8 20:00:19.682894 kernel: SELinux: policy capability always_check_network=0 Oct 8 20:00:19.682906 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 8 20:00:19.682917 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 8 20:00:19.682928 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 8 20:00:19.682940 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 8 20:00:19.682983 kernel: audit: type=1403 audit(1728417618.873:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 8 20:00:19.682997 systemd[1]: Successfully loaded SELinux policy in 49.563ms. Oct 8 20:00:19.683026 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.237ms. 
Oct 8 20:00:19.683038 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 20:00:19.683051 systemd[1]: Detected virtualization kvm. Oct 8 20:00:19.683062 systemd[1]: Detected architecture x86-64. Oct 8 20:00:19.683074 systemd[1]: Detected first boot. Oct 8 20:00:19.683086 systemd[1]: Initializing machine ID from VM UUID. Oct 8 20:00:19.683098 zram_generator::config[1055]: No configuration found. Oct 8 20:00:19.683111 systemd[1]: Populated /etc with preset unit settings. Oct 8 20:00:19.683126 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 8 20:00:19.683137 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 8 20:00:19.683154 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 8 20:00:19.683166 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 8 20:00:19.683178 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 8 20:00:19.683190 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 8 20:00:19.683202 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 8 20:00:19.683218 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 8 20:00:19.683238 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 8 20:00:19.683263 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 8 20:00:19.683277 systemd[1]: Created slice user.slice - User and Session Slice. Oct 8 20:00:19.683289 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 20:00:19.683302 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 20:00:19.683314 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 8 20:00:19.683326 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 8 20:00:19.683338 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 8 20:00:19.683350 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 8 20:00:19.683365 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 8 20:00:19.683377 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 20:00:19.683389 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 8 20:00:19.683401 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 8 20:00:19.683413 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 8 20:00:19.683425 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 8 20:00:19.683436 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 20:00:19.683448 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 20:00:19.683463 systemd[1]: Reached target slices.target - Slice Units. Oct 8 20:00:19.683475 systemd[1]: Reached target swap.target - Swaps. 
Oct 8 20:00:19.683486 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 8 20:00:19.683498 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 8 20:00:19.683510 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 8 20:00:19.683528 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 8 20:00:19.683539 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 20:00:19.683552 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 8 20:00:19.683564 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 8 20:00:19.684182 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 8 20:00:19.684205 systemd[1]: Mounting media.mount - External Media Directory... Oct 8 20:00:19.684218 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:00:19.684231 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 8 20:00:19.684252 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 8 20:00:19.684264 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 8 20:00:19.684278 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 8 20:00:19.684291 systemd[1]: Reached target machines.target - Containers. Oct 8 20:00:19.684302 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 8 20:00:19.684317 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 20:00:19.684330 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 8 20:00:19.684342 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 8 20:00:19.684354 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 20:00:19.684366 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 8 20:00:19.684378 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 20:00:19.684390 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 8 20:00:19.684403 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 20:00:19.684418 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 8 20:00:19.684430 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 8 20:00:19.684442 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 8 20:00:19.684454 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 8 20:00:19.684466 systemd[1]: Stopped systemd-fsck-usr.service. Oct 8 20:00:19.684478 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 8 20:00:19.684491 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 8 20:00:19.684503 kernel: fuse: init (API version 7.39) Oct 8 20:00:19.684517 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 8 20:00:19.684531 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Oct 8 20:00:19.684545 kernel: loop: module loaded Oct 8 20:00:19.684556 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 20:00:19.684569 systemd[1]: verity-setup.service: Deactivated successfully. Oct 8 20:00:19.684581 systemd[1]: Stopped verity-setup.service. Oct 8 20:00:19.684594 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:00:19.684606 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 8 20:00:19.684620 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 8 20:00:19.684666 systemd-journald[1118]: Collecting audit messages is disabled. Oct 8 20:00:19.684691 systemd[1]: Mounted media.mount - External Media Directory. Oct 8 20:00:19.684703 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 8 20:00:19.684716 systemd-journald[1118]: Journal started Oct 8 20:00:19.684741 systemd-journald[1118]: Runtime Journal (/run/log/journal/74b65fc2d2ea41a39d34f49493a9f52b) is 6.0M, max 48.3M, 42.2M free. Oct 8 20:00:19.432479 systemd[1]: Queued start job for default target multi-user.target. Oct 8 20:00:19.450807 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 8 20:00:19.451354 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 8 20:00:19.686973 systemd[1]: Started systemd-journald.service - Journal Service. Oct 8 20:00:19.688200 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 8 20:00:19.689662 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 8 20:00:19.691503 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 20:00:19.693492 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 8 20:00:19.693735 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 8 20:00:19.695744 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 20:00:19.695962 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 20:00:19.698106 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 20:00:19.698356 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 20:00:19.700557 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 8 20:00:19.700766 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 8 20:00:19.702442 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 20:00:19.702648 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 20:00:19.704495 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 8 20:00:19.706410 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 8 20:00:19.708361 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 8 20:00:19.725911 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 8 20:00:19.726991 kernel: ACPI: bus type drm_connector registered Oct 8 20:00:19.735125 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 8 20:00:19.738472 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 8 20:00:19.754002 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Oct 8 20:00:19.754053 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 8 20:00:19.756295 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Oct 8 20:00:19.758904 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 8 20:00:19.761412 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 8 20:00:19.762850 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 20:00:19.766172 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 8 20:00:19.771549 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 8 20:00:19.774091 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 8 20:00:19.776206 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 8 20:00:19.782847 systemd-journald[1118]: Time spent on flushing to /var/log/journal/74b65fc2d2ea41a39d34f49493a9f52b is 21.070ms for 990 entries. Oct 8 20:00:19.782847 systemd-journald[1118]: System Journal (/var/log/journal/74b65fc2d2ea41a39d34f49493a9f52b) is 8.0M, max 195.6M, 187.6M free. Oct 8 20:00:20.030801 systemd-journald[1118]: Received client request to flush runtime journal. Oct 8 20:00:20.030855 kernel: loop0: detected capacity change from 0 to 211296 Oct 8 20:00:20.030872 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 8 20:00:20.030892 kernel: loop1: detected capacity change from 0 to 140768 Oct 8 20:00:20.030915 kernel: loop2: detected capacity change from 0 to 142488 Oct 8 20:00:19.780180 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 8 20:00:19.784167 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 20:00:19.789216 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 8 20:00:19.793273 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 8 20:00:19.796747 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 8 20:00:19.798138 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 8 20:00:19.800091 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 20:00:19.801923 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 8 20:00:19.805328 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 8 20:00:19.807652 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 8 20:00:19.827303 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 8 20:00:19.849788 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 8 20:00:19.886780 systemd-tmpfiles[1155]: ACLs are not supported, ignoring. Oct 8 20:00:19.886794 systemd-tmpfiles[1155]: ACLs are not supported, ignoring. Oct 8 20:00:19.887131 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:00:19.898812 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Oct 8 20:00:20.008889 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 8 20:00:20.010587 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 8 20:00:20.021168 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Oct 8 20:00:20.033578 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 8 20:00:20.036870 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 8 20:00:20.047376 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 8 20:00:20.053982 kernel: loop3: detected capacity change from 0 to 211296 Oct 8 20:00:20.057646 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 8 20:00:20.058249 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Oct 8 20:00:20.068813 kernel: loop4: detected capacity change from 0 to 140768 Oct 8 20:00:20.076854 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 8 20:00:20.081983 kernel: loop5: detected capacity change from 0 to 142488 Oct 8 20:00:20.087195 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 8 20:00:20.093608 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Oct 8 20:00:20.094194 (sd-merge)[1192]: Merged extensions into '/usr'. Oct 8 20:00:20.099052 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... Oct 8 20:00:20.099064 systemd[1]: Reloading... Oct 8 20:00:20.109940 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Oct 8 20:00:20.109990 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Oct 8 20:00:20.161109 zram_generator::config[1224]: No configuration found. Oct 8 20:00:20.243074 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 8 20:00:20.293858 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:00:20.344892 systemd[1]: Reloading finished in 245 ms. Oct 8 20:00:20.378459 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 8 20:00:20.380065 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 8 20:00:20.381772 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 20:00:20.402313 systemd[1]: Starting ensure-sysext.service... Oct 8 20:00:20.404891 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 8 20:00:20.409914 systemd[1]: Reloading requested from client PID 1262 ('systemctl') (unit ensure-sysext.service)... Oct 8 20:00:20.409937 systemd[1]: Reloading... Oct 8 20:00:20.429139 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 8 20:00:20.429469 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 8 20:00:20.430416 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 8 20:00:20.430744 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Oct 8 20:00:20.430827 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. 
Oct 8 20:00:20.434504 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. Oct 8 20:00:20.434517 systemd-tmpfiles[1263]: Skipping /boot Oct 8 20:00:20.446702 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. Oct 8 20:00:20.446721 systemd-tmpfiles[1263]: Skipping /boot Oct 8 20:00:20.481983 zram_generator::config[1295]: No configuration found. Oct 8 20:00:20.590520 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:00:20.650592 systemd[1]: Reloading finished in 240 ms. Oct 8 20:00:20.671424 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 8 20:00:20.694531 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 8 20:00:20.701558 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 20:00:20.704059 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 8 20:00:20.706386 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 8 20:00:20.710116 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 8 20:00:20.714326 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 20:00:20.718412 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 8 20:00:20.724679 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:00:20.724939 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 20:00:20.727327 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 20:00:20.730208 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 20:00:20.735205 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 20:00:20.736802 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 20:00:20.742230 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 8 20:00:20.743773 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:00:20.744862 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 20:00:20.745455 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 20:00:20.747716 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 20:00:20.748048 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 20:00:20.753812 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 8 20:00:20.755965 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 20:00:20.756169 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 20:00:20.757120 systemd-udevd[1334]: Using default interface naming scheme 'v255'. Oct 8 20:00:20.766358 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Oct 8 20:00:20.766570 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 20:00:20.771506 augenrules[1357]: No rules Oct 8 20:00:20.777167 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 20:00:20.780248 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 20:00:20.787443 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 20:00:20.788611 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 20:00:20.792732 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 8 20:00:20.794064 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:00:20.795088 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 20:00:20.797532 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 8 20:00:20.799367 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 20:00:20.801033 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 8 20:00:20.803673 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 20:00:20.803849 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 20:00:20.805509 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 20:00:20.805699 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 20:00:20.807503 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 20:00:20.807739 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 20:00:20.812402 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 8 20:00:20.815943 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 8 20:00:20.831361 systemd[1]: Finished ensure-sysext.service. Oct 8 20:00:20.835475 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:00:20.835630 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 20:00:20.843134 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 20:00:20.847037 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 8 20:00:20.850195 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 20:00:20.853188 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 20:00:20.854469 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 20:00:20.856281 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 8 20:00:20.860862 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 8 20:00:20.862140 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Oct 8 20:00:20.862168 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:00:20.862727 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 20:00:20.862917 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 20:00:20.868142 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 20:00:20.868475 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 20:00:20.870406 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 8 20:00:20.870627 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 8 20:00:20.876388 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 8 20:00:20.877669 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 8 20:00:20.893151 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1366) Oct 8 20:00:20.896371 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1366) Oct 8 20:00:20.898971 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1376) Oct 8 20:00:20.906325 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 20:00:20.906516 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 20:00:20.909596 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 8 20:00:20.936053 systemd-resolved[1332]: Positive Trust Anchors: Oct 8 20:00:20.936070 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 8 20:00:20.936102 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 8 20:00:20.943673 systemd-resolved[1332]: Defaulting to hostname 'linux'. Oct 8 20:00:20.947474 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 8 20:00:20.948942 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 8 20:00:20.961685 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 8 20:00:20.964967 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 8 20:00:20.968332 systemd-networkd[1402]: lo: Link UP Oct 8 20:00:20.968343 systemd-networkd[1402]: lo: Gained carrier Oct 8 20:00:20.969137 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 8 20:00:20.970041 systemd-networkd[1402]: Enumeration completed Oct 8 20:00:20.970451 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Oct 8 20:00:20.970456 systemd-networkd[1402]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 20:00:20.971316 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 8 20:00:20.972842 systemd[1]: Reached target network.target - Network. Oct 8 20:00:20.973886 systemd-networkd[1402]: eth0: Link UP Oct 8 20:00:20.974303 systemd-networkd[1402]: eth0: Gained carrier Oct 8 20:00:20.974364 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:00:20.975963 kernel: ACPI: button: Power Button [PWRF] Oct 8 20:00:20.976143 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 8 20:00:20.980431 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Oct 8 20:00:20.980690 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 8 20:00:20.980851 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Oct 8 20:00:20.981070 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 8 20:00:20.987059 systemd-networkd[1402]: eth0: DHCPv4 address 10.0.0.96/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 8 20:00:20.993520 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 8 20:00:21.766367 systemd-timesyncd[1403]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 8 20:00:21.766425 systemd-timesyncd[1403]: Initial clock synchronization to Tue 2024-10-08 20:00:21.765627 UTC. Oct 8 20:00:21.766977 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 8 20:00:21.767087 systemd-resolved[1332]: Clock change detected. Flushing caches. Oct 8 20:00:21.772916 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Oct 8 20:00:21.774118 systemd[1]: Reached target time-set.target - System Time Set. Oct 8 20:00:21.783901 kernel: mousedev: PS/2 mouse device common for all mice Oct 8 20:00:21.788187 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:00:21.859191 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:00:21.883981 kernel: kvm_amd: TSC scaling supported Oct 8 20:00:21.884026 kernel: kvm_amd: Nested Virtualization enabled Oct 8 20:00:21.884039 kernel: kvm_amd: Nested Paging enabled Oct 8 20:00:21.884066 kernel: kvm_amd: LBR virtualization supported Oct 8 20:00:21.885080 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 8 20:00:21.885105 kernel: kvm_amd: Virtual GIF supported Oct 8 20:00:21.912901 kernel: EDAC MC: Ver: 3.0.0 Oct 8 20:00:21.941451 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 8 20:00:21.952081 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 8 20:00:21.961674 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 8 20:00:21.995303 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 8 20:00:21.996930 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 8 20:00:21.998119 systemd[1]: Reached target sysinit.target - System Initialization. Oct 8 20:00:21.999342 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Oct 8 20:00:22.000643 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 8 20:00:22.002144 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 8 20:00:22.003573 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 8 20:00:22.008563 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 8 20:00:22.009843 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 8 20:00:22.009874 systemd[1]: Reached target paths.target - Path Units. Oct 8 20:00:22.010831 systemd[1]: Reached target timers.target - Timer Units. Oct 8 20:00:22.012427 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 8 20:00:22.015238 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 8 20:00:22.024917 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 8 20:00:22.027461 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 8 20:00:22.029173 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 8 20:00:22.030358 systemd[1]: Reached target sockets.target - Socket Units. Oct 8 20:00:22.031351 systemd[1]: Reached target basic.target - Basic System. Oct 8 20:00:22.048108 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 8 20:00:22.048167 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 8 20:00:22.049864 systemd[1]: Starting containerd.service - containerd container runtime... Oct 8 20:00:22.052322 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 8 20:00:22.056332 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 8 20:00:22.060541 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 8 20:00:22.063712 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 8 20:00:22.065200 jq[1441]: false Oct 8 20:00:22.064980 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 8 20:00:22.067613 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 8 20:00:22.072516 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 8 20:00:22.087143 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 8 20:00:22.092327 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 8 20:00:22.099966 systemd[1]: Starting systemd-logind.service - User Login Management... 
Oct 8 20:00:22.100203 extend-filesystems[1442]: Found loop3 Oct 8 20:00:22.100203 extend-filesystems[1442]: Found loop4 Oct 8 20:00:22.103649 extend-filesystems[1442]: Found loop5 Oct 8 20:00:22.103649 extend-filesystems[1442]: Found sr0 Oct 8 20:00:22.103649 extend-filesystems[1442]: Found vda Oct 8 20:00:22.103649 extend-filesystems[1442]: Found vda1 Oct 8 20:00:22.103649 extend-filesystems[1442]: Found vda2 Oct 8 20:00:22.103649 extend-filesystems[1442]: Found vda3 Oct 8 20:00:22.103649 extend-filesystems[1442]: Found usr Oct 8 20:00:22.103649 extend-filesystems[1442]: Found vda4 Oct 8 20:00:22.103649 extend-filesystems[1442]: Found vda6 Oct 8 20:00:22.103649 extend-filesystems[1442]: Found vda7 Oct 8 20:00:22.103649 extend-filesystems[1442]: Found vda9 Oct 8 20:00:22.103649 extend-filesystems[1442]: Checking size of /dev/vda9 Oct 8 20:00:22.102402 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 8 20:00:22.111659 dbus-daemon[1440]: [system] SELinux support is enabled Oct 8 20:00:22.123686 extend-filesystems[1442]: Resized partition /dev/vda9 Oct 8 20:00:22.103653 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 8 20:00:22.112670 systemd[1]: Starting update-engine.service - Update Engine... Oct 8 20:00:22.118215 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 8 20:00:22.125004 extend-filesystems[1463]: resize2fs 1.47.1 (20-May-2024) Oct 8 20:00:22.120496 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 8 20:00:22.129180 update_engine[1456]: I20241008 20:00:22.127403 1456 main.cc:92] Flatcar Update Engine starting Oct 8 20:00:22.129180 update_engine[1456]: I20241008 20:00:22.128942 1456 update_check_scheduler.cc:74] Next update check in 9m35s Oct 8 20:00:22.133217 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 8 20:00:22.127642 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 8 20:00:22.138906 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1376) Oct 8 20:00:22.134686 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 8 20:00:22.134957 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 8 20:00:22.135337 systemd[1]: motdgen.service: Deactivated successfully. Oct 8 20:00:22.135859 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 8 20:00:22.139137 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 8 20:00:22.139389 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 8 20:00:22.140870 jq[1462]: true Oct 8 20:00:22.170927 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 8 20:00:22.178329 systemd[1]: Started update-engine.service - Update Engine. Oct 8 20:00:22.183445 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 8 20:00:22.195219 jq[1467]: true Oct 8 20:00:22.183473 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
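extend-filesystems.service is growing the ext4 root (/dev/vda9) online here, from 553472 to 1864699 4k blocks with resize2fs 1.47.1; the completion messages follow below. A hedged sketch of the equivalent manual operation on a mounted ext4 filesystem:

lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/vda   # confirm vda9 carries the root filesystem
sudo resize2fs /dev/vda9                        # online grow to fill the enlarged partition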
Oct 8 20:00:22.185156 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 8 20:00:22.185179 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 8 20:00:22.189603 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 8 20:00:22.192108 (ntainerd)[1475]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 8 20:00:22.197675 extend-filesystems[1463]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 8 20:00:22.197675 extend-filesystems[1463]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 8 20:00:22.197675 extend-filesystems[1463]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 8 20:00:22.203591 extend-filesystems[1442]: Resized filesystem in /dev/vda9 Oct 8 20:00:22.199998 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 8 20:00:22.204765 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 8 20:00:22.200328 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 8 20:00:22.209187 systemd-logind[1453]: Watching system buttons on /dev/input/event1 (Power Button) Oct 8 20:00:22.209219 systemd-logind[1453]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 8 20:00:22.214424 tar[1465]: linux-amd64/helm Oct 8 20:00:22.215951 systemd-logind[1453]: New seat seat0. Oct 8 20:00:22.222292 systemd[1]: Started systemd-logind.service - User Login Management. Oct 8 20:00:22.231159 locksmithd[1476]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 8 20:00:22.245369 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 8 20:00:22.250464 bash[1507]: Updated "/home/core/.ssh/authorized_keys" Oct 8 20:00:22.257281 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 8 20:00:22.258964 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 8 20:00:22.262451 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 8 20:00:22.267285 systemd[1]: issuegen.service: Deactivated successfully. Oct 8 20:00:22.267590 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 8 20:00:22.277246 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 8 20:00:22.289602 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 8 20:00:22.301274 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 8 20:00:22.304479 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 8 20:00:22.306718 systemd[1]: Reached target getty.target - Login Prompts. Oct 8 20:00:22.419134 containerd[1475]: time="2024-10-08T20:00:22.418982823Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Oct 8 20:00:22.444818 containerd[1475]: time="2024-10-08T20:00:22.444690624Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:00:22.446846 containerd[1475]: time="2024-10-08T20:00:22.446786325Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:00:22.446846 containerd[1475]: time="2024-10-08T20:00:22.446825809Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 8 20:00:22.446846 containerd[1475]: time="2024-10-08T20:00:22.446844273Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 8 20:00:22.447112 containerd[1475]: time="2024-10-08T20:00:22.447078162Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 8 20:00:22.447112 containerd[1475]: time="2024-10-08T20:00:22.447106585Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 8 20:00:22.447218 containerd[1475]: time="2024-10-08T20:00:22.447195963Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:00:22.447218 containerd[1475]: time="2024-10-08T20:00:22.447214818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:00:22.447510 containerd[1475]: time="2024-10-08T20:00:22.447477761Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:00:22.447510 containerd[1475]: time="2024-10-08T20:00:22.447503109Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 8 20:00:22.447549 containerd[1475]: time="2024-10-08T20:00:22.447519950Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:00:22.447549 containerd[1475]: time="2024-10-08T20:00:22.447533165Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 8 20:00:22.447673 containerd[1475]: time="2024-10-08T20:00:22.447645306Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:00:22.447979 containerd[1475]: time="2024-10-08T20:00:22.447951430Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:00:22.448149 containerd[1475]: time="2024-10-08T20:00:22.448109356Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:00:22.448149 containerd[1475]: time="2024-10-08T20:00:22.448143009Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 8 20:00:22.448278 containerd[1475]: time="2024-10-08T20:00:22.448258836Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Oct 8 20:00:22.448349 containerd[1475]: time="2024-10-08T20:00:22.448331492Z" level=info msg="metadata content store policy set" policy=shared Oct 8 20:00:22.453424 containerd[1475]: time="2024-10-08T20:00:22.453373859Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 8 20:00:22.453424 containerd[1475]: time="2024-10-08T20:00:22.453428281Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 8 20:00:22.453547 containerd[1475]: time="2024-10-08T20:00:22.453448248Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 8 20:00:22.453547 containerd[1475]: time="2024-10-08T20:00:22.453469929Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 8 20:00:22.453547 containerd[1475]: time="2024-10-08T20:00:22.453487592Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 8 20:00:22.453693 containerd[1475]: time="2024-10-08T20:00:22.453661258Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 8 20:00:22.454014 containerd[1475]: time="2024-10-08T20:00:22.453982119Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 8 20:00:22.454156 containerd[1475]: time="2024-10-08T20:00:22.454125438Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 8 20:00:22.454156 containerd[1475]: time="2024-10-08T20:00:22.454151627Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 8 20:00:22.454227 containerd[1475]: time="2024-10-08T20:00:22.454170603Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 8 20:00:22.454227 containerd[1475]: time="2024-10-08T20:00:22.454188386Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 8 20:00:22.454227 containerd[1475]: time="2024-10-08T20:00:22.454204446Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 8 20:00:22.454227 containerd[1475]: time="2024-10-08T20:00:22.454218953Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 8 20:00:22.454319 containerd[1475]: time="2024-10-08T20:00:22.454234964Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 8 20:00:22.454319 containerd[1475]: time="2024-10-08T20:00:22.454252877Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 8 20:00:22.454319 containerd[1475]: time="2024-10-08T20:00:22.454268136Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 8 20:00:22.454319 containerd[1475]: time="2024-10-08T20:00:22.454282943Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 8 20:00:22.454319 containerd[1475]: time="2024-10-08T20:00:22.454296809Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Oct 8 20:00:22.454319 containerd[1475]: time="2024-10-08T20:00:22.454319252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 8 20:00:22.454466 containerd[1475]: time="2024-10-08T20:00:22.454336464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 8 20:00:22.454466 containerd[1475]: time="2024-10-08T20:00:22.454351712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 8 20:00:22.454466 containerd[1475]: time="2024-10-08T20:00:22.454367923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 8 20:00:22.454466 containerd[1475]: time="2024-10-08T20:00:22.454384183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 8 20:00:22.454466 containerd[1475]: time="2024-10-08T20:00:22.454401846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 8 20:00:22.454466 containerd[1475]: time="2024-10-08T20:00:22.454417446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 8 20:00:22.454466 containerd[1475]: time="2024-10-08T20:00:22.454432925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 8 20:00:22.454466 containerd[1475]: time="2024-10-08T20:00:22.454448334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 8 20:00:22.454660 containerd[1475]: time="2024-10-08T20:00:22.454475675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 8 20:00:22.454660 containerd[1475]: time="2024-10-08T20:00:22.454491024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 8 20:00:22.454660 containerd[1475]: time="2024-10-08T20:00:22.454507064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 8 20:00:22.454660 containerd[1475]: time="2024-10-08T20:00:22.454522202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 8 20:00:22.454660 containerd[1475]: time="2024-10-08T20:00:22.454540306Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 8 20:00:22.454660 containerd[1475]: time="2024-10-08T20:00:22.454562668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 8 20:00:22.454660 containerd[1475]: time="2024-10-08T20:00:22.454577656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 8 20:00:22.454660 containerd[1475]: time="2024-10-08T20:00:22.454591592Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 8 20:00:22.454660 containerd[1475]: time="2024-10-08T20:00:22.454643890Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 8 20:00:22.454660 containerd[1475]: time="2024-10-08T20:00:22.454664699Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 8 20:00:22.454905 containerd[1475]: time="2024-10-08T20:00:22.454678916Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 8 20:00:22.454905 containerd[1475]: time="2024-10-08T20:00:22.454696429Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 8 20:00:22.454905 containerd[1475]: time="2024-10-08T20:00:22.454710085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 8 20:00:22.454905 containerd[1475]: time="2024-10-08T20:00:22.454731535Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 8 20:00:22.454905 containerd[1475]: time="2024-10-08T20:00:22.454749759Z" level=info msg="NRI interface is disabled by configuration." Oct 8 20:00:22.454905 containerd[1475]: time="2024-10-08T20:00:22.454762974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 8 20:00:22.455185 containerd[1475]: time="2024-10-08T20:00:22.455097551Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 8 20:00:22.455185 containerd[1475]: time="2024-10-08T20:00:22.455183482Z" level=info msg="Connect containerd service" Oct 8 20:00:22.455384 containerd[1475]: time="2024-10-08T20:00:22.455245569Z" level=info msg="using legacy CRI server" Oct 8 20:00:22.455384 containerd[1475]: time="2024-10-08T20:00:22.455256048Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 8 20:00:22.455444 containerd[1475]: time="2024-10-08T20:00:22.455397594Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 8 20:00:22.456284 containerd[1475]: time="2024-10-08T20:00:22.456243370Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 20:00:22.456508 containerd[1475]: time="2024-10-08T20:00:22.456441992Z" level=info msg="Start subscribing containerd event" Oct 8 20:00:22.458341 containerd[1475]: time="2024-10-08T20:00:22.456995460Z" level=info msg="Start recovering state" Oct 8 20:00:22.458341 containerd[1475]: time="2024-10-08T20:00:22.457108022Z" level=info msg="Start event monitor" Oct 8 20:00:22.458341 containerd[1475]: time="2024-10-08T20:00:22.457137096Z" level=info msg="Start snapshots syncer" Oct 8 20:00:22.458341 containerd[1475]: time="2024-10-08T20:00:22.457154569Z" level=info msg="Start cni network conf syncer for default" Oct 8 20:00:22.458341 containerd[1475]: time="2024-10-08T20:00:22.457173865Z" level=info msg="Start streaming server" Oct 8 20:00:22.458341 containerd[1475]: time="2024-10-08T20:00:22.456948492Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 8 20:00:22.458341 containerd[1475]: time="2024-10-08T20:00:22.457355856Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 8 20:00:22.458341 containerd[1475]: time="2024-10-08T20:00:22.457425677Z" level=info msg="containerd successfully booted in 0.040335s" Oct 8 20:00:22.457554 systemd[1]: Started containerd.service - containerd container runtime. Oct 8 20:00:22.583180 tar[1465]: linux-amd64/LICENSE Oct 8 20:00:22.583290 tar[1465]: linux-amd64/README.md Oct 8 20:00:22.600443 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 8 20:00:22.631594 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 8 20:00:22.634158 systemd[1]: Started sshd@0-10.0.0.96:22-10.0.0.1:34852.service - OpenSSH per-connection server daemon (10.0.0.1:34852). Oct 8 20:00:22.686870 sshd[1532]: Accepted publickey for core from 10.0.0.1 port 34852 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:00:22.688841 sshd[1532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:22.698788 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 8 20:00:22.715361 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 8 20:00:22.719156 systemd-logind[1453]: New session 1 of user core. Oct 8 20:00:22.728274 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
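containerd comes up with the CRI plugin but logs "no network config found in /etc/cni/net.d", so pod networking is not functional yet; on a kubeadm-style node this usually resolves itself once a CNI add-on DaemonSet installs its config there. Purely to illustrate the file format containerd is waiting for, a hypothetical minimal bridge conflist (the name and subnet are made up, not taken from this host):

sudo mkdir -p /etc/cni/net.d
sudo tee /etc/cni/net.d/10-example.conflist <<'EOF' >/dev/null
{
  "cniVersion": "0.4.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF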
Oct 8 20:00:22.742234 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 8 20:00:22.746976 (systemd)[1536]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 8 20:00:22.864790 systemd[1536]: Queued start job for default target default.target. Oct 8 20:00:22.874398 systemd[1536]: Created slice app.slice - User Application Slice. Oct 8 20:00:22.874429 systemd[1536]: Reached target paths.target - Paths. Oct 8 20:00:22.874443 systemd[1536]: Reached target timers.target - Timers. Oct 8 20:00:22.876178 systemd[1536]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 8 20:00:22.891077 systemd[1536]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 8 20:00:22.891280 systemd[1536]: Reached target sockets.target - Sockets. Oct 8 20:00:22.891306 systemd[1536]: Reached target basic.target - Basic System. Oct 8 20:00:22.891354 systemd[1536]: Reached target default.target - Main User Target. Oct 8 20:00:22.891396 systemd[1536]: Startup finished in 136ms. Oct 8 20:00:22.891717 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 8 20:00:22.894610 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 8 20:00:22.928062 systemd-networkd[1402]: eth0: Gained IPv6LL Oct 8 20:00:22.931941 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 8 20:00:22.933946 systemd[1]: Reached target network-online.target - Network is Online. Oct 8 20:00:22.946428 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 8 20:00:22.949443 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:00:22.952217 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 8 20:00:22.970220 systemd[1]: Started sshd@1-10.0.0.96:22-10.0.0.1:53416.service - OpenSSH per-connection server daemon (10.0.0.1:53416). Oct 8 20:00:22.991943 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 8 20:00:22.994035 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 8 20:00:22.994288 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 8 20:00:22.998403 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 8 20:00:23.013842 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 53416 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:00:23.015481 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:23.020093 systemd-logind[1453]: New session 2 of user core. Oct 8 20:00:23.030028 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 8 20:00:23.086236 sshd[1556]: pam_unix(sshd:session): session closed for user core Oct 8 20:00:23.097122 systemd[1]: sshd@1-10.0.0.96:22-10.0.0.1:53416.service: Deactivated successfully. Oct 8 20:00:23.098775 systemd[1]: session-2.scope: Deactivated successfully. Oct 8 20:00:23.100307 systemd-logind[1453]: Session 2 logged out. Waiting for processes to exit. Oct 8 20:00:23.105175 systemd[1]: Started sshd@2-10.0.0.96:22-10.0.0.1:53418.service - OpenSSH per-connection server daemon (10.0.0.1:53418). Oct 8 20:00:23.107431 systemd-logind[1453]: Removed session 2. 
Oct 8 20:00:23.141547 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 53418 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:00:23.143349 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:23.147532 systemd-logind[1453]: New session 3 of user core. Oct 8 20:00:23.162014 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 8 20:00:23.215952 sshd[1571]: pam_unix(sshd:session): session closed for user core Oct 8 20:00:23.219421 systemd[1]: sshd@2-10.0.0.96:22-10.0.0.1:53418.service: Deactivated successfully. Oct 8 20:00:23.221228 systemd[1]: session-3.scope: Deactivated successfully. Oct 8 20:00:23.221919 systemd-logind[1453]: Session 3 logged out. Waiting for processes to exit. Oct 8 20:00:23.222763 systemd-logind[1453]: Removed session 3. Oct 8 20:00:23.591751 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:00:23.593327 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 8 20:00:23.595101 systemd[1]: Startup finished in 707ms (kernel) + 6.157s (initrd) + 4.033s (userspace) = 10.898s. Oct 8 20:00:23.598431 (kubelet)[1582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:00:24.084766 kubelet[1582]: E1008 20:00:24.084605 1582 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:00:24.089458 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:00:24.089680 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:00:24.090025 systemd[1]: kubelet.service: Consumed 1.006s CPU time. Oct 8 20:00:33.227380 systemd[1]: Started sshd@3-10.0.0.96:22-10.0.0.1:34028.service - OpenSSH per-connection server daemon (10.0.0.1:34028). Oct 8 20:00:33.269666 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 34028 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:00:33.271566 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:33.275958 systemd-logind[1453]: New session 4 of user core. Oct 8 20:00:33.290027 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 8 20:00:33.347105 sshd[1596]: pam_unix(sshd:session): session closed for user core Oct 8 20:00:33.365782 systemd[1]: sshd@3-10.0.0.96:22-10.0.0.1:34028.service: Deactivated successfully. Oct 8 20:00:33.367654 systemd[1]: session-4.scope: Deactivated successfully. Oct 8 20:00:33.369077 systemd-logind[1453]: Session 4 logged out. Waiting for processes to exit. Oct 8 20:00:33.376141 systemd[1]: Started sshd@4-10.0.0.96:22-10.0.0.1:34042.service - OpenSSH per-connection server daemon (10.0.0.1:34042). Oct 8 20:00:33.377010 systemd-logind[1453]: Removed session 4. Oct 8 20:00:33.414322 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 34042 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:00:33.415963 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:33.419736 systemd-logind[1453]: New session 5 of user core. Oct 8 20:00:33.434991 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 8 20:00:33.485643 sshd[1603]: pam_unix(sshd:session): session closed for user core Oct 8 20:00:33.498230 systemd[1]: sshd@4-10.0.0.96:22-10.0.0.1:34042.service: Deactivated successfully. Oct 8 20:00:33.500171 systemd[1]: session-5.scope: Deactivated successfully. Oct 8 20:00:33.502096 systemd-logind[1453]: Session 5 logged out. Waiting for processes to exit. Oct 8 20:00:33.519294 systemd[1]: Started sshd@5-10.0.0.96:22-10.0.0.1:34052.service - OpenSSH per-connection server daemon (10.0.0.1:34052). Oct 8 20:00:33.520310 systemd-logind[1453]: Removed session 5. Oct 8 20:00:33.555489 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 34052 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:00:33.557219 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:33.561462 systemd-logind[1453]: New session 6 of user core. Oct 8 20:00:33.576072 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 8 20:00:33.630158 sshd[1610]: pam_unix(sshd:session): session closed for user core Oct 8 20:00:33.637551 systemd[1]: sshd@5-10.0.0.96:22-10.0.0.1:34052.service: Deactivated successfully. Oct 8 20:00:33.639227 systemd[1]: session-6.scope: Deactivated successfully. Oct 8 20:00:33.640953 systemd-logind[1453]: Session 6 logged out. Waiting for processes to exit. Oct 8 20:00:33.659185 systemd[1]: Started sshd@6-10.0.0.96:22-10.0.0.1:34058.service - OpenSSH per-connection server daemon (10.0.0.1:34058). Oct 8 20:00:33.660171 systemd-logind[1453]: Removed session 6. Oct 8 20:00:33.693358 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 34058 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:00:33.695024 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:33.699617 systemd-logind[1453]: New session 7 of user core. Oct 8 20:00:33.709020 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 8 20:00:33.768761 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 8 20:00:33.769331 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:00:33.797432 sudo[1620]: pam_unix(sudo:session): session closed for user root Oct 8 20:00:33.799366 sshd[1617]: pam_unix(sshd:session): session closed for user core Oct 8 20:00:33.814539 systemd[1]: sshd@6-10.0.0.96:22-10.0.0.1:34058.service: Deactivated successfully. Oct 8 20:00:33.817022 systemd[1]: session-7.scope: Deactivated successfully. Oct 8 20:00:33.819311 systemd-logind[1453]: Session 7 logged out. Waiting for processes to exit. Oct 8 20:00:33.821144 systemd[1]: Started sshd@7-10.0.0.96:22-10.0.0.1:34068.service - OpenSSH per-connection server daemon (10.0.0.1:34068). Oct 8 20:00:33.822044 systemd-logind[1453]: Removed session 7. Oct 8 20:00:33.868761 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 34068 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:00:33.870967 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:33.875564 systemd-logind[1453]: New session 8 of user core. Oct 8 20:00:33.898218 systemd[1]: Started session-8.scope - Session 8 of User core. 
Oct 8 20:00:33.953200 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 8 20:00:33.953523 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:00:33.957598 sudo[1629]: pam_unix(sudo:session): session closed for user root Oct 8 20:00:33.964333 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 8 20:00:33.964685 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:00:33.980184 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 8 20:00:33.981896 auditctl[1632]: No rules Oct 8 20:00:33.983162 systemd[1]: audit-rules.service: Deactivated successfully. Oct 8 20:00:33.983427 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 8 20:00:33.985492 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 20:00:34.021626 augenrules[1650]: No rules Oct 8 20:00:34.024097 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 20:00:34.025355 sudo[1628]: pam_unix(sudo:session): session closed for user root Oct 8 20:00:34.027299 sshd[1625]: pam_unix(sshd:session): session closed for user core Oct 8 20:00:34.040301 systemd[1]: sshd@7-10.0.0.96:22-10.0.0.1:34068.service: Deactivated successfully. Oct 8 20:00:34.042501 systemd[1]: session-8.scope: Deactivated successfully. Oct 8 20:00:34.044447 systemd-logind[1453]: Session 8 logged out. Waiting for processes to exit. Oct 8 20:00:34.053341 systemd[1]: Started sshd@8-10.0.0.96:22-10.0.0.1:34084.service - OpenSSH per-connection server daemon (10.0.0.1:34084). Oct 8 20:00:34.054442 systemd-logind[1453]: Removed session 8. Oct 8 20:00:34.088521 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 34084 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:00:34.090203 sshd[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:00:34.091223 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 8 20:00:34.101204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:00:34.103863 systemd-logind[1453]: New session 9 of user core. Oct 8 20:00:34.106678 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 8 20:00:34.160046 sudo[1664]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 8 20:00:34.160422 sudo[1664]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:00:34.264333 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:00:34.269775 (kubelet)[1679]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:00:34.322439 kubelet[1679]: E1008 20:00:34.322278 1679 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:00:34.329968 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:00:34.330201 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
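kubelet keeps restarting and exiting with "failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory"; that is the expected state on a node where kubeadm init or kubeadm join has not yet written the config. Only to show what the restart loop is waiting for, a hypothetical minimal KubeletConfiguration of the kind kubeadm generates (values are illustrative, not taken from this host, and on its own this does not bootstrap a node):

sudo mkdir -p /var/lib/kubelet
sudo tee /var/lib/kubelet/config.yaml <<'EOF' >/dev/null
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd            # matches SystemdCgroup=true in the containerd runc options above
staticPodPath: /etc/kubernetes/manifests
EOF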
Oct 8 20:00:34.496165 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 8 20:00:34.496434 (dockerd)[1698]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 8 20:00:34.792993 dockerd[1698]: time="2024-10-08T20:00:34.792789588Z" level=info msg="Starting up" Oct 8 20:00:35.196303 dockerd[1698]: time="2024-10-08T20:00:35.196180409Z" level=info msg="Loading containers: start." Oct 8 20:00:35.313910 kernel: Initializing XFRM netlink socket Oct 8 20:00:35.391008 systemd-networkd[1402]: docker0: Link UP Oct 8 20:00:35.418551 dockerd[1698]: time="2024-10-08T20:00:35.418513173Z" level=info msg="Loading containers: done." Oct 8 20:00:35.433818 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3594652544-merged.mount: Deactivated successfully. Oct 8 20:00:35.435927 dockerd[1698]: time="2024-10-08T20:00:35.435860505Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 8 20:00:35.436042 dockerd[1698]: time="2024-10-08T20:00:35.436006218Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Oct 8 20:00:35.436177 dockerd[1698]: time="2024-10-08T20:00:35.436148225Z" level=info msg="Daemon has completed initialization" Oct 8 20:00:35.475311 dockerd[1698]: time="2024-10-08T20:00:35.475105644Z" level=info msg="API listen on /run/docker.sock" Oct 8 20:00:35.475366 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 8 20:00:36.176266 containerd[1475]: time="2024-10-08T20:00:36.176188474Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\"" Oct 8 20:00:36.803971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1432987500.mount: Deactivated successfully. 
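dockerd starts with the overlay2 storage driver (noting the degraded-diff warning about CONFIG_OVERLAY_FS_REDIRECT_DIR) and begins serving the API on /run/docker.sock. A small hedged check that reads the same facts back from the running daemon:

docker info --format 'driver={{.Driver}} version={{.ServerVersion}}'   # expect overlay2 / 26.1.0
docker version --format '{{.Server.Version}}'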
Oct 8 20:00:37.932943 containerd[1475]: time="2024-10-08T20:00:37.932870250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:37.933787 containerd[1475]: time="2024-10-08T20:00:37.933712169Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=35213841" Oct 8 20:00:37.935220 containerd[1475]: time="2024-10-08T20:00:37.935179691Z" level=info msg="ImageCreate event name:\"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:37.938564 containerd[1475]: time="2024-10-08T20:00:37.938517280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:37.939685 containerd[1475]: time="2024-10-08T20:00:37.939634996Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"35210641\" in 1.763395827s" Oct 8 20:00:37.939731 containerd[1475]: time="2024-10-08T20:00:37.939694317Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\"" Oct 8 20:00:37.961202 containerd[1475]: time="2024-10-08T20:00:37.961156477Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\"" Oct 8 20:00:39.981944 containerd[1475]: time="2024-10-08T20:00:39.981850289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:39.982915 containerd[1475]: time="2024-10-08T20:00:39.982851737Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=32208673" Oct 8 20:00:39.984538 containerd[1475]: time="2024-10-08T20:00:39.984409638Z" level=info msg="ImageCreate event name:\"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:39.987838 containerd[1475]: time="2024-10-08T20:00:39.987770541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:39.989070 containerd[1475]: time="2024-10-08T20:00:39.989002682Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"33739229\" in 2.027797394s" Oct 8 20:00:39.989070 containerd[1475]: time="2024-10-08T20:00:39.989044531Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\"" Oct 8 20:00:40.014490 containerd[1475]: 
time="2024-10-08T20:00:40.014401193Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\"" Oct 8 20:00:41.733070 containerd[1475]: time="2024-10-08T20:00:41.732980202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:41.734629 containerd[1475]: time="2024-10-08T20:00:41.734562520Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=17320456" Oct 8 20:00:41.736673 containerd[1475]: time="2024-10-08T20:00:41.736596534Z" level=info msg="ImageCreate event name:\"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:41.741178 containerd[1475]: time="2024-10-08T20:00:41.741118875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:41.742816 containerd[1475]: time="2024-10-08T20:00:41.742323084Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"18851030\" in 1.727872288s" Oct 8 20:00:41.742816 containerd[1475]: time="2024-10-08T20:00:41.742663442Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\"" Oct 8 20:00:41.768791 containerd[1475]: time="2024-10-08T20:00:41.768745525Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\"" Oct 8 20:00:44.399139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount88786760.mount: Deactivated successfully. Oct 8 20:00:44.400223 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 8 20:00:44.411127 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:00:44.568457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:00:44.574579 (kubelet)[1944]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:00:44.806455 kubelet[1944]: E1008 20:00:44.806287 1944 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:00:44.811594 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:00:44.811819 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 8 20:00:45.898257 containerd[1475]: time="2024-10-08T20:00:45.898135435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:45.899441 containerd[1475]: time="2024-10-08T20:00:45.899405006Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=28601750" Oct 8 20:00:45.902918 containerd[1475]: time="2024-10-08T20:00:45.902895372Z" level=info msg="ImageCreate event name:\"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:45.905439 containerd[1475]: time="2024-10-08T20:00:45.905403135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:45.906275 containerd[1475]: time="2024-10-08T20:00:45.906199248Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"28600769\" in 4.137411724s" Oct 8 20:00:45.906275 containerd[1475]: time="2024-10-08T20:00:45.906258619Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\"" Oct 8 20:00:45.957347 containerd[1475]: time="2024-10-08T20:00:45.957283910Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 8 20:00:47.790969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount938681730.mount: Deactivated successfully. 
Oct 8 20:00:49.030945 containerd[1475]: time="2024-10-08T20:00:49.030892263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:49.031844 containerd[1475]: time="2024-10-08T20:00:49.031803010Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Oct 8 20:00:49.033639 containerd[1475]: time="2024-10-08T20:00:49.033581406Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:49.037069 containerd[1475]: time="2024-10-08T20:00:49.037025464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:49.038824 containerd[1475]: time="2024-10-08T20:00:49.038692360Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.0813547s" Oct 8 20:00:49.038824 containerd[1475]: time="2024-10-08T20:00:49.038806975Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Oct 8 20:00:49.071634 containerd[1475]: time="2024-10-08T20:00:49.071596628Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 8 20:00:49.623723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount857768564.mount: Deactivated successfully. 
Oct 8 20:00:49.631137 containerd[1475]: time="2024-10-08T20:00:49.631093995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:49.631936 containerd[1475]: time="2024-10-08T20:00:49.631899335Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Oct 8 20:00:49.633430 containerd[1475]: time="2024-10-08T20:00:49.633343423Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:49.635559 containerd[1475]: time="2024-10-08T20:00:49.635530285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:49.636436 containerd[1475]: time="2024-10-08T20:00:49.636401018Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 564.75619ms" Oct 8 20:00:49.636486 containerd[1475]: time="2024-10-08T20:00:49.636437746Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Oct 8 20:00:49.660712 containerd[1475]: time="2024-10-08T20:00:49.660631358Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Oct 8 20:00:52.004598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3944013593.mount: Deactivated successfully. Oct 8 20:00:54.366633 containerd[1475]: time="2024-10-08T20:00:54.366559424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:54.367286 containerd[1475]: time="2024-10-08T20:00:54.367194000Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Oct 8 20:00:54.368506 containerd[1475]: time="2024-10-08T20:00:54.368468632Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:54.371792 containerd[1475]: time="2024-10-08T20:00:54.371709023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:00:54.373181 containerd[1475]: time="2024-10-08T20:00:54.373129325Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.712449269s" Oct 8 20:00:54.373181 containerd[1475]: time="2024-10-08T20:00:54.373164506Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Oct 8 20:00:54.813970 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
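The images pulled above (kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy v1.29.9, coredns v1.11.1, pause 3.9, etcd 3.5.10-0) are the standard kubeadm control-plane set for this Kubernetes release. A hedged way to reproduce the same list, assuming kubeadm and crictl are available:

kubeadm config images list --kubernetes-version v1.29.9
sudo crictl images   # once the CRI socket is reachable, lists what containerd pulled above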
Oct 8 20:00:54.826090 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:00:54.972941 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:00:54.977491 (kubelet)[2105]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:00:55.073217 kubelet[2105]: E1008 20:00:55.073014 2105 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:00:55.078652 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:00:55.078865 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:00:56.658426 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:00:56.668239 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:00:56.685520 systemd[1]: Reloading requested from client PID 2167 ('systemctl') (unit session-9.scope)... Oct 8 20:00:56.685543 systemd[1]: Reloading... Oct 8 20:00:56.775987 zram_generator::config[2206]: No configuration found. Oct 8 20:00:58.327748 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:00:58.404091 systemd[1]: Reloading finished in 1718 ms. Oct 8 20:00:58.462828 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:00:58.467186 systemd[1]: kubelet.service: Deactivated successfully. Oct 8 20:00:58.467417 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:00:58.469039 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:00:58.608871 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:00:58.613160 (kubelet)[2256]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 20:00:58.654458 kubelet[2256]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:00:58.654458 kubelet[2256]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 20:00:58.654458 kubelet[2256]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
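During the daemon reload, systemd warns that docker.socket still listens on the legacy /var/run/docker.sock path and asks for the unit to be updated; the kubelet warnings in the same stretch likewise ask for --container-runtime-endpoint and --volume-plugin-dir to move into the kubelet config file. A sketch of a local drop-in that would address the socket warning (paths taken from the warning itself):

sudo mkdir -p /etc/systemd/system/docker.socket.d
sudo tee /etc/systemd/system/docker.socket.d/10-runtime-dir.conf <<'EOF' >/dev/null
[Socket]
ListenStream=
ListenStream=/run/docker.sock
EOF
sudo systemctl daemon-reload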
Oct 8 20:00:58.655853 kubelet[2256]: I1008 20:00:58.655793 2256 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 20:00:58.940969 kubelet[2256]: I1008 20:00:58.940804 2256 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 8 20:00:58.940969 kubelet[2256]: I1008 20:00:58.940847 2256 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 20:00:58.941335 kubelet[2256]: I1008 20:00:58.941150 2256 server.go:919] "Client rotation is on, will bootstrap in background" Oct 8 20:00:58.962169 kubelet[2256]: E1008 20:00:58.962122 2256 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.96:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.96:6443: connect: connection refused Oct 8 20:00:58.962966 kubelet[2256]: I1008 20:00:58.962931 2256 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 20:00:58.977165 kubelet[2256]: I1008 20:00:58.977118 2256 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 8 20:00:58.978721 kubelet[2256]: I1008 20:00:58.978694 2256 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 20:00:58.978969 kubelet[2256]: I1008 20:00:58.978945 2256 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 8 20:00:58.979411 kubelet[2256]: I1008 20:00:58.979388 2256 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 20:00:58.979411 kubelet[2256]: I1008 20:00:58.979409 2256 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 20:00:58.979586 kubelet[2256]: I1008 20:00:58.979562 2256 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:00:58.979710 kubelet[2256]: I1008 20:00:58.979688 2256 kubelet.go:396] "Attempting to sync node with API server" Oct 8 20:00:58.979710 kubelet[2256]: I1008 
20:00:58.979709 2256 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 20:00:58.979771 kubelet[2256]: I1008 20:00:58.979743 2256 kubelet.go:312] "Adding apiserver pod source" Oct 8 20:00:58.979771 kubelet[2256]: I1008 20:00:58.979758 2256 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 20:00:58.980544 kubelet[2256]: W1008 20:00:58.980346 2256 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Oct 8 20:00:58.980544 kubelet[2256]: E1008 20:00:58.980423 2256 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Oct 8 20:00:58.980544 kubelet[2256]: W1008 20:00:58.980494 2256 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.96:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Oct 8 20:00:58.980544 kubelet[2256]: E1008 20:00:58.980526 2256 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.96:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Oct 8 20:00:58.980956 kubelet[2256]: I1008 20:00:58.980931 2256 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 20:00:58.983779 kubelet[2256]: I1008 20:00:58.983751 2256 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 20:00:58.984844 kubelet[2256]: W1008 20:00:58.984819 2256 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
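Every reflector list/watch and the certificate bootstrap above fail with "dial tcp 10.0.0.96:6443: connect: connection refused": the kube-apiserver static pod has not been created yet, so nothing is listening on the endpoint the kubelet is configured to use. A small probe, assuming the same endpoint from the log, that classifies exactly that failure.

```go
// Sketch: dial the API server endpoint from the log and report "connection refused".
package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.0.0.96:6443", 2*time.Second)
	if err != nil {
		if errors.Is(err, syscall.ECONNREFUSED) {
			// The state this node is in: port closed because the apiserver pod is not up yet.
			fmt.Println("apiserver not listening yet:", err)
			return
		}
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```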
Oct 8 20:00:58.985781 kubelet[2256]: I1008 20:00:58.985576 2256 server.go:1256] "Started kubelet" Oct 8 20:00:58.985781 kubelet[2256]: I1008 20:00:58.985656 2256 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 20:00:58.985959 kubelet[2256]: I1008 20:00:58.985842 2256 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 20:00:58.986743 kubelet[2256]: I1008 20:00:58.986149 2256 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 20:00:58.986743 kubelet[2256]: I1008 20:00:58.986633 2256 server.go:461] "Adding debug handlers to kubelet server" Oct 8 20:00:58.987339 kubelet[2256]: I1008 20:00:58.987306 2256 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 20:00:58.990045 kubelet[2256]: E1008 20:00:58.989510 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:00:58.990045 kubelet[2256]: I1008 20:00:58.989545 2256 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 20:00:58.990045 kubelet[2256]: I1008 20:00:58.989607 2256 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 8 20:00:58.990045 kubelet[2256]: I1008 20:00:58.989665 2256 reconciler_new.go:29] "Reconciler: start to sync state" Oct 8 20:00:58.990045 kubelet[2256]: E1008 20:00:58.989934 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.96:6443: connect: connection refused" interval="200ms" Oct 8 20:00:58.990045 kubelet[2256]: W1008 20:00:58.989959 2256 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Oct 8 20:00:58.990045 kubelet[2256]: E1008 20:00:58.990002 2256 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Oct 8 20:00:58.991234 kubelet[2256]: E1008 20:00:58.991186 2256 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.96:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.96:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fc92b40b5888b6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 20:00:58.985547958 +0000 UTC m=+0.368271036,LastTimestamp:2024-10-08 20:00:58.985547958 +0000 UTC m=+0.368271036,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 8 20:00:58.991834 kubelet[2256]: E1008 20:00:58.991618 2256 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 20:00:58.991834 kubelet[2256]: I1008 20:00:58.991739 2256 factory.go:221] Registration of the containerd container factory successfully Oct 8 20:00:58.991834 kubelet[2256]: I1008 20:00:58.991748 2256 factory.go:221] Registration of the systemd container factory successfully Oct 8 20:00:58.992015 kubelet[2256]: I1008 20:00:58.991817 2256 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 20:00:59.008124 kubelet[2256]: I1008 20:00:59.007751 2256 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 20:00:59.008124 kubelet[2256]: I1008 20:00:59.007771 2256 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 20:00:59.008124 kubelet[2256]: I1008 20:00:59.007786 2256 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:00:59.012686 kubelet[2256]: I1008 20:00:59.012650 2256 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 20:00:59.014271 kubelet[2256]: I1008 20:00:59.013917 2256 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 8 20:00:59.014271 kubelet[2256]: I1008 20:00:59.013954 2256 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 20:00:59.014271 kubelet[2256]: I1008 20:00:59.013976 2256 kubelet.go:2329] "Starting kubelet main sync loop" Oct 8 20:00:59.014271 kubelet[2256]: E1008 20:00:59.014033 2256 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 20:00:59.014955 kubelet[2256]: W1008 20:00:59.014874 2256 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Oct 8 20:00:59.014955 kubelet[2256]: E1008 20:00:59.014935 2256 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Oct 8 20:00:59.091457 kubelet[2256]: I1008 20:00:59.091306 2256 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 20:00:59.091664 kubelet[2256]: E1008 20:00:59.091641 2256 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.96:6443/api/v1/nodes\": dial tcp 10.0.0.96:6443: connect: connection refused" node="localhost" Oct 8 20:00:59.114979 kubelet[2256]: E1008 20:00:59.114863 2256 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 20:00:59.190590 kubelet[2256]: E1008 20:00:59.190547 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.96:6443: connect: connection refused" interval="400ms" Oct 8 20:00:59.293093 kubelet[2256]: I1008 20:00:59.293046 2256 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 20:00:59.293528 kubelet[2256]: E1008 20:00:59.293496 2256 kubelet_node_status.go:96] "Unable to register node with API 
server" err="Post \"https://10.0.0.96:6443/api/v1/nodes\": dial tcp 10.0.0.96:6443: connect: connection refused" node="localhost" Oct 8 20:00:59.315626 kubelet[2256]: E1008 20:00:59.315572 2256 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 20:00:59.591325 kubelet[2256]: E1008 20:00:59.591198 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.96:6443: connect: connection refused" interval="800ms" Oct 8 20:00:59.694812 kubelet[2256]: I1008 20:00:59.694773 2256 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 20:00:59.695517 kubelet[2256]: E1008 20:00:59.695163 2256 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.96:6443/api/v1/nodes\": dial tcp 10.0.0.96:6443: connect: connection refused" node="localhost" Oct 8 20:00:59.716256 kubelet[2256]: E1008 20:00:59.716219 2256 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 20:00:59.870683 kubelet[2256]: W1008 20:00:59.870495 2256 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Oct 8 20:00:59.870683 kubelet[2256]: E1008 20:00:59.870585 2256 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Oct 8 20:01:00.042686 kubelet[2256]: W1008 20:01:00.042597 2256 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.96:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Oct 8 20:01:00.042686 kubelet[2256]: E1008 20:01:00.042668 2256 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.96:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Oct 8 20:01:00.213357 kubelet[2256]: W1008 20:01:00.213215 2256 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Oct 8 20:01:00.213357 kubelet[2256]: E1008 20:01:00.213262 2256 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Oct 8 20:01:00.392420 kubelet[2256]: E1008 20:01:00.392387 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.96:6443: connect: connection refused" interval="1.6s" Oct 8 20:01:00.497497 kubelet[2256]: I1008 20:01:00.497387 2256 kubelet_node_status.go:73] "Attempting to register node" node="localhost" 
Oct 8 20:01:00.497839 kubelet[2256]: E1008 20:01:00.497818 2256 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.96:6443/api/v1/nodes\": dial tcp 10.0.0.96:6443: connect: connection refused" node="localhost" Oct 8 20:01:00.517142 kubelet[2256]: E1008 20:01:00.517074 2256 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 20:01:00.566949 kubelet[2256]: W1008 20:01:00.566833 2256 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Oct 8 20:01:00.566949 kubelet[2256]: E1008 20:01:00.566947 2256 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Oct 8 20:01:00.997638 kubelet[2256]: I1008 20:01:00.997565 2256 policy_none.go:49] "None policy: Start" Oct 8 20:01:00.998586 kubelet[2256]: I1008 20:01:00.998541 2256 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 20:01:00.998586 kubelet[2256]: I1008 20:01:00.998564 2256 state_mem.go:35] "Initializing new in-memory state store" Oct 8 20:01:01.019946 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 8 20:01:01.037345 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 8 20:01:01.041194 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 8 20:01:01.057544 kubelet[2256]: I1008 20:01:01.057296 2256 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 20:01:01.057703 kubelet[2256]: I1008 20:01:01.057677 2256 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 20:01:01.059086 kubelet[2256]: E1008 20:01:01.059052 2256 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 8 20:01:01.129186 kubelet[2256]: E1008 20:01:01.129130 2256 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.96:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.96:6443: connect: connection refused Oct 8 20:01:01.980978 kubelet[2256]: W1008 20:01:01.980866 2256 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Oct 8 20:01:01.980978 kubelet[2256]: E1008 20:01:01.980976 2256 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Oct 8 20:01:01.993690 kubelet[2256]: E1008 20:01:01.993636 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.96:6443: connect: connection refused" interval="3.2s" Oct 8 20:01:02.100186 kubelet[2256]: I1008 20:01:02.100141 2256 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 20:01:02.100604 kubelet[2256]: E1008 20:01:02.100526 2256 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.96:6443/api/v1/nodes\": dial tcp 10.0.0.96:6443: connect: connection refused" node="localhost" Oct 8 20:01:02.117874 kubelet[2256]: I1008 20:01:02.117832 2256 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 8 20:01:02.118966 kubelet[2256]: I1008 20:01:02.118935 2256 topology_manager.go:215] "Topology Admit Handler" podUID="0d73c3fc5c5475772570ba752843dab5" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 8 20:01:02.120159 kubelet[2256]: I1008 20:01:02.120123 2256 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 8 20:01:02.125628 systemd[1]: Created slice kubepods-burstable-podf13040d390753ac4a1fef67bb9676230.slice - libcontainer container kubepods-burstable-podf13040d390753ac4a1fef67bb9676230.slice. Oct 8 20:01:02.139613 systemd[1]: Created slice kubepods-burstable-pod0d73c3fc5c5475772570ba752843dab5.slice - libcontainer container kubepods-burstable-pod0d73c3fc5c5475772570ba752843dab5.slice. Oct 8 20:01:02.157700 systemd[1]: Created slice kubepods-burstable-podb21621a72929ad4d87bc59a877761c7f.slice - libcontainer container kubepods-burstable-podb21621a72929ad4d87bc59a877761c7f.slice. 
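The certificate_manager error repeating here is the client side of TLS bootstrapping: the kubelet wants to POST a CertificateSigningRequest but cannot reach the API server yet. A sketch of the material it would submit, using only the standard crypto packages; the subject (CN=system:node:localhost, O=system:nodes) follows the usual node-client convention and is an assumption, since the log does not show the CSR contents.

```go
// Sketch: generate a key and a PEM CSR of the kind a kubelet bootstrap would submit.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	tmpl := x509.CertificateRequest{
		Subject: pkix.Name{
			CommonName:   "system:node:localhost", // node name taken from the log
			Organization: []string{"system:nodes"},
		},
	}
	der, err := x509.CreateCertificateRequest(rand.Reader, &tmpl, key)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The kubelet wraps this PEM in a CertificateSigningRequest object and POSTs it
	// to /apis/certificates.k8s.io/v1/certificatesigningrequests; that POST is what
	// keeps failing with "connection refused" in the log above.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: der})
}
```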
Oct 8 20:01:02.207097 kubelet[2256]: I1008 20:01:02.206951 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:01:02.207097 kubelet[2256]: I1008 20:01:02.207026 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d73c3fc5c5475772570ba752843dab5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d73c3fc5c5475772570ba752843dab5\") " pod="kube-system/kube-apiserver-localhost" Oct 8 20:01:02.207097 kubelet[2256]: I1008 20:01:02.207053 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d73c3fc5c5475772570ba752843dab5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d73c3fc5c5475772570ba752843dab5\") " pod="kube-system/kube-apiserver-localhost" Oct 8 20:01:02.207097 kubelet[2256]: I1008 20:01:02.207098 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d73c3fc5c5475772570ba752843dab5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0d73c3fc5c5475772570ba752843dab5\") " pod="kube-system/kube-apiserver-localhost" Oct 8 20:01:02.207373 kubelet[2256]: I1008 20:01:02.207128 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:01:02.207373 kubelet[2256]: I1008 20:01:02.207176 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:01:02.207373 kubelet[2256]: I1008 20:01:02.207215 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost" Oct 8 20:01:02.207373 kubelet[2256]: I1008 20:01:02.207273 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:01:02.207373 kubelet[2256]: I1008 20:01:02.207294 2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " 
pod="kube-system/kube-controller-manager-localhost" Oct 8 20:01:02.331192 kubelet[2256]: W1008 20:01:02.331134 2256 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.96:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Oct 8 20:01:02.331192 kubelet[2256]: E1008 20:01:02.331180 2256 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.96:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Oct 8 20:01:02.398824 kubelet[2256]: W1008 20:01:02.398766 2256 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Oct 8 20:01:02.398824 kubelet[2256]: E1008 20:01:02.398805 2256 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Oct 8 20:01:02.437317 kubelet[2256]: E1008 20:01:02.437263 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:02.438091 containerd[1475]: time="2024-10-08T20:01:02.438041564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,}" Oct 8 20:01:02.455320 kubelet[2256]: E1008 20:01:02.455261 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:02.455898 containerd[1475]: time="2024-10-08T20:01:02.455823483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0d73c3fc5c5475772570ba752843dab5,Namespace:kube-system,Attempt:0,}" Oct 8 20:01:02.460157 kubelet[2256]: E1008 20:01:02.460132 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:02.460620 containerd[1475]: time="2024-10-08T20:01:02.460589066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,}" Oct 8 20:01:02.777676 kubelet[2256]: W1008 20:01:02.777625 2256 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Oct 8 20:01:02.777676 kubelet[2256]: E1008 20:01:02.777667 2256 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Oct 8 20:01:03.553027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3205057937.mount: Deactivated successfully. 
Oct 8 20:01:03.576554 containerd[1475]: time="2024-10-08T20:01:03.576474412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:01:03.580476 containerd[1475]: time="2024-10-08T20:01:03.580410235Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Oct 8 20:01:03.581574 containerd[1475]: time="2024-10-08T20:01:03.581534904Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:01:03.582529 containerd[1475]: time="2024-10-08T20:01:03.582489925Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:01:03.583464 containerd[1475]: time="2024-10-08T20:01:03.583436091Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:01:03.584585 containerd[1475]: time="2024-10-08T20:01:03.584500661Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 20:01:03.585508 containerd[1475]: time="2024-10-08T20:01:03.585463194Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 20:01:03.587109 containerd[1475]: time="2024-10-08T20:01:03.587067757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:01:03.589667 containerd[1475]: time="2024-10-08T20:01:03.589639861Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.151511235s" Oct 8 20:01:03.590895 containerd[1475]: time="2024-10-08T20:01:03.590846473Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.13020346s" Oct 8 20:01:03.593331 containerd[1475]: time="2024-10-08T20:01:03.593283076Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.13734665s" Oct 8 20:01:03.796255 containerd[1475]: time="2024-10-08T20:01:03.795598911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:01:03.796255 containerd[1475]: time="2024-10-08T20:01:03.795668605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:01:03.796255 containerd[1475]: time="2024-10-08T20:01:03.795683669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:01:03.796255 containerd[1475]: time="2024-10-08T20:01:03.795806751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:01:03.799965 containerd[1475]: time="2024-10-08T20:01:03.799281199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:01:03.799965 containerd[1475]: time="2024-10-08T20:01:03.799939755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:01:03.800136 containerd[1475]: time="2024-10-08T20:01:03.800091494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:01:03.800329 containerd[1475]: time="2024-10-08T20:01:03.800276907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:01:03.803560 containerd[1475]: time="2024-10-08T20:01:03.803196228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:01:03.803560 containerd[1475]: time="2024-10-08T20:01:03.803240681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:01:03.803560 containerd[1475]: time="2024-10-08T20:01:03.803265412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:01:03.803560 containerd[1475]: time="2024-10-08T20:01:03.803335065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:01:03.848180 systemd[1]: Started cri-containerd-26f6cc4dc4cce4d165d5600a0561a1619f87f1c2364bf11994814e446cf42872.scope - libcontainer container 26f6cc4dc4cce4d165d5600a0561a1619f87f1c2364bf11994814e446cf42872. Oct 8 20:01:03.850314 systemd[1]: Started cri-containerd-2f3a9abd6d03492772eb06c237c39d9413d1dea009816f2043898d24fda5604e.scope - libcontainer container 2f3a9abd6d03492772eb06c237c39d9413d1dea009816f2043898d24fda5604e. Oct 8 20:01:03.855823 systemd[1]: Started cri-containerd-db6971c25af7b315eecef8f5a960aa02d80af6f52aadf6fcdf58797787b2d79d.scope - libcontainer container db6971c25af7b315eecef8f5a960aa02d80af6f52aadf6fcdf58797787b2d79d. 
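Earlier in the log the crio cadvisor factory fails to register because /var/run/crio/crio.sock is absent, while the containerd factory registers successfully and the sandboxes above run as cri-containerd-<id>.scope units. A tiny probe over the conventional runtime socket paths; the paths are common defaults, not values read from this log.

```go
// Sketch: check which container runtime sockets exist on the node.
package main

import (
	"fmt"
	"os"
)

func main() {
	for _, sock := range []string{
		"/run/containerd/containerd.sock", // containerd CRI endpoint (in use here)
		"/var/run/crio/crio.sock",         // CRI-O endpoint, missing on this node
	} {
		if _, err := os.Stat(sock); err != nil {
			fmt.Printf("%-35s absent (%v)\n", sock, err)
			continue
		}
		fmt.Printf("%-35s present\n", sock)
	}
}
```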
Oct 8 20:01:03.934870 containerd[1475]: time="2024-10-08T20:01:03.934768773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0d73c3fc5c5475772570ba752843dab5,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f3a9abd6d03492772eb06c237c39d9413d1dea009816f2043898d24fda5604e\"" Oct 8 20:01:03.936989 kubelet[2256]: E1008 20:01:03.936928 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:03.938951 containerd[1475]: time="2024-10-08T20:01:03.938435006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,} returns sandbox id \"26f6cc4dc4cce4d165d5600a0561a1619f87f1c2364bf11994814e446cf42872\"" Oct 8 20:01:03.940295 kubelet[2256]: E1008 20:01:03.940045 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:03.942203 containerd[1475]: time="2024-10-08T20:01:03.942079793Z" level=info msg="CreateContainer within sandbox \"26f6cc4dc4cce4d165d5600a0561a1619f87f1c2364bf11994814e446cf42872\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 8 20:01:03.943807 containerd[1475]: time="2024-10-08T20:01:03.943755352Z" level=info msg="CreateContainer within sandbox \"2f3a9abd6d03492772eb06c237c39d9413d1dea009816f2043898d24fda5604e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 8 20:01:03.950252 containerd[1475]: time="2024-10-08T20:01:03.950215381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"db6971c25af7b315eecef8f5a960aa02d80af6f52aadf6fcdf58797787b2d79d\"" Oct 8 20:01:03.950910 kubelet[2256]: E1008 20:01:03.950870 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:03.954414 containerd[1475]: time="2024-10-08T20:01:03.954274655Z" level=info msg="CreateContainer within sandbox \"db6971c25af7b315eecef8f5a960aa02d80af6f52aadf6fcdf58797787b2d79d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 8 20:01:03.970408 containerd[1475]: time="2024-10-08T20:01:03.970282250Z" level=info msg="CreateContainer within sandbox \"26f6cc4dc4cce4d165d5600a0561a1619f87f1c2364bf11994814e446cf42872\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5dc665c38fcf2c5e7d78290da1a8f6da05333207fd450df15a89d2d6dd54125f\"" Oct 8 20:01:03.971659 containerd[1475]: time="2024-10-08T20:01:03.971556473Z" level=info msg="StartContainer for \"5dc665c38fcf2c5e7d78290da1a8f6da05333207fd450df15a89d2d6dd54125f\"" Oct 8 20:01:03.976473 containerd[1475]: time="2024-10-08T20:01:03.976429217Z" level=info msg="CreateContainer within sandbox \"2f3a9abd6d03492772eb06c237c39d9413d1dea009816f2043898d24fda5604e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a78f3f8f090115dd5a9d868876b0d496ac1e3e3fa5b9fd63bb387210786d59ea\"" Oct 8 20:01:03.977084 containerd[1475]: time="2024-10-08T20:01:03.977052346Z" level=info msg="StartContainer for \"a78f3f8f090115dd5a9d868876b0d496ac1e3e3fa5b9fd63bb387210786d59ea\"" Oct 8 20:01:03.988388 
containerd[1475]: time="2024-10-08T20:01:03.988339605Z" level=info msg="CreateContainer within sandbox \"db6971c25af7b315eecef8f5a960aa02d80af6f52aadf6fcdf58797787b2d79d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"30cf99b52bc71c901c54d6f9dde01c3bd3ebe9fc4b245dcee7947bc6567e8cf8\"" Oct 8 20:01:03.989373 containerd[1475]: time="2024-10-08T20:01:03.989290619Z" level=info msg="StartContainer for \"30cf99b52bc71c901c54d6f9dde01c3bd3ebe9fc4b245dcee7947bc6567e8cf8\"" Oct 8 20:01:04.003002 systemd[1]: Started cri-containerd-5dc665c38fcf2c5e7d78290da1a8f6da05333207fd450df15a89d2d6dd54125f.scope - libcontainer container 5dc665c38fcf2c5e7d78290da1a8f6da05333207fd450df15a89d2d6dd54125f. Oct 8 20:01:04.006094 systemd[1]: Started cri-containerd-a78f3f8f090115dd5a9d868876b0d496ac1e3e3fa5b9fd63bb387210786d59ea.scope - libcontainer container a78f3f8f090115dd5a9d868876b0d496ac1e3e3fa5b9fd63bb387210786d59ea. Oct 8 20:01:04.028182 systemd[1]: Started cri-containerd-30cf99b52bc71c901c54d6f9dde01c3bd3ebe9fc4b245dcee7947bc6567e8cf8.scope - libcontainer container 30cf99b52bc71c901c54d6f9dde01c3bd3ebe9fc4b245dcee7947bc6567e8cf8. Oct 8 20:01:04.062360 containerd[1475]: time="2024-10-08T20:01:04.062211732Z" level=info msg="StartContainer for \"a78f3f8f090115dd5a9d868876b0d496ac1e3e3fa5b9fd63bb387210786d59ea\" returns successfully" Oct 8 20:01:04.062360 containerd[1475]: time="2024-10-08T20:01:04.062331530Z" level=info msg="StartContainer for \"5dc665c38fcf2c5e7d78290da1a8f6da05333207fd450df15a89d2d6dd54125f\" returns successfully" Oct 8 20:01:04.086659 containerd[1475]: time="2024-10-08T20:01:04.086544229Z" level=info msg="StartContainer for \"30cf99b52bc71c901c54d6f9dde01c3bd3ebe9fc4b245dcee7947bc6567e8cf8\" returns successfully" Oct 8 20:01:05.036823 kubelet[2256]: E1008 20:01:05.036620 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:05.039463 kubelet[2256]: E1008 20:01:05.039358 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:05.040774 kubelet[2256]: E1008 20:01:05.040760 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:05.303343 kubelet[2256]: I1008 20:01:05.303193 2256 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 20:01:05.420030 kubelet[2256]: I1008 20:01:05.419984 2256 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 8 20:01:05.445818 kubelet[2256]: E1008 20:01:05.445764 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:05.454944 kubelet[2256]: E1008 20:01:05.454904 2256 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17fc92b40b5888b6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 20:00:58.985547958 +0000 UTC m=+0.368271036,LastTimestamp:2024-10-08 20:00:58.985547958 +0000 UTC 
m=+0.368271036,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 8 20:01:05.477843 kubelet[2256]: E1008 20:01:05.477785 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="6.4s" Oct 8 20:01:05.528318 kubelet[2256]: E1008 20:01:05.528271 2256 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17fc92b40bb4ecd3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 20:00:58.991602899 +0000 UTC m=+0.374325977,LastTimestamp:2024-10-08 20:00:58.991602899 +0000 UTC m=+0.374325977,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 8 20:01:05.546741 kubelet[2256]: E1008 20:01:05.546695 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:05.647497 kubelet[2256]: E1008 20:01:05.647345 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:05.711394 kubelet[2256]: E1008 20:01:05.711341 2256 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17fc92b40c9c9bfc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 20:00:59.006786556 +0000 UTC m=+0.389509634,LastTimestamp:2024-10-08 20:00:59.006786556 +0000 UTC m=+0.389509634,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 8 20:01:05.748455 kubelet[2256]: E1008 20:01:05.748413 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:05.774264 kubelet[2256]: E1008 20:01:05.774230 2256 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.17fc92b40c9cc308 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 20:00:59.006796552 +0000 UTC m=+0.389519630,LastTimestamp:2024-10-08 20:00:59.006796552 +0000 UTC m=+0.389519630,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 8 20:01:05.848995 kubelet[2256]: E1008 20:01:05.848918 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:05.949514 
kubelet[2256]: E1008 20:01:05.949370 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:06.043516 kubelet[2256]: E1008 20:01:06.043442 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:06.044225 kubelet[2256]: E1008 20:01:06.043620 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:06.044225 kubelet[2256]: E1008 20:01:06.043845 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:06.050505 kubelet[2256]: E1008 20:01:06.050443 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:06.151606 kubelet[2256]: E1008 20:01:06.151522 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:06.252263 kubelet[2256]: E1008 20:01:06.252209 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:06.352785 kubelet[2256]: E1008 20:01:06.352739 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:06.453435 kubelet[2256]: E1008 20:01:06.453367 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:06.554626 kubelet[2256]: E1008 20:01:06.554476 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:06.655210 kubelet[2256]: E1008 20:01:06.655147 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:06.755822 kubelet[2256]: E1008 20:01:06.755766 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:06.856494 kubelet[2256]: E1008 20:01:06.856379 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:06.957123 kubelet[2256]: E1008 20:01:06.957046 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:07.046773 kubelet[2256]: E1008 20:01:07.044985 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:07.057298 kubelet[2256]: E1008 20:01:07.057253 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:07.157782 kubelet[2256]: E1008 20:01:07.157597 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:07.258196 kubelet[2256]: E1008 20:01:07.258143 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:07.358747 kubelet[2256]: E1008 20:01:07.358679 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:07.459637 kubelet[2256]: E1008 
20:01:07.459508 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:07.560317 kubelet[2256]: E1008 20:01:07.560258 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:07.660978 kubelet[2256]: E1008 20:01:07.660912 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:07.701296 update_engine[1456]: I20241008 20:01:07.701169 1456 update_attempter.cc:509] Updating boot flags... Oct 8 20:01:07.734004 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2541) Oct 8 20:01:07.762115 kubelet[2256]: E1008 20:01:07.761513 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:07.772986 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2544) Oct 8 20:01:07.812238 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2544) Oct 8 20:01:07.861708 kubelet[2256]: E1008 20:01:07.861624 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:07.962359 kubelet[2256]: E1008 20:01:07.962295 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:08.046799 kubelet[2256]: E1008 20:01:08.046659 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:08.062896 kubelet[2256]: E1008 20:01:08.062805 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:08.163038 kubelet[2256]: E1008 20:01:08.162979 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:08.263620 kubelet[2256]: E1008 20:01:08.263557 2256 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:01:08.985824 kubelet[2256]: I1008 20:01:08.985788 2256 apiserver.go:52] "Watching apiserver" Oct 8 20:01:08.989893 kubelet[2256]: I1008 20:01:08.989850 2256 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 8 20:01:09.154128 systemd[1]: Reloading requested from client PID 2550 ('systemctl') (unit session-9.scope)... Oct 8 20:01:09.154146 systemd[1]: Reloading... Oct 8 20:01:09.224916 zram_generator::config[2592]: No configuration found. Oct 8 20:01:09.362904 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:01:09.486248 systemd[1]: Reloading finished in 331 ms. Oct 8 20:01:09.545194 kubelet[2256]: I1008 20:01:09.545138 2256 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 20:01:09.545257 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:01:09.564415 systemd[1]: kubelet.service: Deactivated successfully. Oct 8 20:01:09.564782 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
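After this reload the restarted kubelet (below) loads its rotated client credential from /var/lib/kubelet/pki/kubelet-client-current.pem, which shows the earlier CSR bootstrap completed once the API server came up. A small sketch that inspects that PEM; the path is taken from the log line below and reading it requires root.

```go
// Sketch: print subject, issuer and expiry of the kubelet's rotated client certificate.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The file holds the client certificate and key; report each certificate block.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Printf("subject=%s issuer=%s notAfter=%s\n", cert.Subject, cert.Issuer, cert.NotAfter)
	}
}
```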
Oct 8 20:01:09.576274 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:01:09.735980 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:01:09.741826 (kubelet)[2634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 20:01:09.801142 kubelet[2634]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:01:09.801142 kubelet[2634]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 20:01:09.801142 kubelet[2634]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:01:09.801534 kubelet[2634]: I1008 20:01:09.801193 2634 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 20:01:09.807148 kubelet[2634]: I1008 20:01:09.807123 2634 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 8 20:01:09.807148 kubelet[2634]: I1008 20:01:09.807143 2634 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 20:01:09.807298 kubelet[2634]: I1008 20:01:09.807284 2634 server.go:919] "Client rotation is on, will bootstrap in background" Oct 8 20:01:09.808610 kubelet[2634]: I1008 20:01:09.808590 2634 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 8 20:01:09.810324 kubelet[2634]: I1008 20:01:09.810291 2634 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 20:01:09.817674 kubelet[2634]: I1008 20:01:09.817641 2634 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 8 20:01:09.817910 kubelet[2634]: I1008 20:01:09.817894 2634 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 20:01:09.818071 kubelet[2634]: I1008 20:01:09.818046 2634 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 8 20:01:09.818183 kubelet[2634]: I1008 20:01:09.818074 2634 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 20:01:09.818183 kubelet[2634]: I1008 20:01:09.818084 2634 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 20:01:09.818183 kubelet[2634]: I1008 20:01:09.818110 2634 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:01:09.818277 kubelet[2634]: I1008 20:01:09.818207 2634 kubelet.go:396] "Attempting to sync node with API server" Oct 8 20:01:09.818277 kubelet[2634]: I1008 20:01:09.818225 2634 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 20:01:09.818277 kubelet[2634]: I1008 20:01:09.818258 2634 kubelet.go:312] "Adding apiserver pod source" Oct 8 20:01:09.818369 kubelet[2634]: I1008 20:01:09.818296 2634 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 20:01:09.819176 kubelet[2634]: I1008 20:01:09.819154 2634 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 20:01:09.819380 kubelet[2634]: I1008 20:01:09.819358 2634 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 20:01:09.819769 kubelet[2634]: I1008 20:01:09.819747 2634 server.go:1256] "Started kubelet" Oct 8 20:01:09.823375 kubelet[2634]: I1008 20:01:09.823096 2634 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 20:01:09.824595 kubelet[2634]: I1008 20:01:09.824419 2634 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 20:01:09.825924 kubelet[2634]: I1008 20:01:09.825904 2634 server.go:461] "Adding debug handlers to kubelet server" Oct 8 20:01:09.827093 kubelet[2634]: I1008 20:01:09.826968 2634 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 
burstTokens=10 Oct 8 20:01:09.828088 kubelet[2634]: I1008 20:01:09.827737 2634 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 20:01:09.829704 kubelet[2634]: I1008 20:01:09.829686 2634 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 20:01:09.830070 kubelet[2634]: I1008 20:01:09.830046 2634 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 8 20:01:09.830235 kubelet[2634]: I1008 20:01:09.830196 2634 reconciler_new.go:29] "Reconciler: start to sync state" Oct 8 20:01:09.832991 kubelet[2634]: I1008 20:01:09.830967 2634 factory.go:221] Registration of the systemd container factory successfully Oct 8 20:01:09.832991 kubelet[2634]: I1008 20:01:09.831060 2634 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 20:01:09.834067 kubelet[2634]: I1008 20:01:09.834052 2634 factory.go:221] Registration of the containerd container factory successfully Oct 8 20:01:09.836695 kubelet[2634]: E1008 20:01:09.836636 2634 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 20:01:09.841127 kubelet[2634]: I1008 20:01:09.841097 2634 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 20:01:09.842354 kubelet[2634]: I1008 20:01:09.842318 2634 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 8 20:01:09.842354 kubelet[2634]: I1008 20:01:09.842357 2634 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 20:01:09.842510 kubelet[2634]: I1008 20:01:09.842382 2634 kubelet.go:2329] "Starting kubelet main sync loop" Oct 8 20:01:09.842510 kubelet[2634]: E1008 20:01:09.842436 2634 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 20:01:09.872903 kubelet[2634]: I1008 20:01:09.872802 2634 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 20:01:09.872903 kubelet[2634]: I1008 20:01:09.872825 2634 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 20:01:09.872903 kubelet[2634]: I1008 20:01:09.872843 2634 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:01:09.873117 kubelet[2634]: I1008 20:01:09.873016 2634 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 8 20:01:09.873117 kubelet[2634]: I1008 20:01:09.873037 2634 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 8 20:01:09.873117 kubelet[2634]: I1008 20:01:09.873044 2634 policy_none.go:49] "None policy: Start" Oct 8 20:01:09.873752 kubelet[2634]: I1008 20:01:09.873705 2634 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 20:01:09.873752 kubelet[2634]: I1008 20:01:09.873728 2634 state_mem.go:35] "Initializing new in-memory state store" Oct 8 20:01:09.873889 kubelet[2634]: I1008 20:01:09.873863 2634 state_mem.go:75] "Updated machine memory state" Oct 8 20:01:09.879182 kubelet[2634]: I1008 20:01:09.878386 2634 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 20:01:09.879182 kubelet[2634]: I1008 20:01:09.878951 2634 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 20:01:09.935683 kubelet[2634]: I1008 20:01:09.935647 2634 kubelet_node_status.go:73] 
"Attempting to register node" node="localhost" Oct 8 20:01:09.942872 kubelet[2634]: I1008 20:01:09.942819 2634 topology_manager.go:215] "Topology Admit Handler" podUID="0d73c3fc5c5475772570ba752843dab5" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 8 20:01:09.943002 kubelet[2634]: I1008 20:01:09.942933 2634 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 8 20:01:09.943002 kubelet[2634]: I1008 20:01:09.942968 2634 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 8 20:01:10.031686 kubelet[2634]: I1008 20:01:10.031631 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost" Oct 8 20:01:10.031686 kubelet[2634]: I1008 20:01:10.031694 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d73c3fc5c5475772570ba752843dab5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d73c3fc5c5475772570ba752843dab5\") " pod="kube-system/kube-apiserver-localhost" Oct 8 20:01:10.031945 kubelet[2634]: I1008 20:01:10.031727 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:01:10.031945 kubelet[2634]: I1008 20:01:10.031748 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:01:10.031945 kubelet[2634]: I1008 20:01:10.031773 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:01:10.031945 kubelet[2634]: I1008 20:01:10.031798 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d73c3fc5c5475772570ba752843dab5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0d73c3fc5c5475772570ba752843dab5\") " pod="kube-system/kube-apiserver-localhost" Oct 8 20:01:10.031945 kubelet[2634]: I1008 20:01:10.031821 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:01:10.032102 kubelet[2634]: I1008 20:01:10.031980 2634 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:01:10.032102 kubelet[2634]: I1008 20:01:10.032057 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d73c3fc5c5475772570ba752843dab5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d73c3fc5c5475772570ba752843dab5\") " pod="kube-system/kube-apiserver-localhost" Oct 8 20:01:10.080695 kubelet[2634]: I1008 20:01:10.080629 2634 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Oct 8 20:01:10.080942 kubelet[2634]: I1008 20:01:10.080729 2634 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 8 20:01:10.121946 sudo[2669]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 8 20:01:10.122383 sudo[2669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Oct 8 20:01:10.379820 kubelet[2634]: E1008 20:01:10.379689 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:10.380593 kubelet[2634]: E1008 20:01:10.380578 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:10.380731 kubelet[2634]: E1008 20:01:10.380707 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:10.646642 sudo[2669]: pam_unix(sudo:session): session closed for user root Oct 8 20:01:10.819495 kubelet[2634]: I1008 20:01:10.819454 2634 apiserver.go:52] "Watching apiserver" Oct 8 20:01:10.830762 kubelet[2634]: I1008 20:01:10.830729 2634 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 8 20:01:10.856721 kubelet[2634]: E1008 20:01:10.856383 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:10.856721 kubelet[2634]: E1008 20:01:10.856674 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:10.856721 kubelet[2634]: E1008 20:01:10.856708 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:10.966105 kubelet[2634]: I1008 20:01:10.965497 2634 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.965430702 podStartE2EDuration="1.965430702s" podCreationTimestamp="2024-10-08 20:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:01:10.965210433 +0000 UTC m=+1.218818031" watchObservedRunningTime="2024-10-08 
20:01:10.965430702 +0000 UTC m=+1.219038300" Oct 8 20:01:11.279556 kubelet[2634]: I1008 20:01:11.279505 2634 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.279464414 podStartE2EDuration="2.279464414s" podCreationTimestamp="2024-10-08 20:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:01:11.279438628 +0000 UTC m=+1.533046226" watchObservedRunningTime="2024-10-08 20:01:11.279464414 +0000 UTC m=+1.533072012" Oct 8 20:01:11.373345 kubelet[2634]: I1008 20:01:11.373302 2634 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.373262548 podStartE2EDuration="2.373262548s" podCreationTimestamp="2024-10-08 20:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:01:11.373096671 +0000 UTC m=+1.626704269" watchObservedRunningTime="2024-10-08 20:01:11.373262548 +0000 UTC m=+1.626870156" Oct 8 20:01:11.857906 kubelet[2634]: E1008 20:01:11.857816 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:12.201381 kubelet[2634]: E1008 20:01:12.201248 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:12.952639 sudo[1664]: pam_unix(sudo:session): session closed for user root Oct 8 20:01:12.964727 sshd[1658]: pam_unix(sshd:session): session closed for user core Oct 8 20:01:12.968968 systemd[1]: sshd@8-10.0.0.96:22-10.0.0.1:34084.service: Deactivated successfully. Oct 8 20:01:12.970700 systemd[1]: session-9.scope: Deactivated successfully. Oct 8 20:01:12.970912 systemd[1]: session-9.scope: Consumed 4.603s CPU time, 190.7M memory peak, 0B memory swap peak. Oct 8 20:01:12.971324 systemd-logind[1453]: Session 9 logged out. Waiting for processes to exit. Oct 8 20:01:12.972144 systemd-logind[1453]: Removed session 9. 
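The nodeConfig JSON logged at 20:01:09.818046 includes the hard eviction thresholds the kubelet will enforce (memory.available below 100Mi, nodefs.available below 10%, nodefs.inodesFree below 5%, imagefs.available below 15%), which match the kubelet's documented defaults. A minimal sketch, not kubelet source, of how those Signal/Operator/Value triples reduce to a "LessThan" check:

```python
# Illustrative only: mirrors the HardEvictionThresholds JSON logged at
# 20:01:09.818046; names and structure here are stand-ins, not kubelet code.
HARD_EVICTION_THRESHOLDS = [
    {"signal": "memory.available",  "quantity": 100 * 1024 ** 2, "percentage": None},  # 100Mi
    {"signal": "nodefs.available",  "quantity": None, "percentage": 0.10},
    {"signal": "nodefs.inodesFree", "quantity": None, "percentage": 0.05},
    {"signal": "imagefs.available", "quantity": None, "percentage": 0.15},
]

def threshold_breached(observed, capacity, threshold):
    """Operator is LessThan: breach when the observed value drops below the floor,
    where the floor is either an absolute quantity or a percentage of capacity."""
    floor = (threshold["quantity"] if threshold["quantity"] is not None
             else threshold["percentage"] * capacity)
    return observed < floor

# Example: 80 MiB of available memory breaches the 100Mi memory.available floor.
print(threshold_breached(80 * 1024 ** 2, 8 * 1024 ** 3, HARD_EVICTION_THRESHOLDS[0]))  # True
```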
Oct 8 20:01:17.170054 kubelet[2634]: E1008 20:01:17.170020 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:17.865676 kubelet[2634]: E1008 20:01:17.865577 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:18.397658 kubelet[2634]: E1008 20:01:18.397621 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:18.867590 kubelet[2634]: E1008 20:01:18.867556 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:18.867821 kubelet[2634]: E1008 20:01:18.867793 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:21.655251 kubelet[2634]: I1008 20:01:21.655216 2634 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 8 20:01:21.655649 containerd[1475]: time="2024-10-08T20:01:21.655592769Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 8 20:01:21.656037 kubelet[2634]: I1008 20:01:21.656012 2634 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 8 20:01:21.744723 kubelet[2634]: I1008 20:01:21.744679 2634 topology_manager.go:215] "Topology Admit Handler" podUID="2e813b71-24c7-496d-97b8-b88c1fc9b7dd" podNamespace="kube-system" podName="kube-proxy-dl48c" Oct 8 20:01:21.749192 kubelet[2634]: I1008 20:01:21.749148 2634 topology_manager.go:215] "Topology Admit Handler" podUID="ded72d2b-9f96-4e24-b97f-3de805d15af6" podNamespace="kube-system" podName="cilium-548kx" Oct 8 20:01:21.759155 systemd[1]: Created slice kubepods-besteffort-pod2e813b71_24c7_496d_97b8_b88c1fc9b7dd.slice - libcontainer container kubepods-besteffort-pod2e813b71_24c7_496d_97b8_b88c1fc9b7dd.slice. Oct 8 20:01:21.783589 systemd[1]: Created slice kubepods-burstable-podded72d2b_9f96_4e24_b97f_3de805d15af6.slice - libcontainer container kubepods-burstable-podded72d2b_9f96_4e24_b97f_3de805d15af6.slice. 
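The recurring dns.go:153 "Nameserver limits exceeded" errors indicate the node has more nameservers configured than the resolv.conf limit of three (the classic glibc MAXNS cap, which the kubelet applies when assembling pod DNS config), so only the first three are kept; the applied line in every record is "1.1.1.1 1.0.0.1 8.8.8.8". A minimal sketch of that truncation, with a hypothetical fourth resolver added for illustration:

```python
# Illustrative sketch of the behaviour behind the dns.go:153 warnings above:
# only the first MAX_NAMESERVERS entries are kept (3, matching glibc MAXNS).
MAX_NAMESERVERS = 3

def apply_nameserver_limit(nameservers):
    kept = nameservers[:MAX_NAMESERVERS]
    omitted = nameservers[MAX_NAMESERVERS:]
    if omitted:
        print(f"Nameserver limits exceeded, omitting: {' '.join(omitted)}")
    return kept

# "8.8.4.4" is a hypothetical extra resolver; the first three match the log.
print(apply_nameserver_limit(["1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"]))
# -> ['1.1.1.1', '1.0.0.1', '8.8.8.8']
```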
Oct 8 20:01:21.898668 kubelet[2634]: I1008 20:01:21.898610 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-bpf-maps\") pod \"cilium-548kx\" (UID: \"ded72d2b-9f96-4e24-b97f-3de805d15af6\") " pod="kube-system/cilium-548kx" Oct 8 20:01:21.898668 kubelet[2634]: I1008 20:01:21.898666 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ded72d2b-9f96-4e24-b97f-3de805d15af6-clustermesh-secrets\") pod \"cilium-548kx\" (UID: \"ded72d2b-9f96-4e24-b97f-3de805d15af6\") " pod="kube-system/cilium-548kx" Oct 8 20:01:21.898907 kubelet[2634]: I1008 20:01:21.898694 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ded72d2b-9f96-4e24-b97f-3de805d15af6-cilium-config-path\") pod \"cilium-548kx\" (UID: \"ded72d2b-9f96-4e24-b97f-3de805d15af6\") " pod="kube-system/cilium-548kx" Oct 8 20:01:21.898907 kubelet[2634]: I1008 20:01:21.898720 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-cilium-cgroup\") pod \"cilium-548kx\" (UID: \"ded72d2b-9f96-4e24-b97f-3de805d15af6\") " pod="kube-system/cilium-548kx" Oct 8 20:01:21.898907 kubelet[2634]: I1008 20:01:21.898748 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-xtables-lock\") pod \"cilium-548kx\" (UID: \"ded72d2b-9f96-4e24-b97f-3de805d15af6\") " pod="kube-system/cilium-548kx" Oct 8 20:01:21.898907 kubelet[2634]: I1008 20:01:21.898774 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-hostproc\") pod \"cilium-548kx\" (UID: \"ded72d2b-9f96-4e24-b97f-3de805d15af6\") " pod="kube-system/cilium-548kx" Oct 8 20:01:21.898907 kubelet[2634]: I1008 20:01:21.898799 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-host-proc-sys-net\") pod \"cilium-548kx\" (UID: \"ded72d2b-9f96-4e24-b97f-3de805d15af6\") " pod="kube-system/cilium-548kx" Oct 8 20:01:21.898907 kubelet[2634]: I1008 20:01:21.898823 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2e813b71-24c7-496d-97b8-b88c1fc9b7dd-kube-proxy\") pod \"kube-proxy-dl48c\" (UID: \"2e813b71-24c7-496d-97b8-b88c1fc9b7dd\") " pod="kube-system/kube-proxy-dl48c" Oct 8 20:01:21.899109 kubelet[2634]: I1008 20:01:21.898956 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-host-proc-sys-kernel\") pod \"cilium-548kx\" (UID: \"ded72d2b-9f96-4e24-b97f-3de805d15af6\") " pod="kube-system/cilium-548kx" Oct 8 20:01:21.899109 kubelet[2634]: I1008 20:01:21.899027 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwwvz\" 
(UniqueName: \"kubernetes.io/projected/2e813b71-24c7-496d-97b8-b88c1fc9b7dd-kube-api-access-pwwvz\") pod \"kube-proxy-dl48c\" (UID: \"2e813b71-24c7-496d-97b8-b88c1fc9b7dd\") " pod="kube-system/kube-proxy-dl48c" Oct 8 20:01:21.899109 kubelet[2634]: I1008 20:01:21.899055 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-cilium-run\") pod \"cilium-548kx\" (UID: \"ded72d2b-9f96-4e24-b97f-3de805d15af6\") " pod="kube-system/cilium-548kx" Oct 8 20:01:21.899109 kubelet[2634]: I1008 20:01:21.899081 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ded72d2b-9f96-4e24-b97f-3de805d15af6-hubble-tls\") pod \"cilium-548kx\" (UID: \"ded72d2b-9f96-4e24-b97f-3de805d15af6\") " pod="kube-system/cilium-548kx" Oct 8 20:01:21.899109 kubelet[2634]: I1008 20:01:21.899110 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skq2m\" (UniqueName: \"kubernetes.io/projected/ded72d2b-9f96-4e24-b97f-3de805d15af6-kube-api-access-skq2m\") pod \"cilium-548kx\" (UID: \"ded72d2b-9f96-4e24-b97f-3de805d15af6\") " pod="kube-system/cilium-548kx" Oct 8 20:01:21.899266 kubelet[2634]: I1008 20:01:21.899137 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e813b71-24c7-496d-97b8-b88c1fc9b7dd-lib-modules\") pod \"kube-proxy-dl48c\" (UID: \"2e813b71-24c7-496d-97b8-b88c1fc9b7dd\") " pod="kube-system/kube-proxy-dl48c" Oct 8 20:01:21.899266 kubelet[2634]: I1008 20:01:21.899212 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-cni-path\") pod \"cilium-548kx\" (UID: \"ded72d2b-9f96-4e24-b97f-3de805d15af6\") " pod="kube-system/cilium-548kx" Oct 8 20:01:21.899340 kubelet[2634]: I1008 20:01:21.899290 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e813b71-24c7-496d-97b8-b88c1fc9b7dd-xtables-lock\") pod \"kube-proxy-dl48c\" (UID: \"2e813b71-24c7-496d-97b8-b88c1fc9b7dd\") " pod="kube-system/kube-proxy-dl48c" Oct 8 20:01:21.899340 kubelet[2634]: I1008 20:01:21.899334 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-etc-cni-netd\") pod \"cilium-548kx\" (UID: \"ded72d2b-9f96-4e24-b97f-3de805d15af6\") " pod="kube-system/cilium-548kx" Oct 8 20:01:21.899437 kubelet[2634]: I1008 20:01:21.899365 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-lib-modules\") pod \"cilium-548kx\" (UID: \"ded72d2b-9f96-4e24-b97f-3de805d15af6\") " pod="kube-system/cilium-548kx" Oct 8 20:01:22.100898 kubelet[2634]: E1008 20:01:22.100811 2634 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 8 20:01:22.100898 kubelet[2634]: E1008 20:01:22.100911 2634 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 8 20:01:22.100898 
kubelet[2634]: E1008 20:01:22.100940 2634 projected.go:200] Error preparing data for projected volume kube-api-access-skq2m for pod kube-system/cilium-548kx: configmap "kube-root-ca.crt" not found Oct 8 20:01:22.100898 kubelet[2634]: E1008 20:01:22.100998 2634 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ded72d2b-9f96-4e24-b97f-3de805d15af6-kube-api-access-skq2m podName:ded72d2b-9f96-4e24-b97f-3de805d15af6 nodeName:}" failed. No retries permitted until 2024-10-08 20:01:22.600977419 +0000 UTC m=+12.854585017 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-skq2m" (UniqueName: "kubernetes.io/projected/ded72d2b-9f96-4e24-b97f-3de805d15af6-kube-api-access-skq2m") pod "cilium-548kx" (UID: "ded72d2b-9f96-4e24-b97f-3de805d15af6") : configmap "kube-root-ca.crt" not found Oct 8 20:01:22.100898 kubelet[2634]: E1008 20:01:22.100920 2634 projected.go:200] Error preparing data for projected volume kube-api-access-pwwvz for pod kube-system/kube-proxy-dl48c: configmap "kube-root-ca.crt" not found Oct 8 20:01:22.100898 kubelet[2634]: E1008 20:01:22.101077 2634 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2e813b71-24c7-496d-97b8-b88c1fc9b7dd-kube-api-access-pwwvz podName:2e813b71-24c7-496d-97b8-b88c1fc9b7dd nodeName:}" failed. No retries permitted until 2024-10-08 20:01:22.601050141 +0000 UTC m=+12.854657809 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pwwvz" (UniqueName: "kubernetes.io/projected/2e813b71-24c7-496d-97b8-b88c1fc9b7dd-kube-api-access-pwwvz") pod "kube-proxy-dl48c" (UID: "2e813b71-24c7-496d-97b8-b88c1fc9b7dd") : configmap "kube-root-ca.crt" not found Oct 8 20:01:22.204579 kubelet[2634]: E1008 20:01:22.204534 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:22.458867 kubelet[2634]: I1008 20:01:22.457643 2634 topology_manager.go:215] "Topology Admit Handler" podUID="4fc06297-dd6c-4cfa-bb3c-74c3085a8ba2" podNamespace="kube-system" podName="cilium-operator-5cc964979-sglc5" Oct 8 20:01:22.467862 systemd[1]: Created slice kubepods-besteffort-pod4fc06297_dd6c_4cfa_bb3c_74c3085a8ba2.slice - libcontainer container kubepods-besteffort-pod4fc06297_dd6c_4cfa_bb3c_74c3085a8ba2.slice. 
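The "Created slice" records at 20:01:21.759155, 20:01:21.783589, and 20:01:22.467862 show how a pod's QoS class and UID map to a libcontainer slice name under the systemd cgroup driver: the QoS class is embedded in the prefix and dashes in the UID are escaped to underscores. A small sketch, not kubelet code, that reproduces the names seen in those records:

```python
# Illustrative: reconstructs the kubepods slice names logged above from the
# pod UID and QoS class (the systemd cgroup driver escapes '-' as '_').
def kubepods_slice(qos_class: str, pod_uid: str) -> str:
    escaped_uid = pod_uid.replace("-", "_")
    return f"kubepods-{qos_class}-pod{escaped_uid}.slice"

print(kubepods_slice("besteffort", "2e813b71-24c7-496d-97b8-b88c1fc9b7dd"))
# kubepods-besteffort-pod2e813b71_24c7_496d_97b8_b88c1fc9b7dd.slice
print(kubepods_slice("burstable", "ded72d2b-9f96-4e24-b97f-3de805d15af6"))
# kubepods-burstable-podded72d2b_9f96_4e24_b97f_3de805d15af6.slice
```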
Oct 8 20:01:22.604645 kubelet[2634]: I1008 20:01:22.604011 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4fc06297-dd6c-4cfa-bb3c-74c3085a8ba2-cilium-config-path\") pod \"cilium-operator-5cc964979-sglc5\" (UID: \"4fc06297-dd6c-4cfa-bb3c-74c3085a8ba2\") " pod="kube-system/cilium-operator-5cc964979-sglc5" Oct 8 20:01:22.604645 kubelet[2634]: I1008 20:01:22.604158 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmst7\" (UniqueName: \"kubernetes.io/projected/4fc06297-dd6c-4cfa-bb3c-74c3085a8ba2-kube-api-access-mmst7\") pod \"cilium-operator-5cc964979-sglc5\" (UID: \"4fc06297-dd6c-4cfa-bb3c-74c3085a8ba2\") " pod="kube-system/cilium-operator-5cc964979-sglc5" Oct 8 20:01:22.676375 kubelet[2634]: E1008 20:01:22.676315 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:22.676998 containerd[1475]: time="2024-10-08T20:01:22.676945810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dl48c,Uid:2e813b71-24c7-496d-97b8-b88c1fc9b7dd,Namespace:kube-system,Attempt:0,}" Oct 8 20:01:22.686658 kubelet[2634]: E1008 20:01:22.686613 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:22.687109 containerd[1475]: time="2024-10-08T20:01:22.687049525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-548kx,Uid:ded72d2b-9f96-4e24-b97f-3de805d15af6,Namespace:kube-system,Attempt:0,}" Oct 8 20:01:22.993366 containerd[1475]: time="2024-10-08T20:01:22.993259763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:01:22.993366 containerd[1475]: time="2024-10-08T20:01:22.993317707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:01:22.993366 containerd[1475]: time="2024-10-08T20:01:22.993329910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:01:22.993676 containerd[1475]: time="2024-10-08T20:01:22.993423058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:01:23.075453 kubelet[2634]: E1008 20:01:23.075328 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:23.076017 containerd[1475]: time="2024-10-08T20:01:23.075900548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-sglc5,Uid:4fc06297-dd6c-4cfa-bb3c-74c3085a8ba2,Namespace:kube-system,Attempt:0,}" Oct 8 20:01:23.107955 containerd[1475]: time="2024-10-08T20:01:23.107800238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:01:23.108074 containerd[1475]: time="2024-10-08T20:01:23.107934501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:01:23.108074 containerd[1475]: time="2024-10-08T20:01:23.107971348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:01:23.109073 containerd[1475]: time="2024-10-08T20:01:23.108870134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:01:23.147539 containerd[1475]: time="2024-10-08T20:01:23.147013730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:01:23.147539 containerd[1475]: time="2024-10-08T20:01:23.147088675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:01:23.147539 containerd[1475]: time="2024-10-08T20:01:23.147113561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:01:23.147539 containerd[1475]: time="2024-10-08T20:01:23.147240941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:01:23.147406 systemd[1]: Started cri-containerd-23a881e8783324e34ff5a0bebf40e15d02eea7a1d037c7a9e7836a08703fd25e.scope - libcontainer container 23a881e8783324e34ff5a0bebf40e15d02eea7a1d037c7a9e7836a08703fd25e. Oct 8 20:01:23.170191 systemd[1]: Started cri-containerd-4c173d2310cef0516e77aafc377868fe0e6c2fcc23e508ec69828158a8542510.scope - libcontainer container 4c173d2310cef0516e77aafc377868fe0e6c2fcc23e508ec69828158a8542510. Oct 8 20:01:23.179275 systemd[1]: Started cri-containerd-45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d.scope - libcontainer container 45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d. 
Oct 8 20:01:23.200517 containerd[1475]: time="2024-10-08T20:01:23.200467304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dl48c,Uid:2e813b71-24c7-496d-97b8-b88c1fc9b7dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"23a881e8783324e34ff5a0bebf40e15d02eea7a1d037c7a9e7836a08703fd25e\"" Oct 8 20:01:23.201386 kubelet[2634]: E1008 20:01:23.201358 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:23.209905 containerd[1475]: time="2024-10-08T20:01:23.209454482Z" level=info msg="CreateContainer within sandbox \"23a881e8783324e34ff5a0bebf40e15d02eea7a1d037c7a9e7836a08703fd25e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 8 20:01:23.211750 containerd[1475]: time="2024-10-08T20:01:23.211388631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-548kx,Uid:ded72d2b-9f96-4e24-b97f-3de805d15af6,Namespace:kube-system,Attempt:0,} returns sandbox id \"45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d\"" Oct 8 20:01:23.212344 kubelet[2634]: E1008 20:01:23.212326 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:23.214904 containerd[1475]: time="2024-10-08T20:01:23.214844905Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 8 20:01:23.221719 containerd[1475]: time="2024-10-08T20:01:23.221661376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-sglc5,Uid:4fc06297-dd6c-4cfa-bb3c-74c3085a8ba2,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c173d2310cef0516e77aafc377868fe0e6c2fcc23e508ec69828158a8542510\"" Oct 8 20:01:23.222888 kubelet[2634]: E1008 20:01:23.222862 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:23.233197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount208676786.mount: Deactivated successfully. Oct 8 20:01:23.236340 containerd[1475]: time="2024-10-08T20:01:23.236235172Z" level=info msg="CreateContainer within sandbox \"23a881e8783324e34ff5a0bebf40e15d02eea7a1d037c7a9e7836a08703fd25e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5f424c3049904a4c44449bfcc32b44ffe2c87567e5684a9fdb96e0697b57d7fb\"" Oct 8 20:01:23.236787 containerd[1475]: time="2024-10-08T20:01:23.236753500Z" level=info msg="StartContainer for \"5f424c3049904a4c44449bfcc32b44ffe2c87567e5684a9fdb96e0697b57d7fb\"" Oct 8 20:01:23.267014 systemd[1]: Started cri-containerd-5f424c3049904a4c44449bfcc32b44ffe2c87567e5684a9fdb96e0697b57d7fb.scope - libcontainer container 5f424c3049904a4c44449bfcc32b44ffe2c87567e5684a9fdb96e0697b57d7fb. 
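The records above trace the CRI call order for kube-proxy-dl48c: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, and StartContainer reports success. A hypothetical sketch of that sequence; the client object and method names are illustrative stand-ins, not a real CRI client library:

```python
# Hypothetical flow mirroring the containerd records above
# (RunPodSandbox -> CreateContainer -> StartContainer).
def launch_pod(cri, pod_metadata, container_metadata):
    sandbox_id = cri.run_pod_sandbox(pod_metadata)                        # "RunPodSandbox ... returns sandbox id"
    container_id = cri.create_container(sandbox_id, container_metadata)   # "CreateContainer within sandbox"
    cri.start_container(container_id)                                     # "StartContainer ... returns successfully"
    return sandbox_id, container_id
```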
Oct 8 20:01:23.305365 containerd[1475]: time="2024-10-08T20:01:23.305216938Z" level=info msg="StartContainer for \"5f424c3049904a4c44449bfcc32b44ffe2c87567e5684a9fdb96e0697b57d7fb\" returns successfully" Oct 8 20:01:23.877291 kubelet[2634]: E1008 20:01:23.877263 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:23.918646 kubelet[2634]: I1008 20:01:23.918547 2634 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-dl48c" podStartSLOduration=2.918507512 podStartE2EDuration="2.918507512s" podCreationTimestamp="2024-10-08 20:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:01:23.918397754 +0000 UTC m=+14.172005352" watchObservedRunningTime="2024-10-08 20:01:23.918507512 +0000 UTC m=+14.172115110" Oct 8 20:01:36.101071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount217157112.mount: Deactivated successfully. Oct 8 20:01:39.561264 containerd[1475]: time="2024-10-08T20:01:39.561208743Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:01:39.561982 containerd[1475]: time="2024-10-08T20:01:39.561943162Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735355" Oct 8 20:01:39.563224 containerd[1475]: time="2024-10-08T20:01:39.563197656Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:01:39.564755 containerd[1475]: time="2024-10-08T20:01:39.564708463Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 16.349787551s" Oct 8 20:01:39.564794 containerd[1475]: time="2024-10-08T20:01:39.564755781Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Oct 8 20:01:39.565449 containerd[1475]: time="2024-10-08T20:01:39.565409151Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 8 20:01:39.568556 containerd[1475]: time="2024-10-08T20:01:39.568458938Z" level=info msg="CreateContainer within sandbox \"45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 8 20:01:39.584252 containerd[1475]: time="2024-10-08T20:01:39.584207792Z" level=info msg="CreateContainer within sandbox \"45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bd60e3d4fd97b964c2a7ecddd2bb6f5d2bb9cb0db248f92dc20eb215a930a169\"" Oct 8 20:01:39.584811 containerd[1475]: 
time="2024-10-08T20:01:39.584779220Z" level=info msg="StartContainer for \"bd60e3d4fd97b964c2a7ecddd2bb6f5d2bb9cb0db248f92dc20eb215a930a169\"" Oct 8 20:01:39.625071 systemd[1]: Started cri-containerd-bd60e3d4fd97b964c2a7ecddd2bb6f5d2bb9cb0db248f92dc20eb215a930a169.scope - libcontainer container bd60e3d4fd97b964c2a7ecddd2bb6f5d2bb9cb0db248f92dc20eb215a930a169. Oct 8 20:01:39.681812 systemd[1]: cri-containerd-bd60e3d4fd97b964c2a7ecddd2bb6f5d2bb9cb0db248f92dc20eb215a930a169.scope: Deactivated successfully. Oct 8 20:01:39.694519 containerd[1475]: time="2024-10-08T20:01:39.694472377Z" level=info msg="StartContainer for \"bd60e3d4fd97b964c2a7ecddd2bb6f5d2bb9cb0db248f92dc20eb215a930a169\" returns successfully" Oct 8 20:01:39.904279 systemd[1]: Started sshd@9-10.0.0.96:22-10.0.0.1:42444.service - OpenSSH per-connection server daemon (10.0.0.1:42444). Oct 8 20:01:39.936988 kubelet[2634]: E1008 20:01:39.936950 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:40.510659 sshd[3078]: Accepted publickey for core from 10.0.0.1 port 42444 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:01:40.512128 sshd[3078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:01:40.515796 systemd-logind[1453]: New session 10 of user core. Oct 8 20:01:40.524994 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 8 20:01:40.580194 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd60e3d4fd97b964c2a7ecddd2bb6f5d2bb9cb0db248f92dc20eb215a930a169-rootfs.mount: Deactivated successfully. Oct 8 20:01:40.858965 containerd[1475]: time="2024-10-08T20:01:40.858806664Z" level=info msg="shim disconnected" id=bd60e3d4fd97b964c2a7ecddd2bb6f5d2bb9cb0db248f92dc20eb215a930a169 namespace=k8s.io Oct 8 20:01:40.858965 containerd[1475]: time="2024-10-08T20:01:40.858856185Z" level=warning msg="cleaning up after shim disconnected" id=bd60e3d4fd97b964c2a7ecddd2bb6f5d2bb9cb0db248f92dc20eb215a930a169 namespace=k8s.io Oct 8 20:01:40.858965 containerd[1475]: time="2024-10-08T20:01:40.858865231Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:01:40.861298 sshd[3078]: pam_unix(sshd:session): session closed for user core Oct 8 20:01:40.865558 systemd[1]: sshd@9-10.0.0.96:22-10.0.0.1:42444.service: Deactivated successfully. Oct 8 20:01:40.868661 systemd[1]: session-10.scope: Deactivated successfully. Oct 8 20:01:40.869765 systemd-logind[1453]: Session 10 logged out. Waiting for processes to exit. Oct 8 20:01:40.870667 systemd-logind[1453]: Removed session 10. 
Oct 8 20:01:40.939795 kubelet[2634]: E1008 20:01:40.939761 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:40.942049 containerd[1475]: time="2024-10-08T20:01:40.942011893Z" level=info msg="CreateContainer within sandbox \"45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 8 20:01:40.971901 containerd[1475]: time="2024-10-08T20:01:40.971740037Z" level=info msg="CreateContainer within sandbox \"45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1543e03611ae21438f9cf000bb5cef788af4090f24acfd814201644b9cc8d8a2\"" Oct 8 20:01:40.972636 containerd[1475]: time="2024-10-08T20:01:40.972581065Z" level=info msg="StartContainer for \"1543e03611ae21438f9cf000bb5cef788af4090f24acfd814201644b9cc8d8a2\"" Oct 8 20:01:41.009037 systemd[1]: Started cri-containerd-1543e03611ae21438f9cf000bb5cef788af4090f24acfd814201644b9cc8d8a2.scope - libcontainer container 1543e03611ae21438f9cf000bb5cef788af4090f24acfd814201644b9cc8d8a2. Oct 8 20:01:41.044637 containerd[1475]: time="2024-10-08T20:01:41.044519635Z" level=info msg="StartContainer for \"1543e03611ae21438f9cf000bb5cef788af4090f24acfd814201644b9cc8d8a2\" returns successfully" Oct 8 20:01:41.050405 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 8 20:01:41.051073 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:01:41.051148 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Oct 8 20:01:41.059419 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 20:01:41.059806 systemd[1]: cri-containerd-1543e03611ae21438f9cf000bb5cef788af4090f24acfd814201644b9cc8d8a2.scope: Deactivated successfully. Oct 8 20:01:41.077058 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:01:41.081853 containerd[1475]: time="2024-10-08T20:01:41.081790329Z" level=info msg="shim disconnected" id=1543e03611ae21438f9cf000bb5cef788af4090f24acfd814201644b9cc8d8a2 namespace=k8s.io Oct 8 20:01:41.081853 containerd[1475]: time="2024-10-08T20:01:41.081851502Z" level=warning msg="cleaning up after shim disconnected" id=1543e03611ae21438f9cf000bb5cef788af4090f24acfd814201644b9cc8d8a2 namespace=k8s.io Oct 8 20:01:41.081978 containerd[1475]: time="2024-10-08T20:01:41.081861010Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:01:41.580799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1543e03611ae21438f9cf000bb5cef788af4090f24acfd814201644b9cc8d8a2-rootfs.mount: Deactivated successfully. 
Oct 8 20:01:41.943181 kubelet[2634]: E1008 20:01:41.943072 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:41.945673 containerd[1475]: time="2024-10-08T20:01:41.945620993Z" level=info msg="CreateContainer within sandbox \"45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 8 20:01:41.979819 containerd[1475]: time="2024-10-08T20:01:41.979744970Z" level=info msg="CreateContainer within sandbox \"45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"10f9942fe8c32fd2e248ef07be0f3cd3def21072b8336a31165961c69b188602\"" Oct 8 20:01:41.980501 containerd[1475]: time="2024-10-08T20:01:41.980470536Z" level=info msg="StartContainer for \"10f9942fe8c32fd2e248ef07be0f3cd3def21072b8336a31165961c69b188602\"" Oct 8 20:01:42.021159 systemd[1]: Started cri-containerd-10f9942fe8c32fd2e248ef07be0f3cd3def21072b8336a31165961c69b188602.scope - libcontainer container 10f9942fe8c32fd2e248ef07be0f3cd3def21072b8336a31165961c69b188602. Oct 8 20:01:42.054668 containerd[1475]: time="2024-10-08T20:01:42.054618172Z" level=info msg="StartContainer for \"10f9942fe8c32fd2e248ef07be0f3cd3def21072b8336a31165961c69b188602\" returns successfully" Oct 8 20:01:42.056937 systemd[1]: cri-containerd-10f9942fe8c32fd2e248ef07be0f3cd3def21072b8336a31165961c69b188602.scope: Deactivated successfully. Oct 8 20:01:42.089858 containerd[1475]: time="2024-10-08T20:01:42.089755654Z" level=info msg="shim disconnected" id=10f9942fe8c32fd2e248ef07be0f3cd3def21072b8336a31165961c69b188602 namespace=k8s.io Oct 8 20:01:42.089858 containerd[1475]: time="2024-10-08T20:01:42.089814093Z" level=warning msg="cleaning up after shim disconnected" id=10f9942fe8c32fd2e248ef07be0f3cd3def21072b8336a31165961c69b188602 namespace=k8s.io Oct 8 20:01:42.089858 containerd[1475]: time="2024-10-08T20:01:42.089824623Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:01:42.580591 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10f9942fe8c32fd2e248ef07be0f3cd3def21072b8336a31165961c69b188602-rootfs.mount: Deactivated successfully. Oct 8 20:01:42.670954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount842781587.mount: Deactivated successfully. Oct 8 20:01:42.947495 kubelet[2634]: E1008 20:01:42.946950 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:42.950596 containerd[1475]: time="2024-10-08T20:01:42.950552597Z" level=info msg="CreateContainer within sandbox \"45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 8 20:01:43.600764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2169794097.mount: Deactivated successfully. 
Oct 8 20:01:43.677074 containerd[1475]: time="2024-10-08T20:01:43.677012633Z" level=info msg="CreateContainer within sandbox \"45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3f176a3de9b99e87e33ffcfa2923dc1d8616014b6ed02ae9d71bebd1a861b517\"" Oct 8 20:01:43.677950 containerd[1475]: time="2024-10-08T20:01:43.677668832Z" level=info msg="StartContainer for \"3f176a3de9b99e87e33ffcfa2923dc1d8616014b6ed02ae9d71bebd1a861b517\"" Oct 8 20:01:43.704551 containerd[1475]: time="2024-10-08T20:01:43.704481636Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:01:43.705674 containerd[1475]: time="2024-10-08T20:01:43.705621123Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907205" Oct 8 20:01:43.706829 containerd[1475]: time="2024-10-08T20:01:43.706795444Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:01:43.707121 systemd[1]: Started cri-containerd-3f176a3de9b99e87e33ffcfa2923dc1d8616014b6ed02ae9d71bebd1a861b517.scope - libcontainer container 3f176a3de9b99e87e33ffcfa2923dc1d8616014b6ed02ae9d71bebd1a861b517. Oct 8 20:01:43.709189 containerd[1475]: time="2024-10-08T20:01:43.708937382Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.143496572s" Oct 8 20:01:43.709189 containerd[1475]: time="2024-10-08T20:01:43.708986403Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Oct 8 20:01:43.713705 containerd[1475]: time="2024-10-08T20:01:43.713587608Z" level=info msg="CreateContainer within sandbox \"4c173d2310cef0516e77aafc377868fe0e6c2fcc23e508ec69828158a8542510\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 8 20:01:43.735390 systemd[1]: cri-containerd-3f176a3de9b99e87e33ffcfa2923dc1d8616014b6ed02ae9d71bebd1a861b517.scope: Deactivated successfully. Oct 8 20:01:43.844942 containerd[1475]: time="2024-10-08T20:01:43.844854929Z" level=info msg="StartContainer for \"3f176a3de9b99e87e33ffcfa2923dc1d8616014b6ed02ae9d71bebd1a861b517\" returns successfully" Oct 8 20:01:43.867826 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f176a3de9b99e87e33ffcfa2923dc1d8616014b6ed02ae9d71bebd1a861b517-rootfs.mount: Deactivated successfully. 
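The two containerd pull records above (cilium at 20:01:39.56, operator-generic at 20:01:43.70) contain enough data for a quick throughput estimate: bytes read divided by the reported pull duration. Only the arithmetic is added here; the byte counts and durations are copied from the log:

```python
# Back-of-the-envelope pull throughput from the containerd records above.
pulls = {
    "quay.io/cilium/cilium:v1.12.5":           (166_735_355, 16.349787551),
    "quay.io/cilium/operator-generic:v1.12.5": (18_907_205,  4.143496572),
}
for image, (bytes_read, seconds) in pulls.items():
    print(f"{image}: {bytes_read / seconds / 1e6:.1f} MB/s")
# quay.io/cilium/cilium:v1.12.5: 10.2 MB/s
# quay.io/cilium/operator-generic:v1.12.5: 4.6 MB/s
```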
Oct 8 20:01:43.951544 kubelet[2634]: E1008 20:01:43.951511 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:44.175355 containerd[1475]: time="2024-10-08T20:01:44.175168064Z" level=info msg="shim disconnected" id=3f176a3de9b99e87e33ffcfa2923dc1d8616014b6ed02ae9d71bebd1a861b517 namespace=k8s.io Oct 8 20:01:44.175355 containerd[1475]: time="2024-10-08T20:01:44.175234788Z" level=warning msg="cleaning up after shim disconnected" id=3f176a3de9b99e87e33ffcfa2923dc1d8616014b6ed02ae9d71bebd1a861b517 namespace=k8s.io Oct 8 20:01:44.175355 containerd[1475]: time="2024-10-08T20:01:44.175245518Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:01:44.179978 containerd[1475]: time="2024-10-08T20:01:44.179921748Z" level=info msg="CreateContainer within sandbox \"4c173d2310cef0516e77aafc377868fe0e6c2fcc23e508ec69828158a8542510\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"97b1e6b751ef596aadc776cbc86fc108f703e6c21f013a537761eb0810c43fcb\"" Oct 8 20:01:44.180635 containerd[1475]: time="2024-10-08T20:01:44.180554043Z" level=info msg="StartContainer for \"97b1e6b751ef596aadc776cbc86fc108f703e6c21f013a537761eb0810c43fcb\"" Oct 8 20:01:44.210153 systemd[1]: Started cri-containerd-97b1e6b751ef596aadc776cbc86fc108f703e6c21f013a537761eb0810c43fcb.scope - libcontainer container 97b1e6b751ef596aadc776cbc86fc108f703e6c21f013a537761eb0810c43fcb. Oct 8 20:01:44.238176 containerd[1475]: time="2024-10-08T20:01:44.238120487Z" level=info msg="StartContainer for \"97b1e6b751ef596aadc776cbc86fc108f703e6c21f013a537761eb0810c43fcb\" returns successfully" Oct 8 20:01:44.955022 kubelet[2634]: E1008 20:01:44.954959 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:44.957811 kubelet[2634]: E1008 20:01:44.957774 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:44.959679 containerd[1475]: time="2024-10-08T20:01:44.959638468Z" level=info msg="CreateContainer within sandbox \"45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 8 20:01:45.063944 kubelet[2634]: I1008 20:01:45.063869 2634 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-sglc5" podStartSLOduration=2.577985785 podStartE2EDuration="23.063828468s" podCreationTimestamp="2024-10-08 20:01:22 +0000 UTC" firstStartedPulling="2024-10-08 20:01:23.223341927 +0000 UTC m=+13.476949525" lastFinishedPulling="2024-10-08 20:01:43.70918461 +0000 UTC m=+33.962792208" observedRunningTime="2024-10-08 20:01:45.022486799 +0000 UTC m=+35.276094397" watchObservedRunningTime="2024-10-08 20:01:45.063828468 +0000 UTC m=+35.317436066" Oct 8 20:01:45.064943 containerd[1475]: time="2024-10-08T20:01:45.064858433Z" level=info msg="CreateContainer within sandbox \"45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2f999ab717a417646b920fbd855050a3a3680d96b275236704909ef123be0472\"" Oct 8 20:01:45.066018 containerd[1475]: time="2024-10-08T20:01:45.065965181Z" level=info msg="StartContainer for 
\"2f999ab717a417646b920fbd855050a3a3680d96b275236704909ef123be0472\"" Oct 8 20:01:45.113207 systemd[1]: Started cri-containerd-2f999ab717a417646b920fbd855050a3a3680d96b275236704909ef123be0472.scope - libcontainer container 2f999ab717a417646b920fbd855050a3a3680d96b275236704909ef123be0472. Oct 8 20:01:45.147898 containerd[1475]: time="2024-10-08T20:01:45.147829504Z" level=info msg="StartContainer for \"2f999ab717a417646b920fbd855050a3a3680d96b275236704909ef123be0472\" returns successfully" Oct 8 20:01:45.289110 kubelet[2634]: I1008 20:01:45.289076 2634 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 8 20:01:45.319976 kubelet[2634]: I1008 20:01:45.319920 2634 topology_manager.go:215] "Topology Admit Handler" podUID="e3327f3d-9926-480c-9f60-2ddf4fb6db8a" podNamespace="kube-system" podName="coredns-76f75df574-4khl8" Oct 8 20:01:45.321465 kubelet[2634]: I1008 20:01:45.321404 2634 topology_manager.go:215] "Topology Admit Handler" podUID="19bed8ce-1b31-4220-857c-eeb281d0be8f" podNamespace="kube-system" podName="coredns-76f75df574-4629r" Oct 8 20:01:45.331457 systemd[1]: Created slice kubepods-burstable-pode3327f3d_9926_480c_9f60_2ddf4fb6db8a.slice - libcontainer container kubepods-burstable-pode3327f3d_9926_480c_9f60_2ddf4fb6db8a.slice. Oct 8 20:01:45.343574 systemd[1]: Created slice kubepods-burstable-pod19bed8ce_1b31_4220_857c_eeb281d0be8f.slice - libcontainer container kubepods-burstable-pod19bed8ce_1b31_4220_857c_eeb281d0be8f.slice. Oct 8 20:01:45.399835 kubelet[2634]: I1008 20:01:45.399786 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7gkc\" (UniqueName: \"kubernetes.io/projected/e3327f3d-9926-480c-9f60-2ddf4fb6db8a-kube-api-access-k7gkc\") pod \"coredns-76f75df574-4khl8\" (UID: \"e3327f3d-9926-480c-9f60-2ddf4fb6db8a\") " pod="kube-system/coredns-76f75df574-4khl8" Oct 8 20:01:45.399998 kubelet[2634]: I1008 20:01:45.399942 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9vbh\" (UniqueName: \"kubernetes.io/projected/19bed8ce-1b31-4220-857c-eeb281d0be8f-kube-api-access-t9vbh\") pod \"coredns-76f75df574-4629r\" (UID: \"19bed8ce-1b31-4220-857c-eeb281d0be8f\") " pod="kube-system/coredns-76f75df574-4629r" Oct 8 20:01:45.400072 kubelet[2634]: I1008 20:01:45.400045 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3327f3d-9926-480c-9f60-2ddf4fb6db8a-config-volume\") pod \"coredns-76f75df574-4khl8\" (UID: \"e3327f3d-9926-480c-9f60-2ddf4fb6db8a\") " pod="kube-system/coredns-76f75df574-4khl8" Oct 8 20:01:45.400134 kubelet[2634]: I1008 20:01:45.400090 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19bed8ce-1b31-4220-857c-eeb281d0be8f-config-volume\") pod \"coredns-76f75df574-4629r\" (UID: \"19bed8ce-1b31-4220-857c-eeb281d0be8f\") " pod="kube-system/coredns-76f75df574-4629r" Oct 8 20:01:45.642477 kubelet[2634]: E1008 20:01:45.641125 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:45.642574 containerd[1475]: time="2024-10-08T20:01:45.642419157Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-4khl8,Uid:e3327f3d-9926-480c-9f60-2ddf4fb6db8a,Namespace:kube-system,Attempt:0,}" Oct 8 20:01:45.648226 kubelet[2634]: E1008 20:01:45.648181 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:45.649138 containerd[1475]: time="2024-10-08T20:01:45.648832792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4629r,Uid:19bed8ce-1b31-4220-857c-eeb281d0be8f,Namespace:kube-system,Attempt:0,}" Oct 8 20:01:45.876085 systemd[1]: Started sshd@10-10.0.0.96:22-10.0.0.1:57278.service - OpenSSH per-connection server daemon (10.0.0.1:57278). Oct 8 20:01:45.955327 sshd[3450]: Accepted publickey for core from 10.0.0.1 port 57278 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:01:45.957326 sshd[3450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:01:45.962115 kubelet[2634]: E1008 20:01:45.962089 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:45.962651 kubelet[2634]: E1008 20:01:45.962479 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:45.962928 systemd-logind[1453]: New session 11 of user core. Oct 8 20:01:45.970093 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 8 20:01:46.025084 kubelet[2634]: I1008 20:01:46.024844 2634 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-548kx" podStartSLOduration=8.673911424 podStartE2EDuration="25.024800149s" podCreationTimestamp="2024-10-08 20:01:21 +0000 UTC" firstStartedPulling="2024-10-08 20:01:23.214314014 +0000 UTC m=+13.467921612" lastFinishedPulling="2024-10-08 20:01:39.565202748 +0000 UTC m=+29.818810337" observedRunningTime="2024-10-08 20:01:46.024504538 +0000 UTC m=+36.278112146" watchObservedRunningTime="2024-10-08 20:01:46.024800149 +0000 UTC m=+36.278407747" Oct 8 20:01:46.089384 sshd[3450]: pam_unix(sshd:session): session closed for user core Oct 8 20:01:46.097238 systemd[1]: sshd@10-10.0.0.96:22-10.0.0.1:57278.service: Deactivated successfully. Oct 8 20:01:46.099924 systemd[1]: session-11.scope: Deactivated successfully. Oct 8 20:01:46.102236 systemd-logind[1453]: Session 11 logged out. Waiting for processes to exit. Oct 8 20:01:46.103260 systemd-logind[1453]: Removed session 11. 
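The pod_startup_latency_tracker records at 20:01:45.063869 and 20:01:46.024844 are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). A short re-derivation using the cilium-operator record, with timestamps copied from the log and expressed as seconds past 20:01:00 UTC; the cilium-548kx record follows the same pattern:

```python
# Re-deriving kube-system/cilium-operator-5cc964979-sglc5 startup numbers
# from the 20:01:45.063869 record; only the arithmetic is added.
created   = 22.0            # podCreationTimestamp   2024-10-08 20:01:22
pull_from = 23.223341927    # firstStartedPulling
pull_to   = 43.70918461     # lastFinishedPulling
running   = 45.063828468    # watchObservedRunningTime

e2e = running - created               # -> 23.063828468 (logged podStartE2EDuration)
slo = e2e - (pull_to - pull_from)     # -> 2.577985785  (logged podStartSLOduration)
print(f"E2E={e2e:.9f}s  SLO={slo:.9f}s")  # matches the log up to float rounding

# Sanity check: the "m=+..." offsets are seconds since kubelet start, e.g.
# 45.063828468 - 35.317436066 = 9.746392402, i.e. kubelet started at 20:01:09.746392402.
```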
Oct 8 20:01:46.964137 kubelet[2634]: E1008 20:01:46.964110 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:47.966197 kubelet[2634]: E1008 20:01:47.966162 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:48.702521 systemd-networkd[1402]: cilium_host: Link UP Oct 8 20:01:48.703463 systemd-networkd[1402]: cilium_net: Link UP Oct 8 20:01:48.703986 systemd-networkd[1402]: cilium_net: Gained carrier Oct 8 20:01:48.704233 systemd-networkd[1402]: cilium_host: Gained carrier Oct 8 20:01:48.808156 systemd-networkd[1402]: cilium_net: Gained IPv6LL Oct 8 20:01:48.811080 systemd-networkd[1402]: cilium_vxlan: Link UP Oct 8 20:01:48.811087 systemd-networkd[1402]: cilium_vxlan: Gained carrier Oct 8 20:01:48.928078 systemd-networkd[1402]: cilium_host: Gained IPv6LL Oct 8 20:01:49.037910 kernel: NET: Registered PF_ALG protocol family Oct 8 20:01:49.682927 systemd-networkd[1402]: lxc_health: Link UP Oct 8 20:01:49.683297 systemd-networkd[1402]: lxc_health: Gained carrier Oct 8 20:01:50.108134 systemd-networkd[1402]: lxc7eda821f7e68: Link UP Oct 8 20:01:50.117924 kernel: eth0: renamed from tmp17202 Oct 8 20:01:50.121747 systemd-networkd[1402]: lxcceec3fd318b9: Link UP Oct 8 20:01:50.135913 kernel: eth0: renamed from tmp68add Oct 8 20:01:50.143245 systemd-networkd[1402]: lxc7eda821f7e68: Gained carrier Oct 8 20:01:50.144479 systemd-networkd[1402]: lxcceec3fd318b9: Gained carrier Oct 8 20:01:50.688471 kubelet[2634]: E1008 20:01:50.688432 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:50.800110 systemd-networkd[1402]: cilium_vxlan: Gained IPv6LL Oct 8 20:01:50.972165 kubelet[2634]: E1008 20:01:50.972014 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:51.101178 systemd[1]: Started sshd@11-10.0.0.96:22-10.0.0.1:48452.service - OpenSSH per-connection server daemon (10.0.0.1:48452). Oct 8 20:01:51.184691 sshd[3869]: Accepted publickey for core from 10.0.0.1 port 48452 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:01:51.186554 sshd[3869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:01:51.190799 systemd-logind[1453]: New session 12 of user core. Oct 8 20:01:51.198053 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 8 20:01:51.319808 sshd[3869]: pam_unix(sshd:session): session closed for user core Oct 8 20:01:51.323592 systemd[1]: sshd@11-10.0.0.96:22-10.0.0.1:48452.service: Deactivated successfully. Oct 8 20:01:51.325643 systemd[1]: session-12.scope: Deactivated successfully. Oct 8 20:01:51.326360 systemd-logind[1453]: Session 12 logged out. Waiting for processes to exit. Oct 8 20:01:51.327446 systemd-logind[1453]: Removed session 12. 
Oct 8 20:01:51.696077 systemd-networkd[1402]: lxc_health: Gained IPv6LL Oct 8 20:01:51.974276 kubelet[2634]: E1008 20:01:51.974029 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:52.016047 systemd-networkd[1402]: lxc7eda821f7e68: Gained IPv6LL Oct 8 20:01:52.016473 systemd-networkd[1402]: lxcceec3fd318b9: Gained IPv6LL Oct 8 20:01:54.076268 containerd[1475]: time="2024-10-08T20:01:54.075966960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:01:54.076268 containerd[1475]: time="2024-10-08T20:01:54.076197329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:01:54.076268 containerd[1475]: time="2024-10-08T20:01:54.076211966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:01:54.076719 containerd[1475]: time="2024-10-08T20:01:54.076354703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:01:54.096004 containerd[1475]: time="2024-10-08T20:01:54.095896637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:01:54.096361 containerd[1475]: time="2024-10-08T20:01:54.096256217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:01:54.097084 containerd[1475]: time="2024-10-08T20:01:54.097035401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:01:54.098239 containerd[1475]: time="2024-10-08T20:01:54.098182613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:01:54.102566 systemd[1]: Started cri-containerd-172024ecb5e3480ff2a21c92e7f20913d7580d26930a93c3eb371035aa998bc9.scope - libcontainer container 172024ecb5e3480ff2a21c92e7f20913d7580d26930a93c3eb371035aa998bc9. Oct 8 20:01:54.124089 systemd[1]: Started cri-containerd-68add37169ff7df688952e3cb8d35ce9df036fde5ac90e0530e4542915f9b9d7.scope - libcontainer container 68add37169ff7df688952e3cb8d35ce9df036fde5ac90e0530e4542915f9b9d7. 
Oct 8 20:01:54.128319 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 20:01:54.139316 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 20:01:54.158726 containerd[1475]: time="2024-10-08T20:01:54.158601659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4khl8,Uid:e3327f3d-9926-480c-9f60-2ddf4fb6db8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"172024ecb5e3480ff2a21c92e7f20913d7580d26930a93c3eb371035aa998bc9\"" Oct 8 20:01:54.160661 kubelet[2634]: E1008 20:01:54.160638 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:54.169314 containerd[1475]: time="2024-10-08T20:01:54.169233873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4629r,Uid:19bed8ce-1b31-4220-857c-eeb281d0be8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"68add37169ff7df688952e3cb8d35ce9df036fde5ac90e0530e4542915f9b9d7\"" Oct 8 20:01:54.170228 kubelet[2634]: E1008 20:01:54.170091 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:54.172402 containerd[1475]: time="2024-10-08T20:01:54.172258087Z" level=info msg="CreateContainer within sandbox \"68add37169ff7df688952e3cb8d35ce9df036fde5ac90e0530e4542915f9b9d7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 20:01:54.172739 containerd[1475]: time="2024-10-08T20:01:54.172700112Z" level=info msg="CreateContainer within sandbox \"172024ecb5e3480ff2a21c92e7f20913d7580d26930a93c3eb371035aa998bc9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 20:01:54.640472 containerd[1475]: time="2024-10-08T20:01:54.640416688Z" level=info msg="CreateContainer within sandbox \"68add37169ff7df688952e3cb8d35ce9df036fde5ac90e0530e4542915f9b9d7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"44c36751897bfab0aea0a611ab2920e2a17d275f0e6a87eef356e07597e592c7\"" Oct 8 20:01:54.641208 containerd[1475]: time="2024-10-08T20:01:54.641052465Z" level=info msg="StartContainer for \"44c36751897bfab0aea0a611ab2920e2a17d275f0e6a87eef356e07597e592c7\"" Oct 8 20:01:54.647377 containerd[1475]: time="2024-10-08T20:01:54.647329100Z" level=info msg="CreateContainer within sandbox \"172024ecb5e3480ff2a21c92e7f20913d7580d26930a93c3eb371035aa998bc9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eb4999db353211a10bb596ad7c3f6823afeddbeb757d8f6b07bd17f8121f0d32\"" Oct 8 20:01:54.648283 containerd[1475]: time="2024-10-08T20:01:54.648258163Z" level=info msg="StartContainer for \"eb4999db353211a10bb596ad7c3f6823afeddbeb757d8f6b07bd17f8121f0d32\"" Oct 8 20:01:54.669094 systemd[1]: Started cri-containerd-44c36751897bfab0aea0a611ab2920e2a17d275f0e6a87eef356e07597e592c7.scope - libcontainer container 44c36751897bfab0aea0a611ab2920e2a17d275f0e6a87eef356e07597e592c7. Oct 8 20:01:54.681024 systemd[1]: Started cri-containerd-eb4999db353211a10bb596ad7c3f6823afeddbeb757d8f6b07bd17f8121f0d32.scope - libcontainer container eb4999db353211a10bb596ad7c3f6823afeddbeb757d8f6b07bd17f8121f0d32. 
Oct 8 20:01:54.884634 containerd[1475]: time="2024-10-08T20:01:54.884579753Z" level=info msg="StartContainer for \"eb4999db353211a10bb596ad7c3f6823afeddbeb757d8f6b07bd17f8121f0d32\" returns successfully" Oct 8 20:01:54.884768 containerd[1475]: time="2024-10-08T20:01:54.884580725Z" level=info msg="StartContainer for \"44c36751897bfab0aea0a611ab2920e2a17d275f0e6a87eef356e07597e592c7\" returns successfully" Oct 8 20:01:54.993364 kubelet[2634]: E1008 20:01:54.993132 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:54.995030 kubelet[2634]: E1008 20:01:54.995006 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:55.526441 kubelet[2634]: I1008 20:01:55.526075 2634 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-4khl8" podStartSLOduration=33.52601943 podStartE2EDuration="33.52601943s" podCreationTimestamp="2024-10-08 20:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:01:55.525854952 +0000 UTC m=+45.779462550" watchObservedRunningTime="2024-10-08 20:01:55.52601943 +0000 UTC m=+45.779627028" Oct 8 20:01:55.720346 kubelet[2634]: I1008 20:01:55.720285 2634 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-4629r" podStartSLOduration=33.720247816 podStartE2EDuration="33.720247816s" podCreationTimestamp="2024-10-08 20:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:01:55.720071126 +0000 UTC m=+45.973678724" watchObservedRunningTime="2024-10-08 20:01:55.720247816 +0000 UTC m=+45.973855415" Oct 8 20:01:55.997421 kubelet[2634]: E1008 20:01:55.997155 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:55.997421 kubelet[2634]: E1008 20:01:55.997213 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:56.336355 systemd[1]: Started sshd@12-10.0.0.96:22-10.0.0.1:48468.service - OpenSSH per-connection server daemon (10.0.0.1:48468). Oct 8 20:01:56.382632 sshd[4061]: Accepted publickey for core from 10.0.0.1 port 48468 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:01:56.384075 sshd[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:01:56.388633 systemd-logind[1453]: New session 13 of user core. Oct 8 20:01:56.400088 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 8 20:01:56.504919 sshd[4061]: pam_unix(sshd:session): session closed for user core Oct 8 20:01:56.509253 systemd[1]: sshd@12-10.0.0.96:22-10.0.0.1:48468.service: Deactivated successfully. Oct 8 20:01:56.511292 systemd[1]: session-13.scope: Deactivated successfully. Oct 8 20:01:56.512123 systemd-logind[1453]: Session 13 logged out. Waiting for processes to exit. Oct 8 20:01:56.513035 systemd-logind[1453]: Removed session 13. 
Oct 8 20:01:56.998583 kubelet[2634]: E1008 20:01:56.998526 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:01:56.999059 kubelet[2634]: E1008 20:01:56.998615 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:01.515510 systemd[1]: Started sshd@13-10.0.0.96:22-10.0.0.1:34048.service - OpenSSH per-connection server daemon (10.0.0.1:34048). Oct 8 20:02:01.553063 sshd[4081]: Accepted publickey for core from 10.0.0.1 port 34048 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:02:01.554488 sshd[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:02:01.558116 systemd-logind[1453]: New session 14 of user core. Oct 8 20:02:01.569009 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 8 20:02:01.677821 sshd[4081]: pam_unix(sshd:session): session closed for user core Oct 8 20:02:01.688862 systemd[1]: sshd@13-10.0.0.96:22-10.0.0.1:34048.service: Deactivated successfully. Oct 8 20:02:01.690710 systemd[1]: session-14.scope: Deactivated successfully. Oct 8 20:02:01.692223 systemd-logind[1453]: Session 14 logged out. Waiting for processes to exit. Oct 8 20:02:01.701163 systemd[1]: Started sshd@14-10.0.0.96:22-10.0.0.1:34058.service - OpenSSH per-connection server daemon (10.0.0.1:34058). Oct 8 20:02:01.702072 systemd-logind[1453]: Removed session 14. Oct 8 20:02:01.736384 sshd[4096]: Accepted publickey for core from 10.0.0.1 port 34058 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:02:01.737889 sshd[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:02:01.741807 systemd-logind[1453]: New session 15 of user core. Oct 8 20:02:01.750012 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 8 20:02:01.900628 sshd[4096]: pam_unix(sshd:session): session closed for user core Oct 8 20:02:01.908343 systemd[1]: sshd@14-10.0.0.96:22-10.0.0.1:34058.service: Deactivated successfully. Oct 8 20:02:01.910424 systemd[1]: session-15.scope: Deactivated successfully. Oct 8 20:02:01.912275 systemd-logind[1453]: Session 15 logged out. Waiting for processes to exit. Oct 8 20:02:01.915537 systemd-logind[1453]: Removed session 15. Oct 8 20:02:01.922231 systemd[1]: Started sshd@15-10.0.0.96:22-10.0.0.1:34074.service - OpenSSH per-connection server daemon (10.0.0.1:34074). Oct 8 20:02:01.957600 sshd[4110]: Accepted publickey for core from 10.0.0.1 port 34074 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:02:01.959002 sshd[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:02:01.962874 systemd-logind[1453]: New session 16 of user core. Oct 8 20:02:01.970028 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 8 20:02:02.077793 sshd[4110]: pam_unix(sshd:session): session closed for user core Oct 8 20:02:02.083131 systemd[1]: sshd@15-10.0.0.96:22-10.0.0.1:34074.service: Deactivated successfully. Oct 8 20:02:02.085678 systemd[1]: session-16.scope: Deactivated successfully. Oct 8 20:02:02.086280 systemd-logind[1453]: Session 16 logged out. Waiting for processes to exit. Oct 8 20:02:02.087247 systemd-logind[1453]: Removed session 16. 
Oct 8 20:02:07.089907 systemd[1]: Started sshd@16-10.0.0.96:22-10.0.0.1:34086.service - OpenSSH per-connection server daemon (10.0.0.1:34086). Oct 8 20:02:07.131389 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 34086 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:02:07.133293 sshd[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:02:07.137412 systemd-logind[1453]: New session 17 of user core. Oct 8 20:02:07.153148 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 8 20:02:07.260747 sshd[4125]: pam_unix(sshd:session): session closed for user core Oct 8 20:02:07.264714 systemd[1]: sshd@16-10.0.0.96:22-10.0.0.1:34086.service: Deactivated successfully. Oct 8 20:02:07.266581 systemd[1]: session-17.scope: Deactivated successfully. Oct 8 20:02:07.267269 systemd-logind[1453]: Session 17 logged out. Waiting for processes to exit. Oct 8 20:02:07.268295 systemd-logind[1453]: Removed session 17. Oct 8 20:02:12.271969 systemd[1]: Started sshd@17-10.0.0.96:22-10.0.0.1:50226.service - OpenSSH per-connection server daemon (10.0.0.1:50226). Oct 8 20:02:12.311071 sshd[4142]: Accepted publickey for core from 10.0.0.1 port 50226 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:02:12.313076 sshd[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:02:12.317061 systemd-logind[1453]: New session 18 of user core. Oct 8 20:02:12.328060 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 8 20:02:12.449899 sshd[4142]: pam_unix(sshd:session): session closed for user core Oct 8 20:02:12.454598 systemd[1]: sshd@17-10.0.0.96:22-10.0.0.1:50226.service: Deactivated successfully. Oct 8 20:02:12.457262 systemd[1]: session-18.scope: Deactivated successfully. Oct 8 20:02:12.458043 systemd-logind[1453]: Session 18 logged out. Waiting for processes to exit. Oct 8 20:02:12.458990 systemd-logind[1453]: Removed session 18. Oct 8 20:02:17.462958 systemd[1]: Started sshd@18-10.0.0.96:22-10.0.0.1:50228.service - OpenSSH per-connection server daemon (10.0.0.1:50228). Oct 8 20:02:17.502819 sshd[4157]: Accepted publickey for core from 10.0.0.1 port 50228 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:02:17.504457 sshd[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:02:17.508798 systemd-logind[1453]: New session 19 of user core. Oct 8 20:02:17.517023 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 8 20:02:17.632130 sshd[4157]: pam_unix(sshd:session): session closed for user core Oct 8 20:02:17.644638 systemd[1]: sshd@18-10.0.0.96:22-10.0.0.1:50228.service: Deactivated successfully. Oct 8 20:02:17.647084 systemd[1]: session-19.scope: Deactivated successfully. Oct 8 20:02:17.649012 systemd-logind[1453]: Session 19 logged out. Waiting for processes to exit. Oct 8 20:02:17.650732 systemd[1]: Started sshd@19-10.0.0.96:22-10.0.0.1:50240.service - OpenSSH per-connection server daemon (10.0.0.1:50240). Oct 8 20:02:17.651690 systemd-logind[1453]: Removed session 19. Oct 8 20:02:17.693046 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 50240 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:02:17.694816 sshd[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:02:17.699697 systemd-logind[1453]: New session 20 of user core. Oct 8 20:02:17.707064 systemd[1]: Started session-20.scope - Session 20 of User core. 
Oct 8 20:02:18.033478 sshd[4171]: pam_unix(sshd:session): session closed for user core Oct 8 20:02:18.045016 systemd[1]: sshd@19-10.0.0.96:22-10.0.0.1:50240.service: Deactivated successfully. Oct 8 20:02:18.046730 systemd[1]: session-20.scope: Deactivated successfully. Oct 8 20:02:18.048447 systemd-logind[1453]: Session 20 logged out. Waiting for processes to exit. Oct 8 20:02:18.053138 systemd[1]: Started sshd@20-10.0.0.96:22-10.0.0.1:50246.service - OpenSSH per-connection server daemon (10.0.0.1:50246). Oct 8 20:02:18.054062 systemd-logind[1453]: Removed session 20. Oct 8 20:02:18.095465 sshd[4183]: Accepted publickey for core from 10.0.0.1 port 50246 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:02:18.097672 sshd[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:02:18.102091 systemd-logind[1453]: New session 21 of user core. Oct 8 20:02:18.112110 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 8 20:02:21.687236 sshd[4183]: pam_unix(sshd:session): session closed for user core Oct 8 20:02:21.703169 systemd[1]: sshd@20-10.0.0.96:22-10.0.0.1:50246.service: Deactivated successfully. Oct 8 20:02:21.705066 systemd[1]: session-21.scope: Deactivated successfully. Oct 8 20:02:21.706746 systemd-logind[1453]: Session 21 logged out. Waiting for processes to exit. Oct 8 20:02:21.715207 systemd[1]: Started sshd@21-10.0.0.96:22-10.0.0.1:44302.service - OpenSSH per-connection server daemon (10.0.0.1:44302). Oct 8 20:02:21.716278 systemd-logind[1453]: Removed session 21. Oct 8 20:02:21.751387 sshd[4213]: Accepted publickey for core from 10.0.0.1 port 44302 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:02:21.753574 sshd[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:02:21.758761 systemd-logind[1453]: New session 22 of user core. Oct 8 20:02:21.766165 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 8 20:02:22.496599 systemd[1]: Started sshd@22-10.0.0.96:22-10.0.0.1:44310.service - OpenSSH per-connection server daemon (10.0.0.1:44310). Oct 8 20:02:22.539981 sshd[4223]: Accepted publickey for core from 10.0.0.1 port 44310 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:02:22.577238 sshd[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:02:22.583720 systemd-logind[1453]: New session 23 of user core. Oct 8 20:02:22.595180 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 8 20:02:22.682995 sshd[4213]: pam_unix(sshd:session): session closed for user core Oct 8 20:02:22.687711 systemd[1]: sshd@21-10.0.0.96:22-10.0.0.1:44302.service: Deactivated successfully. Oct 8 20:02:22.689943 systemd[1]: session-22.scope: Deactivated successfully. Oct 8 20:02:22.690857 systemd-logind[1453]: Session 22 logged out. Waiting for processes to exit. Oct 8 20:02:22.691862 systemd-logind[1453]: Removed session 22. Oct 8 20:02:23.066971 sshd[4223]: pam_unix(sshd:session): session closed for user core Oct 8 20:02:23.071069 systemd[1]: sshd@22-10.0.0.96:22-10.0.0.1:44310.service: Deactivated successfully. Oct 8 20:02:23.073069 systemd[1]: session-23.scope: Deactivated successfully. Oct 8 20:02:23.073634 systemd-logind[1453]: Session 23 logged out. Waiting for processes to exit. Oct 8 20:02:23.074536 systemd-logind[1453]: Removed session 23. 
Oct 8 20:02:23.844102 kubelet[2634]: E1008 20:02:23.844061 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:25.844586 kubelet[2634]: E1008 20:02:25.844464 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:26.843921 kubelet[2634]: E1008 20:02:26.843855 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:27.866297 systemd[1]: Started sshd@23-10.0.0.96:22-10.0.0.1:44316.service - OpenSSH per-connection server daemon (10.0.0.1:44316). Oct 8 20:02:27.908804 sshd[4243]: Accepted publickey for core from 10.0.0.1 port 44316 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:02:27.910666 sshd[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:02:27.915155 systemd-logind[1453]: New session 24 of user core. Oct 8 20:02:27.928115 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 8 20:02:28.062233 sshd[4243]: pam_unix(sshd:session): session closed for user core Oct 8 20:02:28.066639 systemd[1]: sshd@23-10.0.0.96:22-10.0.0.1:44316.service: Deactivated successfully. Oct 8 20:02:28.068705 systemd[1]: session-24.scope: Deactivated successfully. Oct 8 20:02:28.069320 systemd-logind[1453]: Session 24 logged out. Waiting for processes to exit. Oct 8 20:02:28.070304 systemd-logind[1453]: Removed session 24. Oct 8 20:02:33.076534 systemd[1]: Started sshd@24-10.0.0.96:22-10.0.0.1:60926.service - OpenSSH per-connection server daemon (10.0.0.1:60926). Oct 8 20:02:33.115183 sshd[4257]: Accepted publickey for core from 10.0.0.1 port 60926 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:02:33.116813 sshd[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:02:33.121092 systemd-logind[1453]: New session 25 of user core. Oct 8 20:02:33.131994 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 8 20:02:33.234364 sshd[4257]: pam_unix(sshd:session): session closed for user core Oct 8 20:02:33.238375 systemd[1]: sshd@24-10.0.0.96:22-10.0.0.1:60926.service: Deactivated successfully. Oct 8 20:02:33.240513 systemd[1]: session-25.scope: Deactivated successfully. Oct 8 20:02:33.241104 systemd-logind[1453]: Session 25 logged out. Waiting for processes to exit. Oct 8 20:02:33.241970 systemd-logind[1453]: Removed session 25. Oct 8 20:02:38.250356 systemd[1]: Started sshd@25-10.0.0.96:22-10.0.0.1:60930.service - OpenSSH per-connection server daemon (10.0.0.1:60930). Oct 8 20:02:38.295456 sshd[4274]: Accepted publickey for core from 10.0.0.1 port 60930 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:02:38.297519 sshd[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:02:38.302298 systemd-logind[1453]: New session 26 of user core. Oct 8 20:02:38.314012 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 8 20:02:38.438046 sshd[4274]: pam_unix(sshd:session): session closed for user core Oct 8 20:02:38.442613 systemd[1]: sshd@25-10.0.0.96:22-10.0.0.1:60930.service: Deactivated successfully. Oct 8 20:02:38.444642 systemd[1]: session-26.scope: Deactivated successfully. 
Oct 8 20:02:38.445446 systemd-logind[1453]: Session 26 logged out. Waiting for processes to exit. Oct 8 20:02:38.446404 systemd-logind[1453]: Removed session 26. Oct 8 20:02:38.844304 kubelet[2634]: E1008 20:02:38.844230 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:43.454054 systemd[1]: Started sshd@26-10.0.0.96:22-10.0.0.1:37170.service - OpenSSH per-connection server daemon (10.0.0.1:37170). Oct 8 20:02:43.510166 sshd[4288]: Accepted publickey for core from 10.0.0.1 port 37170 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:02:43.511913 sshd[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:02:43.517030 systemd-logind[1453]: New session 27 of user core. Oct 8 20:02:43.526144 systemd[1]: Started session-27.scope - Session 27 of User core. Oct 8 20:02:43.639659 sshd[4288]: pam_unix(sshd:session): session closed for user core Oct 8 20:02:43.644295 systemd[1]: sshd@26-10.0.0.96:22-10.0.0.1:37170.service: Deactivated successfully. Oct 8 20:02:43.646754 systemd[1]: session-27.scope: Deactivated successfully. Oct 8 20:02:43.647499 systemd-logind[1453]: Session 27 logged out. Waiting for processes to exit. Oct 8 20:02:43.648466 systemd-logind[1453]: Removed session 27. Oct 8 20:02:48.650673 systemd[1]: Started sshd@27-10.0.0.96:22-10.0.0.1:37184.service - OpenSSH per-connection server daemon (10.0.0.1:37184). Oct 8 20:02:48.690022 sshd[4302]: Accepted publickey for core from 10.0.0.1 port 37184 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:02:48.691862 sshd[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:02:48.696133 systemd-logind[1453]: New session 28 of user core. Oct 8 20:02:48.703054 systemd[1]: Started session-28.scope - Session 28 of User core. Oct 8 20:02:48.809292 sshd[4302]: pam_unix(sshd:session): session closed for user core Oct 8 20:02:48.822376 systemd[1]: sshd@27-10.0.0.96:22-10.0.0.1:37184.service: Deactivated successfully. Oct 8 20:02:48.824806 systemd[1]: session-28.scope: Deactivated successfully. Oct 8 20:02:48.826829 systemd-logind[1453]: Session 28 logged out. Waiting for processes to exit. Oct 8 20:02:48.841655 systemd[1]: Started sshd@28-10.0.0.96:22-10.0.0.1:37196.service - OpenSSH per-connection server daemon (10.0.0.1:37196). Oct 8 20:02:48.842788 systemd-logind[1453]: Removed session 28. Oct 8 20:02:48.877703 sshd[4317]: Accepted publickey for core from 10.0.0.1 port 37196 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:02:48.879360 sshd[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:02:48.884725 systemd-logind[1453]: New session 29 of user core. Oct 8 20:02:48.892124 systemd[1]: Started session-29.scope - Session 29 of User core. Oct 8 20:02:50.243128 containerd[1475]: time="2024-10-08T20:02:50.243035483Z" level=info msg="StopContainer for \"97b1e6b751ef596aadc776cbc86fc108f703e6c21f013a537761eb0810c43fcb\" with timeout 30 (s)" Oct 8 20:02:50.244604 containerd[1475]: time="2024-10-08T20:02:50.244546288Z" level=info msg="Stop container \"97b1e6b751ef596aadc776cbc86fc108f703e6c21f013a537761eb0810c43fcb\" with signal terminated" Oct 8 20:02:50.260017 systemd[1]: run-containerd-runc-k8s.io-2f999ab717a417646b920fbd855050a3a3680d96b275236704909ef123be0472-runc.BkfPzI.mount: Deactivated successfully. 
Oct 8 20:02:50.266747 systemd[1]: cri-containerd-97b1e6b751ef596aadc776cbc86fc108f703e6c21f013a537761eb0810c43fcb.scope: Deactivated successfully. Oct 8 20:02:50.289694 containerd[1475]: time="2024-10-08T20:02:50.289640519Z" level=info msg="StopContainer for \"2f999ab717a417646b920fbd855050a3a3680d96b275236704909ef123be0472\" with timeout 2 (s)" Oct 8 20:02:50.289943 containerd[1475]: time="2024-10-08T20:02:50.289858891Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 20:02:50.289943 containerd[1475]: time="2024-10-08T20:02:50.289910649Z" level=info msg="Stop container \"2f999ab717a417646b920fbd855050a3a3680d96b275236704909ef123be0472\" with signal terminated" Oct 8 20:02:50.292616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97b1e6b751ef596aadc776cbc86fc108f703e6c21f013a537761eb0810c43fcb-rootfs.mount: Deactivated successfully. Oct 8 20:02:50.296923 systemd-networkd[1402]: lxc_health: Link DOWN Oct 8 20:02:50.296934 systemd-networkd[1402]: lxc_health: Lost carrier Oct 8 20:02:50.302359 containerd[1475]: time="2024-10-08T20:02:50.302286137Z" level=info msg="shim disconnected" id=97b1e6b751ef596aadc776cbc86fc108f703e6c21f013a537761eb0810c43fcb namespace=k8s.io Oct 8 20:02:50.302359 containerd[1475]: time="2024-10-08T20:02:50.302344406Z" level=warning msg="cleaning up after shim disconnected" id=97b1e6b751ef596aadc776cbc86fc108f703e6c21f013a537761eb0810c43fcb namespace=k8s.io Oct 8 20:02:50.302359 containerd[1475]: time="2024-10-08T20:02:50.302355388Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:02:50.321248 containerd[1475]: time="2024-10-08T20:02:50.321203127Z" level=info msg="StopContainer for \"97b1e6b751ef596aadc776cbc86fc108f703e6c21f013a537761eb0810c43fcb\" returns successfully" Oct 8 20:02:50.321993 containerd[1475]: time="2024-10-08T20:02:50.321960979Z" level=info msg="StopPodSandbox for \"4c173d2310cef0516e77aafc377868fe0e6c2fcc23e508ec69828158a8542510\"" Oct 8 20:02:50.322037 containerd[1475]: time="2024-10-08T20:02:50.322009631Z" level=info msg="Container to stop \"97b1e6b751ef596aadc776cbc86fc108f703e6c21f013a537761eb0810c43fcb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:02:50.324195 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4c173d2310cef0516e77aafc377868fe0e6c2fcc23e508ec69828158a8542510-shm.mount: Deactivated successfully. Oct 8 20:02:50.325088 systemd[1]: cri-containerd-2f999ab717a417646b920fbd855050a3a3680d96b275236704909ef123be0472.scope: Deactivated successfully. Oct 8 20:02:50.325479 systemd[1]: cri-containerd-2f999ab717a417646b920fbd855050a3a3680d96b275236704909ef123be0472.scope: Consumed 7.447s CPU time. Oct 8 20:02:50.339385 systemd[1]: cri-containerd-4c173d2310cef0516e77aafc377868fe0e6c2fcc23e508ec69828158a8542510.scope: Deactivated successfully. Oct 8 20:02:50.346469 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f999ab717a417646b920fbd855050a3a3680d96b275236704909ef123be0472-rootfs.mount: Deactivated successfully. 
Oct 8 20:02:50.353546 containerd[1475]: time="2024-10-08T20:02:50.353461069Z" level=info msg="shim disconnected" id=2f999ab717a417646b920fbd855050a3a3680d96b275236704909ef123be0472 namespace=k8s.io Oct 8 20:02:50.353546 containerd[1475]: time="2024-10-08T20:02:50.353532785Z" level=warning msg="cleaning up after shim disconnected" id=2f999ab717a417646b920fbd855050a3a3680d96b275236704909ef123be0472 namespace=k8s.io Oct 8 20:02:50.353546 containerd[1475]: time="2024-10-08T20:02:50.353548725Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:02:50.369457 containerd[1475]: time="2024-10-08T20:02:50.369385434Z" level=info msg="shim disconnected" id=4c173d2310cef0516e77aafc377868fe0e6c2fcc23e508ec69828158a8542510 namespace=k8s.io Oct 8 20:02:50.369457 containerd[1475]: time="2024-10-08T20:02:50.369450897Z" level=warning msg="cleaning up after shim disconnected" id=4c173d2310cef0516e77aafc377868fe0e6c2fcc23e508ec69828158a8542510 namespace=k8s.io Oct 8 20:02:50.369457 containerd[1475]: time="2024-10-08T20:02:50.369462009Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:02:50.370430 containerd[1475]: time="2024-10-08T20:02:50.370386135Z" level=info msg="StopContainer for \"2f999ab717a417646b920fbd855050a3a3680d96b275236704909ef123be0472\" returns successfully" Oct 8 20:02:50.371081 containerd[1475]: time="2024-10-08T20:02:50.371056953Z" level=info msg="StopPodSandbox for \"45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d\"" Oct 8 20:02:50.371139 containerd[1475]: time="2024-10-08T20:02:50.371084976Z" level=info msg="Container to stop \"3f176a3de9b99e87e33ffcfa2923dc1d8616014b6ed02ae9d71bebd1a861b517\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:02:50.371139 containerd[1475]: time="2024-10-08T20:02:50.371100254Z" level=info msg="Container to stop \"bd60e3d4fd97b964c2a7ecddd2bb6f5d2bb9cb0db248f92dc20eb215a930a169\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:02:50.371139 containerd[1475]: time="2024-10-08T20:02:50.371125703Z" level=info msg="Container to stop \"1543e03611ae21438f9cf000bb5cef788af4090f24acfd814201644b9cc8d8a2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:02:50.371139 containerd[1475]: time="2024-10-08T20:02:50.371134880Z" level=info msg="Container to stop \"10f9942fe8c32fd2e248ef07be0f3cd3def21072b8336a31165961c69b188602\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:02:50.371295 containerd[1475]: time="2024-10-08T20:02:50.371143777Z" level=info msg="Container to stop \"2f999ab717a417646b920fbd855050a3a3680d96b275236704909ef123be0472\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:02:50.377833 systemd[1]: cri-containerd-45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d.scope: Deactivated successfully. 
Oct 8 20:02:50.393382 containerd[1475]: time="2024-10-08T20:02:50.393183990Z" level=info msg="TearDown network for sandbox \"4c173d2310cef0516e77aafc377868fe0e6c2fcc23e508ec69828158a8542510\" successfully" Oct 8 20:02:50.393382 containerd[1475]: time="2024-10-08T20:02:50.393233574Z" level=info msg="StopPodSandbox for \"4c173d2310cef0516e77aafc377868fe0e6c2fcc23e508ec69828158a8542510\" returns successfully" Oct 8 20:02:50.405189 containerd[1475]: time="2024-10-08T20:02:50.405092465Z" level=info msg="shim disconnected" id=45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d namespace=k8s.io Oct 8 20:02:50.405408 containerd[1475]: time="2024-10-08T20:02:50.405182415Z" level=warning msg="cleaning up after shim disconnected" id=45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d namespace=k8s.io Oct 8 20:02:50.405408 containerd[1475]: time="2024-10-08T20:02:50.405213513Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:02:50.420966 containerd[1475]: time="2024-10-08T20:02:50.420907903Z" level=info msg="TearDown network for sandbox \"45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d\" successfully" Oct 8 20:02:50.420966 containerd[1475]: time="2024-10-08T20:02:50.420952307Z" level=info msg="StopPodSandbox for \"45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d\" returns successfully" Oct 8 20:02:50.492705 kubelet[2634]: I1008 20:02:50.492652 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-host-proc-sys-net\") pod \"ded72d2b-9f96-4e24-b97f-3de805d15af6\" (UID: \"ded72d2b-9f96-4e24-b97f-3de805d15af6\") " Oct 8 20:02:50.492705 kubelet[2634]: I1008 20:02:50.492698 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-hostproc\") pod \"ded72d2b-9f96-4e24-b97f-3de805d15af6\" (UID: \"ded72d2b-9f96-4e24-b97f-3de805d15af6\") " Oct 8 20:02:50.492705 kubelet[2634]: I1008 20:02:50.492721 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-cni-path\") pod \"ded72d2b-9f96-4e24-b97f-3de805d15af6\" (UID: \"ded72d2b-9f96-4e24-b97f-3de805d15af6\") " Oct 8 20:02:50.497374 kubelet[2634]: I1008 20:02:50.492747 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ded72d2b-9f96-4e24-b97f-3de805d15af6-clustermesh-secrets\") pod \"ded72d2b-9f96-4e24-b97f-3de805d15af6\" (UID: \"ded72d2b-9f96-4e24-b97f-3de805d15af6\") " Oct 8 20:02:50.497374 kubelet[2634]: I1008 20:02:50.492772 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-etc-cni-netd\") pod \"ded72d2b-9f96-4e24-b97f-3de805d15af6\" (UID: \"ded72d2b-9f96-4e24-b97f-3de805d15af6\") " Oct 8 20:02:50.497374 kubelet[2634]: I1008 20:02:50.492782 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ded72d2b-9f96-4e24-b97f-3de805d15af6" (UID: "ded72d2b-9f96-4e24-b97f-3de805d15af6"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:02:50.497374 kubelet[2634]: I1008 20:02:50.492793 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-lib-modules\") pod \"ded72d2b-9f96-4e24-b97f-3de805d15af6\" (UID: \"ded72d2b-9f96-4e24-b97f-3de805d15af6\") " Oct 8 20:02:50.497374 kubelet[2634]: I1008 20:02:50.492811 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ded72d2b-9f96-4e24-b97f-3de805d15af6" (UID: "ded72d2b-9f96-4e24-b97f-3de805d15af6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:02:50.497590 kubelet[2634]: I1008 20:02:50.492837 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-hostproc" (OuterVolumeSpecName: "hostproc") pod "ded72d2b-9f96-4e24-b97f-3de805d15af6" (UID: "ded72d2b-9f96-4e24-b97f-3de805d15af6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:02:50.497590 kubelet[2634]: I1008 20:02:50.492842 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-bpf-maps\") pod \"ded72d2b-9f96-4e24-b97f-3de805d15af6\" (UID: \"ded72d2b-9f96-4e24-b97f-3de805d15af6\") " Oct 8 20:02:50.497590 kubelet[2634]: I1008 20:02:50.492855 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ded72d2b-9f96-4e24-b97f-3de805d15af6" (UID: "ded72d2b-9f96-4e24-b97f-3de805d15af6"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:02:50.497590 kubelet[2634]: I1008 20:02:50.492872 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ded72d2b-9f96-4e24-b97f-3de805d15af6-cilium-config-path\") pod \"ded72d2b-9f96-4e24-b97f-3de805d15af6\" (UID: \"ded72d2b-9f96-4e24-b97f-3de805d15af6\") " Oct 8 20:02:50.497590 kubelet[2634]: I1008 20:02:50.492916 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-cilium-cgroup\") pod \"ded72d2b-9f96-4e24-b97f-3de805d15af6\" (UID: \"ded72d2b-9f96-4e24-b97f-3de805d15af6\") " Oct 8 20:02:50.497590 kubelet[2634]: I1008 20:02:50.492939 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-host-proc-sys-kernel\") pod \"ded72d2b-9f96-4e24-b97f-3de805d15af6\" (UID: \"ded72d2b-9f96-4e24-b97f-3de805d15af6\") " Oct 8 20:02:50.497811 kubelet[2634]: I1008 20:02:50.492964 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmst7\" (UniqueName: \"kubernetes.io/projected/4fc06297-dd6c-4cfa-bb3c-74c3085a8ba2-kube-api-access-mmst7\") pod \"4fc06297-dd6c-4cfa-bb3c-74c3085a8ba2\" (UID: \"4fc06297-dd6c-4cfa-bb3c-74c3085a8ba2\") " Oct 8 20:02:50.497811 kubelet[2634]: I1008 20:02:50.492990 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skq2m\" (UniqueName: \"kubernetes.io/projected/ded72d2b-9f96-4e24-b97f-3de805d15af6-kube-api-access-skq2m\") pod \"ded72d2b-9f96-4e24-b97f-3de805d15af6\" (UID: \"ded72d2b-9f96-4e24-b97f-3de805d15af6\") " Oct 8 20:02:50.497811 kubelet[2634]: I1008 20:02:50.493011 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-xtables-lock\") pod \"ded72d2b-9f96-4e24-b97f-3de805d15af6\" (UID: \"ded72d2b-9f96-4e24-b97f-3de805d15af6\") " Oct 8 20:02:50.497811 kubelet[2634]: I1008 20:02:50.493033 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-cilium-run\") pod \"ded72d2b-9f96-4e24-b97f-3de805d15af6\" (UID: \"ded72d2b-9f96-4e24-b97f-3de805d15af6\") " Oct 8 20:02:50.497811 kubelet[2634]: I1008 20:02:50.493057 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ded72d2b-9f96-4e24-b97f-3de805d15af6-hubble-tls\") pod \"ded72d2b-9f96-4e24-b97f-3de805d15af6\" (UID: \"ded72d2b-9f96-4e24-b97f-3de805d15af6\") " Oct 8 20:02:50.497811 kubelet[2634]: I1008 20:02:50.493079 2634 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4fc06297-dd6c-4cfa-bb3c-74c3085a8ba2-cilium-config-path\") pod \"4fc06297-dd6c-4cfa-bb3c-74c3085a8ba2\" (UID: \"4fc06297-dd6c-4cfa-bb3c-74c3085a8ba2\") " Oct 8 20:02:50.498194 kubelet[2634]: I1008 20:02:50.493113 2634 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:50.498194 kubelet[2634]: 
I1008 20:02:50.493127 2634 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-hostproc\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:50.498194 kubelet[2634]: I1008 20:02:50.493140 2634 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:50.498194 kubelet[2634]: I1008 20:02:50.493164 2634 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-bpf-maps\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:50.498194 kubelet[2634]: I1008 20:02:50.493286 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-cni-path" (OuterVolumeSpecName: "cni-path") pod "ded72d2b-9f96-4e24-b97f-3de805d15af6" (UID: "ded72d2b-9f96-4e24-b97f-3de805d15af6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:02:50.498194 kubelet[2634]: I1008 20:02:50.494545 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ded72d2b-9f96-4e24-b97f-3de805d15af6" (UID: "ded72d2b-9f96-4e24-b97f-3de805d15af6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:02:50.498404 kubelet[2634]: I1008 20:02:50.494577 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ded72d2b-9f96-4e24-b97f-3de805d15af6" (UID: "ded72d2b-9f96-4e24-b97f-3de805d15af6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:02:50.498404 kubelet[2634]: I1008 20:02:50.494703 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ded72d2b-9f96-4e24-b97f-3de805d15af6" (UID: "ded72d2b-9f96-4e24-b97f-3de805d15af6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:02:50.498404 kubelet[2634]: I1008 20:02:50.494734 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ded72d2b-9f96-4e24-b97f-3de805d15af6" (UID: "ded72d2b-9f96-4e24-b97f-3de805d15af6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:02:50.498404 kubelet[2634]: I1008 20:02:50.494757 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ded72d2b-9f96-4e24-b97f-3de805d15af6" (UID: "ded72d2b-9f96-4e24-b97f-3de805d15af6"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:02:50.498404 kubelet[2634]: I1008 20:02:50.497144 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fc06297-dd6c-4cfa-bb3c-74c3085a8ba2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4fc06297-dd6c-4cfa-bb3c-74c3085a8ba2" (UID: "4fc06297-dd6c-4cfa-bb3c-74c3085a8ba2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 8 20:02:50.498570 kubelet[2634]: I1008 20:02:50.497655 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fc06297-dd6c-4cfa-bb3c-74c3085a8ba2-kube-api-access-mmst7" (OuterVolumeSpecName: "kube-api-access-mmst7") pod "4fc06297-dd6c-4cfa-bb3c-74c3085a8ba2" (UID: "4fc06297-dd6c-4cfa-bb3c-74c3085a8ba2"). InnerVolumeSpecName "kube-api-access-mmst7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 8 20:02:50.498570 kubelet[2634]: I1008 20:02:50.498395 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ded72d2b-9f96-4e24-b97f-3de805d15af6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ded72d2b-9f96-4e24-b97f-3de805d15af6" (UID: "ded72d2b-9f96-4e24-b97f-3de805d15af6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 8 20:02:50.499240 kubelet[2634]: I1008 20:02:50.499207 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ded72d2b-9f96-4e24-b97f-3de805d15af6-kube-api-access-skq2m" (OuterVolumeSpecName: "kube-api-access-skq2m") pod "ded72d2b-9f96-4e24-b97f-3de805d15af6" (UID: "ded72d2b-9f96-4e24-b97f-3de805d15af6"). InnerVolumeSpecName "kube-api-access-skq2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 8 20:02:50.499359 kubelet[2634]: I1008 20:02:50.499319 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ded72d2b-9f96-4e24-b97f-3de805d15af6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ded72d2b-9f96-4e24-b97f-3de805d15af6" (UID: "ded72d2b-9f96-4e24-b97f-3de805d15af6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 8 20:02:50.501065 kubelet[2634]: I1008 20:02:50.501004 2634 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ded72d2b-9f96-4e24-b97f-3de805d15af6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ded72d2b-9f96-4e24-b97f-3de805d15af6" (UID: "ded72d2b-9f96-4e24-b97f-3de805d15af6"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 8 20:02:50.593364 kubelet[2634]: I1008 20:02:50.593264 2634 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-skq2m\" (UniqueName: \"kubernetes.io/projected/ded72d2b-9f96-4e24-b97f-3de805d15af6-kube-api-access-skq2m\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:50.593364 kubelet[2634]: I1008 20:02:50.593296 2634 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ded72d2b-9f96-4e24-b97f-3de805d15af6-hubble-tls\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:50.593364 kubelet[2634]: I1008 20:02:50.593311 2634 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4fc06297-dd6c-4cfa-bb3c-74c3085a8ba2-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:50.593364 kubelet[2634]: I1008 20:02:50.593324 2634 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-xtables-lock\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:50.593364 kubelet[2634]: I1008 20:02:50.593338 2634 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-cilium-run\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:50.593364 kubelet[2634]: I1008 20:02:50.593351 2634 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-cni-path\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:50.593364 kubelet[2634]: I1008 20:02:50.593364 2634 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ded72d2b-9f96-4e24-b97f-3de805d15af6-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:50.593364 kubelet[2634]: I1008 20:02:50.593377 2634 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ded72d2b-9f96-4e24-b97f-3de805d15af6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:50.593733 kubelet[2634]: I1008 20:02:50.593390 2634 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:50.593733 kubelet[2634]: I1008 20:02:50.593402 2634 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:50.593733 kubelet[2634]: I1008 20:02:50.593415 2634 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ded72d2b-9f96-4e24-b97f-3de805d15af6-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:50.593733 kubelet[2634]: I1008 20:02:50.593427 2634 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mmst7\" (UniqueName: \"kubernetes.io/projected/4fc06297-dd6c-4cfa-bb3c-74c3085a8ba2-kube-api-access-mmst7\") on node \"localhost\" DevicePath \"\"" Oct 8 20:02:51.095349 kubelet[2634]: I1008 20:02:51.095232 2634 scope.go:117] "RemoveContainer" containerID="97b1e6b751ef596aadc776cbc86fc108f703e6c21f013a537761eb0810c43fcb" Oct 8 20:02:51.097331 containerd[1475]: 
time="2024-10-08T20:02:51.097288250Z" level=info msg="RemoveContainer for \"97b1e6b751ef596aadc776cbc86fc108f703e6c21f013a537761eb0810c43fcb\"" Oct 8 20:02:51.102474 systemd[1]: Removed slice kubepods-besteffort-pod4fc06297_dd6c_4cfa_bb3c_74c3085a8ba2.slice - libcontainer container kubepods-besteffort-pod4fc06297_dd6c_4cfa_bb3c_74c3085a8ba2.slice. Oct 8 20:02:51.107101 systemd[1]: Removed slice kubepods-burstable-podded72d2b_9f96_4e24_b97f_3de805d15af6.slice - libcontainer container kubepods-burstable-podded72d2b_9f96_4e24_b97f_3de805d15af6.slice. Oct 8 20:02:51.107236 systemd[1]: kubepods-burstable-podded72d2b_9f96_4e24_b97f_3de805d15af6.slice: Consumed 7.554s CPU time. Oct 8 20:02:51.218897 containerd[1475]: time="2024-10-08T20:02:51.218837185Z" level=info msg="RemoveContainer for \"97b1e6b751ef596aadc776cbc86fc108f703e6c21f013a537761eb0810c43fcb\" returns successfully" Oct 8 20:02:51.221755 kubelet[2634]: I1008 20:02:51.221704 2634 scope.go:117] "RemoveContainer" containerID="97b1e6b751ef596aadc776cbc86fc108f703e6c21f013a537761eb0810c43fcb" Oct 8 20:02:51.225084 containerd[1475]: time="2024-10-08T20:02:51.225031179Z" level=error msg="ContainerStatus for \"97b1e6b751ef596aadc776cbc86fc108f703e6c21f013a537761eb0810c43fcb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"97b1e6b751ef596aadc776cbc86fc108f703e6c21f013a537761eb0810c43fcb\": not found" Oct 8 20:02:51.225301 kubelet[2634]: E1008 20:02:51.225268 2634 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"97b1e6b751ef596aadc776cbc86fc108f703e6c21f013a537761eb0810c43fcb\": not found" containerID="97b1e6b751ef596aadc776cbc86fc108f703e6c21f013a537761eb0810c43fcb" Oct 8 20:02:51.225393 kubelet[2634]: I1008 20:02:51.225370 2634 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"97b1e6b751ef596aadc776cbc86fc108f703e6c21f013a537761eb0810c43fcb"} err="failed to get container status \"97b1e6b751ef596aadc776cbc86fc108f703e6c21f013a537761eb0810c43fcb\": rpc error: code = NotFound desc = an error occurred when try to find container \"97b1e6b751ef596aadc776cbc86fc108f703e6c21f013a537761eb0810c43fcb\": not found" Oct 8 20:02:51.225393 kubelet[2634]: I1008 20:02:51.225391 2634 scope.go:117] "RemoveContainer" containerID="2f999ab717a417646b920fbd855050a3a3680d96b275236704909ef123be0472" Oct 8 20:02:51.226409 containerd[1475]: time="2024-10-08T20:02:51.226384025Z" level=info msg="RemoveContainer for \"2f999ab717a417646b920fbd855050a3a3680d96b275236704909ef123be0472\"" Oct 8 20:02:51.256197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d-rootfs.mount: Deactivated successfully. Oct 8 20:02:51.256323 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c173d2310cef0516e77aafc377868fe0e6c2fcc23e508ec69828158a8542510-rootfs.mount: Deactivated successfully. Oct 8 20:02:51.256399 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d-shm.mount: Deactivated successfully. Oct 8 20:02:51.256477 systemd[1]: var-lib-kubelet-pods-4fc06297\x2ddd6c\x2d4cfa\x2dbb3c\x2d74c3085a8ba2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmmst7.mount: Deactivated successfully. 
Oct 8 20:02:51.256562 systemd[1]: var-lib-kubelet-pods-ded72d2b\x2d9f96\x2d4e24\x2db97f\x2d3de805d15af6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dskq2m.mount: Deactivated successfully. Oct 8 20:02:51.256642 systemd[1]: var-lib-kubelet-pods-ded72d2b\x2d9f96\x2d4e24\x2db97f\x2d3de805d15af6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 8 20:02:51.256717 systemd[1]: var-lib-kubelet-pods-ded72d2b\x2d9f96\x2d4e24\x2db97f\x2d3de805d15af6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 8 20:02:51.364995 containerd[1475]: time="2024-10-08T20:02:51.364847862Z" level=info msg="RemoveContainer for \"2f999ab717a417646b920fbd855050a3a3680d96b275236704909ef123be0472\" returns successfully" Oct 8 20:02:51.365458 kubelet[2634]: I1008 20:02:51.365180 2634 scope.go:117] "RemoveContainer" containerID="3f176a3de9b99e87e33ffcfa2923dc1d8616014b6ed02ae9d71bebd1a861b517" Oct 8 20:02:51.366532 containerd[1475]: time="2024-10-08T20:02:51.366503460Z" level=info msg="RemoveContainer for \"3f176a3de9b99e87e33ffcfa2923dc1d8616014b6ed02ae9d71bebd1a861b517\"" Oct 8 20:02:51.374489 containerd[1475]: time="2024-10-08T20:02:51.374430238Z" level=info msg="RemoveContainer for \"3f176a3de9b99e87e33ffcfa2923dc1d8616014b6ed02ae9d71bebd1a861b517\" returns successfully" Oct 8 20:02:51.374780 kubelet[2634]: I1008 20:02:51.374743 2634 scope.go:117] "RemoveContainer" containerID="10f9942fe8c32fd2e248ef07be0f3cd3def21072b8336a31165961c69b188602" Oct 8 20:02:51.375940 containerd[1475]: time="2024-10-08T20:02:51.375898232Z" level=info msg="RemoveContainer for \"10f9942fe8c32fd2e248ef07be0f3cd3def21072b8336a31165961c69b188602\"" Oct 8 20:02:51.380708 containerd[1475]: time="2024-10-08T20:02:51.380679036Z" level=info msg="RemoveContainer for \"10f9942fe8c32fd2e248ef07be0f3cd3def21072b8336a31165961c69b188602\" returns successfully" Oct 8 20:02:51.380908 kubelet[2634]: I1008 20:02:51.380867 2634 scope.go:117] "RemoveContainer" containerID="1543e03611ae21438f9cf000bb5cef788af4090f24acfd814201644b9cc8d8a2" Oct 8 20:02:51.381964 containerd[1475]: time="2024-10-08T20:02:51.381932444Z" level=info msg="RemoveContainer for \"1543e03611ae21438f9cf000bb5cef788af4090f24acfd814201644b9cc8d8a2\"" Oct 8 20:02:51.386371 containerd[1475]: time="2024-10-08T20:02:51.386327039Z" level=info msg="RemoveContainer for \"1543e03611ae21438f9cf000bb5cef788af4090f24acfd814201644b9cc8d8a2\" returns successfully" Oct 8 20:02:51.386568 kubelet[2634]: I1008 20:02:51.386546 2634 scope.go:117] "RemoveContainer" containerID="bd60e3d4fd97b964c2a7ecddd2bb6f5d2bb9cb0db248f92dc20eb215a930a169" Oct 8 20:02:51.387828 containerd[1475]: time="2024-10-08T20:02:51.387766909Z" level=info msg="RemoveContainer for \"bd60e3d4fd97b964c2a7ecddd2bb6f5d2bb9cb0db248f92dc20eb215a930a169\"" Oct 8 20:02:51.391500 containerd[1475]: time="2024-10-08T20:02:51.391456180Z" level=info msg="RemoveContainer for \"bd60e3d4fd97b964c2a7ecddd2bb6f5d2bb9cb0db248f92dc20eb215a930a169\" returns successfully" Oct 8 20:02:51.391661 kubelet[2634]: I1008 20:02:51.391637 2634 scope.go:117] "RemoveContainer" containerID="2f999ab717a417646b920fbd855050a3a3680d96b275236704909ef123be0472" Oct 8 20:02:51.391844 containerd[1475]: time="2024-10-08T20:02:51.391812333Z" level=error msg="ContainerStatus for \"2f999ab717a417646b920fbd855050a3a3680d96b275236704909ef123be0472\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"2f999ab717a417646b920fbd855050a3a3680d96b275236704909ef123be0472\": not found" Oct 8 20:02:51.392014 kubelet[2634]: E1008 20:02:51.391961 2634 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f999ab717a417646b920fbd855050a3a3680d96b275236704909ef123be0472\": not found" containerID="2f999ab717a417646b920fbd855050a3a3680d96b275236704909ef123be0472" Oct 8 20:02:51.392014 kubelet[2634]: I1008 20:02:51.392004 2634 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2f999ab717a417646b920fbd855050a3a3680d96b275236704909ef123be0472"} err="failed to get container status \"2f999ab717a417646b920fbd855050a3a3680d96b275236704909ef123be0472\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f999ab717a417646b920fbd855050a3a3680d96b275236704909ef123be0472\": not found" Oct 8 20:02:51.392112 kubelet[2634]: I1008 20:02:51.392019 2634 scope.go:117] "RemoveContainer" containerID="3f176a3de9b99e87e33ffcfa2923dc1d8616014b6ed02ae9d71bebd1a861b517" Oct 8 20:02:51.392228 containerd[1475]: time="2024-10-08T20:02:51.392202560Z" level=error msg="ContainerStatus for \"3f176a3de9b99e87e33ffcfa2923dc1d8616014b6ed02ae9d71bebd1a861b517\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3f176a3de9b99e87e33ffcfa2923dc1d8616014b6ed02ae9d71bebd1a861b517\": not found" Oct 8 20:02:51.392389 kubelet[2634]: E1008 20:02:51.392351 2634 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3f176a3de9b99e87e33ffcfa2923dc1d8616014b6ed02ae9d71bebd1a861b517\": not found" containerID="3f176a3de9b99e87e33ffcfa2923dc1d8616014b6ed02ae9d71bebd1a861b517" Oct 8 20:02:51.392439 kubelet[2634]: I1008 20:02:51.392410 2634 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3f176a3de9b99e87e33ffcfa2923dc1d8616014b6ed02ae9d71bebd1a861b517"} err="failed to get container status \"3f176a3de9b99e87e33ffcfa2923dc1d8616014b6ed02ae9d71bebd1a861b517\": rpc error: code = NotFound desc = an error occurred when try to find container \"3f176a3de9b99e87e33ffcfa2923dc1d8616014b6ed02ae9d71bebd1a861b517\": not found" Oct 8 20:02:51.392439 kubelet[2634]: I1008 20:02:51.392434 2634 scope.go:117] "RemoveContainer" containerID="10f9942fe8c32fd2e248ef07be0f3cd3def21072b8336a31165961c69b188602" Oct 8 20:02:51.392661 containerd[1475]: time="2024-10-08T20:02:51.392624297Z" level=error msg="ContainerStatus for \"10f9942fe8c32fd2e248ef07be0f3cd3def21072b8336a31165961c69b188602\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"10f9942fe8c32fd2e248ef07be0f3cd3def21072b8336a31165961c69b188602\": not found" Oct 8 20:02:51.392828 kubelet[2634]: E1008 20:02:51.392780 2634 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"10f9942fe8c32fd2e248ef07be0f3cd3def21072b8336a31165961c69b188602\": not found" containerID="10f9942fe8c32fd2e248ef07be0f3cd3def21072b8336a31165961c69b188602" Oct 8 20:02:51.392828 kubelet[2634]: I1008 20:02:51.392810 2634 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"10f9942fe8c32fd2e248ef07be0f3cd3def21072b8336a31165961c69b188602"} err="failed to get container status 
\"10f9942fe8c32fd2e248ef07be0f3cd3def21072b8336a31165961c69b188602\": rpc error: code = NotFound desc = an error occurred when try to find container \"10f9942fe8c32fd2e248ef07be0f3cd3def21072b8336a31165961c69b188602\": not found" Oct 8 20:02:51.392828 kubelet[2634]: I1008 20:02:51.392821 2634 scope.go:117] "RemoveContainer" containerID="1543e03611ae21438f9cf000bb5cef788af4090f24acfd814201644b9cc8d8a2" Oct 8 20:02:51.393174 containerd[1475]: time="2024-10-08T20:02:51.393109734Z" level=error msg="ContainerStatus for \"1543e03611ae21438f9cf000bb5cef788af4090f24acfd814201644b9cc8d8a2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1543e03611ae21438f9cf000bb5cef788af4090f24acfd814201644b9cc8d8a2\": not found" Oct 8 20:02:51.393308 kubelet[2634]: E1008 20:02:51.393288 2634 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1543e03611ae21438f9cf000bb5cef788af4090f24acfd814201644b9cc8d8a2\": not found" containerID="1543e03611ae21438f9cf000bb5cef788af4090f24acfd814201644b9cc8d8a2" Oct 8 20:02:51.393368 kubelet[2634]: I1008 20:02:51.393320 2634 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1543e03611ae21438f9cf000bb5cef788af4090f24acfd814201644b9cc8d8a2"} err="failed to get container status \"1543e03611ae21438f9cf000bb5cef788af4090f24acfd814201644b9cc8d8a2\": rpc error: code = NotFound desc = an error occurred when try to find container \"1543e03611ae21438f9cf000bb5cef788af4090f24acfd814201644b9cc8d8a2\": not found" Oct 8 20:02:51.393368 kubelet[2634]: I1008 20:02:51.393333 2634 scope.go:117] "RemoveContainer" containerID="bd60e3d4fd97b964c2a7ecddd2bb6f5d2bb9cb0db248f92dc20eb215a930a169" Oct 8 20:02:51.393521 containerd[1475]: time="2024-10-08T20:02:51.393491687Z" level=error msg="ContainerStatus for \"bd60e3d4fd97b964c2a7ecddd2bb6f5d2bb9cb0db248f92dc20eb215a930a169\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd60e3d4fd97b964c2a7ecddd2bb6f5d2bb9cb0db248f92dc20eb215a930a169\": not found" Oct 8 20:02:51.393652 kubelet[2634]: E1008 20:02:51.393628 2634 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bd60e3d4fd97b964c2a7ecddd2bb6f5d2bb9cb0db248f92dc20eb215a930a169\": not found" containerID="bd60e3d4fd97b964c2a7ecddd2bb6f5d2bb9cb0db248f92dc20eb215a930a169" Oct 8 20:02:51.393684 kubelet[2634]: I1008 20:02:51.393661 2634 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bd60e3d4fd97b964c2a7ecddd2bb6f5d2bb9cb0db248f92dc20eb215a930a169"} err="failed to get container status \"bd60e3d4fd97b964c2a7ecddd2bb6f5d2bb9cb0db248f92dc20eb215a930a169\": rpc error: code = NotFound desc = an error occurred when try to find container \"bd60e3d4fd97b964c2a7ecddd2bb6f5d2bb9cb0db248f92dc20eb215a930a169\": not found" Oct 8 20:02:51.845711 kubelet[2634]: I1008 20:02:51.845678 2634 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4fc06297-dd6c-4cfa-bb3c-74c3085a8ba2" path="/var/lib/kubelet/pods/4fc06297-dd6c-4cfa-bb3c-74c3085a8ba2/volumes" Oct 8 20:02:51.847790 kubelet[2634]: I1008 20:02:51.847762 2634 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ded72d2b-9f96-4e24-b97f-3de805d15af6" path="/var/lib/kubelet/pods/ded72d2b-9f96-4e24-b97f-3de805d15af6/volumes" Oct 8 
20:02:52.199443 sshd[4317]: pam_unix(sshd:session): session closed for user core Oct 8 20:02:52.210174 systemd[1]: sshd@28-10.0.0.96:22-10.0.0.1:37196.service: Deactivated successfully. Oct 8 20:02:52.212148 systemd[1]: session-29.scope: Deactivated successfully. Oct 8 20:02:52.213800 systemd-logind[1453]: Session 29 logged out. Waiting for processes to exit. Oct 8 20:02:52.222213 systemd[1]: Started sshd@29-10.0.0.96:22-10.0.0.1:43050.service - OpenSSH per-connection server daemon (10.0.0.1:43050). Oct 8 20:02:52.223370 systemd-logind[1453]: Removed session 29. Oct 8 20:02:52.262472 sshd[4485]: Accepted publickey for core from 10.0.0.1 port 43050 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:02:52.264433 sshd[4485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:02:52.269833 systemd-logind[1453]: New session 30 of user core. Oct 8 20:02:52.278162 systemd[1]: Started session-30.scope - Session 30 of User core. Oct 8 20:02:53.416138 sshd[4485]: pam_unix(sshd:session): session closed for user core Oct 8 20:02:53.427083 systemd[1]: sshd@29-10.0.0.96:22-10.0.0.1:43050.service: Deactivated successfully. Oct 8 20:02:53.429021 systemd[1]: session-30.scope: Deactivated successfully. Oct 8 20:02:53.430724 systemd-logind[1453]: Session 30 logged out. Waiting for processes to exit. Oct 8 20:02:53.438143 systemd[1]: Started sshd@30-10.0.0.96:22-10.0.0.1:43066.service - OpenSSH per-connection server daemon (10.0.0.1:43066). Oct 8 20:02:53.439200 systemd-logind[1453]: Removed session 30. Oct 8 20:02:53.475134 sshd[4500]: Accepted publickey for core from 10.0.0.1 port 43066 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:02:53.476800 sshd[4500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:02:53.481297 systemd-logind[1453]: New session 31 of user core. Oct 8 20:02:53.498071 systemd[1]: Started session-31.scope - Session 31 of User core. Oct 8 20:02:53.550512 sshd[4500]: pam_unix(sshd:session): session closed for user core Oct 8 20:02:53.563423 systemd[1]: sshd@30-10.0.0.96:22-10.0.0.1:43066.service: Deactivated successfully. Oct 8 20:02:53.565562 systemd[1]: session-31.scope: Deactivated successfully. Oct 8 20:02:53.567090 systemd-logind[1453]: Session 31 logged out. Waiting for processes to exit. Oct 8 20:02:53.577273 systemd[1]: Started sshd@31-10.0.0.96:22-10.0.0.1:43072.service - OpenSSH per-connection server daemon (10.0.0.1:43072). Oct 8 20:02:53.578252 systemd-logind[1453]: Removed session 31. Oct 8 20:02:53.613614 sshd[4508]: Accepted publickey for core from 10.0.0.1 port 43072 ssh2: RSA SHA256:OJWXIxWV5/ezNshFugVkIfLhrnHf7T3OS94qlXwAt6w Oct 8 20:02:53.615256 sshd[4508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:02:53.619082 systemd-logind[1453]: New session 32 of user core. Oct 8 20:02:53.624067 systemd[1]: Started session-32.scope - Session 32 of User core. 
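A minimal sketch for cross-checking the RemoveContainer / ContainerStatus pairs earlier in this excerpt: the NotFound responses simply confirm containerd had already deleted the containers the kubelet asked about. The capture file name "node.log" and the regex patterns are assumptions based on the message format visible above, not part of the original log.

import re
from collections import defaultdict

# Patterns follow the containerd/kubelet messages in the excerpt; container IDs
# are 64 hex characters and may appear inside escaped quotes in a pasted dump.
REMOVE_RE = re.compile(r'RemoveContainer for \\?"([0-9a-f]{64})')
NOTFOUND_RE = re.compile(r'ContainerStatus for \\?"([0-9a-f]{64})\\?" failed')

def summarize(path: str = "node.log") -> None:  # "node.log" is an assumed capture name
    seen = defaultdict(set)
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            for cid in REMOVE_RE.findall(line):
                seen[cid].add("remove-requested")
            for cid in NOTFOUND_RE.findall(line):
                seen[cid].add("status-notfound")
    for cid, events in sorted(seen.items()):
        # An ID showing both events matches the pattern in the log above:
        # the delete was requested and the container is already gone.
        print(f"{cid[:12]}  {', '.join(sorted(events))}")

if __name__ == "__main__":
    summarize()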
Oct 8 20:02:53.652623 kubelet[2634]: I1008 20:02:53.651633 2634 topology_manager.go:215] "Topology Admit Handler" podUID="3da51a1a-b388-4eaa-8d58-b0a932de0d7a" podNamespace="kube-system" podName="cilium-9gc8z" Oct 8 20:02:53.652623 kubelet[2634]: E1008 20:02:53.651706 2634 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ded72d2b-9f96-4e24-b97f-3de805d15af6" containerName="mount-cgroup" Oct 8 20:02:53.652623 kubelet[2634]: E1008 20:02:53.651715 2634 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4fc06297-dd6c-4cfa-bb3c-74c3085a8ba2" containerName="cilium-operator" Oct 8 20:02:53.652623 kubelet[2634]: E1008 20:02:53.651723 2634 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ded72d2b-9f96-4e24-b97f-3de805d15af6" containerName="cilium-agent" Oct 8 20:02:53.652623 kubelet[2634]: E1008 20:02:53.651734 2634 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ded72d2b-9f96-4e24-b97f-3de805d15af6" containerName="apply-sysctl-overwrites" Oct 8 20:02:53.652623 kubelet[2634]: E1008 20:02:53.651740 2634 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ded72d2b-9f96-4e24-b97f-3de805d15af6" containerName="mount-bpf-fs" Oct 8 20:02:53.652623 kubelet[2634]: E1008 20:02:53.651747 2634 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ded72d2b-9f96-4e24-b97f-3de805d15af6" containerName="clean-cilium-state" Oct 8 20:02:53.652623 kubelet[2634]: I1008 20:02:53.651765 2634 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fc06297-dd6c-4cfa-bb3c-74c3085a8ba2" containerName="cilium-operator" Oct 8 20:02:53.652623 kubelet[2634]: I1008 20:02:53.651772 2634 memory_manager.go:354] "RemoveStaleState removing state" podUID="ded72d2b-9f96-4e24-b97f-3de805d15af6" containerName="cilium-agent" Oct 8 20:02:53.664848 systemd[1]: Created slice kubepods-burstable-pod3da51a1a_b388_4eaa_8d58_b0a932de0d7a.slice - libcontainer container kubepods-burstable-pod3da51a1a_b388_4eaa_8d58_b0a932de0d7a.slice. 
Oct 8 20:02:53.710009 kubelet[2634]: I1008 20:02:53.709767 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3da51a1a-b388-4eaa-8d58-b0a932de0d7a-clustermesh-secrets\") pod \"cilium-9gc8z\" (UID: \"3da51a1a-b388-4eaa-8d58-b0a932de0d7a\") " pod="kube-system/cilium-9gc8z" Oct 8 20:02:53.710009 kubelet[2634]: I1008 20:02:53.709815 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3da51a1a-b388-4eaa-8d58-b0a932de0d7a-cilium-config-path\") pod \"cilium-9gc8z\" (UID: \"3da51a1a-b388-4eaa-8d58-b0a932de0d7a\") " pod="kube-system/cilium-9gc8z" Oct 8 20:02:53.710009 kubelet[2634]: I1008 20:02:53.709838 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3da51a1a-b388-4eaa-8d58-b0a932de0d7a-etc-cni-netd\") pod \"cilium-9gc8z\" (UID: \"3da51a1a-b388-4eaa-8d58-b0a932de0d7a\") " pod="kube-system/cilium-9gc8z" Oct 8 20:02:53.710009 kubelet[2634]: I1008 20:02:53.709863 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3da51a1a-b388-4eaa-8d58-b0a932de0d7a-hostproc\") pod \"cilium-9gc8z\" (UID: \"3da51a1a-b388-4eaa-8d58-b0a932de0d7a\") " pod="kube-system/cilium-9gc8z" Oct 8 20:02:53.710208 kubelet[2634]: I1008 20:02:53.710022 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3da51a1a-b388-4eaa-8d58-b0a932de0d7a-cilium-cgroup\") pod \"cilium-9gc8z\" (UID: \"3da51a1a-b388-4eaa-8d58-b0a932de0d7a\") " pod="kube-system/cilium-9gc8z" Oct 8 20:02:53.710208 kubelet[2634]: I1008 20:02:53.710098 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3da51a1a-b388-4eaa-8d58-b0a932de0d7a-lib-modules\") pod \"cilium-9gc8z\" (UID: \"3da51a1a-b388-4eaa-8d58-b0a932de0d7a\") " pod="kube-system/cilium-9gc8z" Oct 8 20:02:53.710208 kubelet[2634]: I1008 20:02:53.710139 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3da51a1a-b388-4eaa-8d58-b0a932de0d7a-host-proc-sys-net\") pod \"cilium-9gc8z\" (UID: \"3da51a1a-b388-4eaa-8d58-b0a932de0d7a\") " pod="kube-system/cilium-9gc8z" Oct 8 20:02:53.710208 kubelet[2634]: I1008 20:02:53.710181 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3da51a1a-b388-4eaa-8d58-b0a932de0d7a-cilium-ipsec-secrets\") pod \"cilium-9gc8z\" (UID: \"3da51a1a-b388-4eaa-8d58-b0a932de0d7a\") " pod="kube-system/cilium-9gc8z" Oct 8 20:02:53.710294 kubelet[2634]: I1008 20:02:53.710215 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnx8b\" (UniqueName: \"kubernetes.io/projected/3da51a1a-b388-4eaa-8d58-b0a932de0d7a-kube-api-access-jnx8b\") pod \"cilium-9gc8z\" (UID: \"3da51a1a-b388-4eaa-8d58-b0a932de0d7a\") " pod="kube-system/cilium-9gc8z" Oct 8 20:02:53.710294 kubelet[2634]: I1008 20:02:53.710239 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3da51a1a-b388-4eaa-8d58-b0a932de0d7a-bpf-maps\") pod \"cilium-9gc8z\" (UID: \"3da51a1a-b388-4eaa-8d58-b0a932de0d7a\") " pod="kube-system/cilium-9gc8z" Oct 8 20:02:53.710294 kubelet[2634]: I1008 20:02:53.710267 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3da51a1a-b388-4eaa-8d58-b0a932de0d7a-cni-path\") pod \"cilium-9gc8z\" (UID: \"3da51a1a-b388-4eaa-8d58-b0a932de0d7a\") " pod="kube-system/cilium-9gc8z" Oct 8 20:02:53.710294 kubelet[2634]: I1008 20:02:53.710287 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3da51a1a-b388-4eaa-8d58-b0a932de0d7a-host-proc-sys-kernel\") pod \"cilium-9gc8z\" (UID: \"3da51a1a-b388-4eaa-8d58-b0a932de0d7a\") " pod="kube-system/cilium-9gc8z" Oct 8 20:02:53.710388 kubelet[2634]: I1008 20:02:53.710306 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3da51a1a-b388-4eaa-8d58-b0a932de0d7a-hubble-tls\") pod \"cilium-9gc8z\" (UID: \"3da51a1a-b388-4eaa-8d58-b0a932de0d7a\") " pod="kube-system/cilium-9gc8z" Oct 8 20:02:53.710388 kubelet[2634]: I1008 20:02:53.710337 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3da51a1a-b388-4eaa-8d58-b0a932de0d7a-cilium-run\") pod \"cilium-9gc8z\" (UID: \"3da51a1a-b388-4eaa-8d58-b0a932de0d7a\") " pod="kube-system/cilium-9gc8z" Oct 8 20:02:53.710388 kubelet[2634]: I1008 20:02:53.710357 2634 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3da51a1a-b388-4eaa-8d58-b0a932de0d7a-xtables-lock\") pod \"cilium-9gc8z\" (UID: \"3da51a1a-b388-4eaa-8d58-b0a932de0d7a\") " pod="kube-system/cilium-9gc8z" Oct 8 20:02:53.969945 kubelet[2634]: E1008 20:02:53.969755 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:53.970652 containerd[1475]: time="2024-10-08T20:02:53.970380766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9gc8z,Uid:3da51a1a-b388-4eaa-8d58-b0a932de0d7a,Namespace:kube-system,Attempt:0,}" Oct 8 20:02:53.996647 containerd[1475]: time="2024-10-08T20:02:53.996530434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:02:53.996647 containerd[1475]: time="2024-10-08T20:02:53.996601469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:02:53.996647 containerd[1475]: time="2024-10-08T20:02:53.996615976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:53.996852 containerd[1475]: time="2024-10-08T20:02:53.996732997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:02:54.019198 systemd[1]: Started cri-containerd-13035ff5bfa3a36bb9110cbfbd6dccd3b16107c9f2439ee6dda51c0c55ad65b1.scope - libcontainer container 13035ff5bfa3a36bb9110cbfbd6dccd3b16107c9f2439ee6dda51c0c55ad65b1. Oct 8 20:02:54.046854 containerd[1475]: time="2024-10-08T20:02:54.046806335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9gc8z,Uid:3da51a1a-b388-4eaa-8d58-b0a932de0d7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"13035ff5bfa3a36bb9110cbfbd6dccd3b16107c9f2439ee6dda51c0c55ad65b1\"" Oct 8 20:02:54.047701 kubelet[2634]: E1008 20:02:54.047664 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:54.050109 containerd[1475]: time="2024-10-08T20:02:54.050053649Z" level=info msg="CreateContainer within sandbox \"13035ff5bfa3a36bb9110cbfbd6dccd3b16107c9f2439ee6dda51c0c55ad65b1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 8 20:02:54.073466 containerd[1475]: time="2024-10-08T20:02:54.073388023Z" level=info msg="CreateContainer within sandbox \"13035ff5bfa3a36bb9110cbfbd6dccd3b16107c9f2439ee6dda51c0c55ad65b1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1c18fbec706b1f8dfb2e3870fede2285fec52a7478d746e4d46f1b4ea88e37f8\"" Oct 8 20:02:54.074206 containerd[1475]: time="2024-10-08T20:02:54.074148960Z" level=info msg="StartContainer for \"1c18fbec706b1f8dfb2e3870fede2285fec52a7478d746e4d46f1b4ea88e37f8\"" Oct 8 20:02:54.114135 systemd[1]: Started cri-containerd-1c18fbec706b1f8dfb2e3870fede2285fec52a7478d746e4d46f1b4ea88e37f8.scope - libcontainer container 1c18fbec706b1f8dfb2e3870fede2285fec52a7478d746e4d46f1b4ea88e37f8. Oct 8 20:02:54.142689 containerd[1475]: time="2024-10-08T20:02:54.142626699Z" level=info msg="StartContainer for \"1c18fbec706b1f8dfb2e3870fede2285fec52a7478d746e4d46f1b4ea88e37f8\" returns successfully" Oct 8 20:02:54.152951 systemd[1]: cri-containerd-1c18fbec706b1f8dfb2e3870fede2285fec52a7478d746e4d46f1b4ea88e37f8.scope: Deactivated successfully. 
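The clustermesh-secrets, cilium-config-path, bpf-maps and related volumes attached in the reconciler lines above can be read back from the API server once the pod is admitted. A short sketch using the official Python client (package "kubernetes"), assuming kubeconfig access to this cluster; the pod name and namespace are taken from the log lines above.

from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig with access to this cluster
v1 = client.CoreV1Api()

# Pod name and namespace as reported by the kubelet above.
pod = v1.read_namespaced_pod(name="cilium-9gc8z", namespace="kube-system")
for vol in pod.spec.volumes:
    # Each entry corresponds to one VerifyControllerAttachedVolume line in the log:
    # host-path volumes (bpf-maps, hostproc, cni-path, ...), secrets
    # (clustermesh-secrets, cilium-ipsec-secrets), projected volumes
    # (hubble-tls, kube-api-access-*), and the cilium-config-path config map.
    source = ("host-path" if vol.host_path else
              "secret" if vol.secret else
              "projected" if vol.projected else
              "config-map" if vol.config_map else "other")
    print(f"{vol.name}: {source}")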
Oct 8 20:02:54.194188 containerd[1475]: time="2024-10-08T20:02:54.194103368Z" level=info msg="shim disconnected" id=1c18fbec706b1f8dfb2e3870fede2285fec52a7478d746e4d46f1b4ea88e37f8 namespace=k8s.io Oct 8 20:02:54.194188 containerd[1475]: time="2024-10-08T20:02:54.194167119Z" level=warning msg="cleaning up after shim disconnected" id=1c18fbec706b1f8dfb2e3870fede2285fec52a7478d746e4d46f1b4ea88e37f8 namespace=k8s.io Oct 8 20:02:54.194188 containerd[1475]: time="2024-10-08T20:02:54.194179382Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:02:54.909087 kubelet[2634]: E1008 20:02:54.909047 2634 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 8 20:02:55.111708 kubelet[2634]: E1008 20:02:55.111680 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:55.113479 containerd[1475]: time="2024-10-08T20:02:55.113434509Z" level=info msg="CreateContainer within sandbox \"13035ff5bfa3a36bb9110cbfbd6dccd3b16107c9f2439ee6dda51c0c55ad65b1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 8 20:02:55.504422 containerd[1475]: time="2024-10-08T20:02:55.504362221Z" level=info msg="CreateContainer within sandbox \"13035ff5bfa3a36bb9110cbfbd6dccd3b16107c9f2439ee6dda51c0c55ad65b1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"42ac2bd29aa42ae51099dbabc63151eb02c234303b447fb43cb81fb2f5d53952\"" Oct 8 20:02:55.505123 containerd[1475]: time="2024-10-08T20:02:55.504985679Z" level=info msg="StartContainer for \"42ac2bd29aa42ae51099dbabc63151eb02c234303b447fb43cb81fb2f5d53952\"" Oct 8 20:02:55.533025 systemd[1]: Started cri-containerd-42ac2bd29aa42ae51099dbabc63151eb02c234303b447fb43cb81fb2f5d53952.scope - libcontainer container 42ac2bd29aa42ae51099dbabc63151eb02c234303b447fb43cb81fb2f5d53952. Oct 8 20:02:55.563554 systemd[1]: cri-containerd-42ac2bd29aa42ae51099dbabc63151eb02c234303b447fb43cb81fb2f5d53952.scope: Deactivated successfully. Oct 8 20:02:55.667899 containerd[1475]: time="2024-10-08T20:02:55.667829666Z" level=info msg="StartContainer for \"42ac2bd29aa42ae51099dbabc63151eb02c234303b447fb43cb81fb2f5d53952\" returns successfully" Oct 8 20:02:55.817807 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42ac2bd29aa42ae51099dbabc63151eb02c234303b447fb43cb81fb2f5d53952-rootfs.mount: Deactivated successfully. 
Oct 8 20:02:55.852422 containerd[1475]: time="2024-10-08T20:02:55.852349850Z" level=info msg="shim disconnected" id=42ac2bd29aa42ae51099dbabc63151eb02c234303b447fb43cb81fb2f5d53952 namespace=k8s.io Oct 8 20:02:55.852422 containerd[1475]: time="2024-10-08T20:02:55.852395385Z" level=warning msg="cleaning up after shim disconnected" id=42ac2bd29aa42ae51099dbabc63151eb02c234303b447fb43cb81fb2f5d53952 namespace=k8s.io Oct 8 20:02:55.852422 containerd[1475]: time="2024-10-08T20:02:55.852403291Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:02:56.114294 kubelet[2634]: E1008 20:02:56.114188 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:56.116535 containerd[1475]: time="2024-10-08T20:02:56.116208169Z" level=info msg="CreateContainer within sandbox \"13035ff5bfa3a36bb9110cbfbd6dccd3b16107c9f2439ee6dda51c0c55ad65b1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 8 20:02:56.433932 containerd[1475]: time="2024-10-08T20:02:56.433754553Z" level=info msg="CreateContainer within sandbox \"13035ff5bfa3a36bb9110cbfbd6dccd3b16107c9f2439ee6dda51c0c55ad65b1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4a2df49b844345ed0acb4dc975f780a42da2092c4540a0573809a853f555f073\"" Oct 8 20:02:56.434584 containerd[1475]: time="2024-10-08T20:02:56.434458923Z" level=info msg="StartContainer for \"4a2df49b844345ed0acb4dc975f780a42da2092c4540a0573809a853f555f073\"" Oct 8 20:02:56.464064 systemd[1]: Started cri-containerd-4a2df49b844345ed0acb4dc975f780a42da2092c4540a0573809a853f555f073.scope - libcontainer container 4a2df49b844345ed0acb4dc975f780a42da2092c4540a0573809a853f555f073. Oct 8 20:02:56.526138 systemd[1]: cri-containerd-4a2df49b844345ed0acb4dc975f780a42da2092c4540a0573809a853f555f073.scope: Deactivated successfully. Oct 8 20:02:56.562030 containerd[1475]: time="2024-10-08T20:02:56.561978408Z" level=info msg="StartContainer for \"4a2df49b844345ed0acb4dc975f780a42da2092c4540a0573809a853f555f073\" returns successfully" Oct 8 20:02:56.734906 containerd[1475]: time="2024-10-08T20:02:56.734731469Z" level=info msg="shim disconnected" id=4a2df49b844345ed0acb4dc975f780a42da2092c4540a0573809a853f555f073 namespace=k8s.io Oct 8 20:02:56.734906 containerd[1475]: time="2024-10-08T20:02:56.734791683Z" level=warning msg="cleaning up after shim disconnected" id=4a2df49b844345ed0acb4dc975f780a42da2092c4540a0573809a853f555f073 namespace=k8s.io Oct 8 20:02:56.734906 containerd[1475]: time="2024-10-08T20:02:56.734803795Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:02:56.817830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a2df49b844345ed0acb4dc975f780a42da2092c4540a0573809a853f555f073-rootfs.mount: Deactivated successfully. 
Oct 8 20:02:57.118185 kubelet[2634]: E1008 20:02:57.118153 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:57.120710 containerd[1475]: time="2024-10-08T20:02:57.120653689Z" level=info msg="CreateContainer within sandbox \"13035ff5bfa3a36bb9110cbfbd6dccd3b16107c9f2439ee6dda51c0c55ad65b1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 8 20:02:57.212255 containerd[1475]: time="2024-10-08T20:02:57.212200386Z" level=info msg="CreateContainer within sandbox \"13035ff5bfa3a36bb9110cbfbd6dccd3b16107c9f2439ee6dda51c0c55ad65b1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9d2f8427579e4d72848bb08c29dd05ca1c1be4306bb6030905ead9f8eefb68e2\"" Oct 8 20:02:57.212750 containerd[1475]: time="2024-10-08T20:02:57.212717782Z" level=info msg="StartContainer for \"9d2f8427579e4d72848bb08c29dd05ca1c1be4306bb6030905ead9f8eefb68e2\"" Oct 8 20:02:57.252090 systemd[1]: Started cri-containerd-9d2f8427579e4d72848bb08c29dd05ca1c1be4306bb6030905ead9f8eefb68e2.scope - libcontainer container 9d2f8427579e4d72848bb08c29dd05ca1c1be4306bb6030905ead9f8eefb68e2. Oct 8 20:02:57.276252 systemd[1]: cri-containerd-9d2f8427579e4d72848bb08c29dd05ca1c1be4306bb6030905ead9f8eefb68e2.scope: Deactivated successfully. Oct 8 20:02:57.279832 containerd[1475]: time="2024-10-08T20:02:57.279796382Z" level=info msg="StartContainer for \"9d2f8427579e4d72848bb08c29dd05ca1c1be4306bb6030905ead9f8eefb68e2\" returns successfully" Oct 8 20:02:57.303590 containerd[1475]: time="2024-10-08T20:02:57.303522819Z" level=info msg="shim disconnected" id=9d2f8427579e4d72848bb08c29dd05ca1c1be4306bb6030905ead9f8eefb68e2 namespace=k8s.io Oct 8 20:02:57.303590 containerd[1475]: time="2024-10-08T20:02:57.303585888Z" level=warning msg="cleaning up after shim disconnected" id=9d2f8427579e4d72848bb08c29dd05ca1c1be4306bb6030905ead9f8eefb68e2 namespace=k8s.io Oct 8 20:02:57.303590 containerd[1475]: time="2024-10-08T20:02:57.303595667Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:02:57.817859 systemd[1]: run-containerd-runc-k8s.io-9d2f8427579e4d72848bb08c29dd05ca1c1be4306bb6030905ead9f8eefb68e2-runc.eqWeM3.mount: Deactivated successfully. Oct 8 20:02:57.817987 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d2f8427579e4d72848bb08c29dd05ca1c1be4306bb6030905ead9f8eefb68e2-rootfs.mount: Deactivated successfully. 
Oct 8 20:02:58.122718 kubelet[2634]: E1008 20:02:58.122558 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:58.124529 containerd[1475]: time="2024-10-08T20:02:58.124488175Z" level=info msg="CreateContainer within sandbox \"13035ff5bfa3a36bb9110cbfbd6dccd3b16107c9f2439ee6dda51c0c55ad65b1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 8 20:02:58.175862 containerd[1475]: time="2024-10-08T20:02:58.175784285Z" level=info msg="CreateContainer within sandbox \"13035ff5bfa3a36bb9110cbfbd6dccd3b16107c9f2439ee6dda51c0c55ad65b1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5c9a21f6b2d96b754854e0dacc7cf65ef1e0b8f2d0e3adfebe611f3a22ce836a\"" Oct 8 20:02:58.176592 containerd[1475]: time="2024-10-08T20:02:58.176526276Z" level=info msg="StartContainer for \"5c9a21f6b2d96b754854e0dacc7cf65ef1e0b8f2d0e3adfebe611f3a22ce836a\"" Oct 8 20:02:58.210147 systemd[1]: Started cri-containerd-5c9a21f6b2d96b754854e0dacc7cf65ef1e0b8f2d0e3adfebe611f3a22ce836a.scope - libcontainer container 5c9a21f6b2d96b754854e0dacc7cf65ef1e0b8f2d0e3adfebe611f3a22ce836a. Oct 8 20:02:58.243125 containerd[1475]: time="2024-10-08T20:02:58.242592730Z" level=info msg="StartContainer for \"5c9a21f6b2d96b754854e0dacc7cf65ef1e0b8f2d0e3adfebe611f3a22ce836a\" returns successfully" Oct 8 20:02:58.703910 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Oct 8 20:02:59.126476 kubelet[2634]: E1008 20:02:59.126434 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:02:59.141175 kubelet[2634]: I1008 20:02:59.141129 2634 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-9gc8z" podStartSLOduration=6.141085961 podStartE2EDuration="6.141085961s" podCreationTimestamp="2024-10-08 20:02:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:02:59.140679294 +0000 UTC m=+109.394286902" watchObservedRunningTime="2024-10-08 20:02:59.141085961 +0000 UTC m=+109.394693559" Oct 8 20:03:00.007510 systemd[1]: run-containerd-runc-k8s.io-5c9a21f6b2d96b754854e0dacc7cf65ef1e0b8f2d0e3adfebe611f3a22ce836a-runc.tuwwS8.mount: Deactivated successfully. 
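The recurring kubelet warning "Nameserver limits were exceeded" above means the node's resolv.conf lists more nameservers than the resolver limit of three, so only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) are applied to pods. A quick check one could run on the node, as a sketch; /etc/resolv.conf is the conventional location and is an assumption about this host's resolver configuration.

# Count nameserver entries the way the kubelet's DNS configurer sees them:
# anything beyond the first three is dropped, which triggers the warning above.
LIMIT = 3

with open("/etc/resolv.conf", encoding="utf-8") as fh:
    nameservers = [line.split()[1] for line in fh
                   if line.strip().startswith("nameserver") and len(line.split()) > 1]

print(f"{len(nameservers)} nameservers configured: {nameservers}")
if len(nameservers) > LIMIT:
    print(f"warning: only the first {LIMIT} ({nameservers[:LIMIT]}) will be applied to pods")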
Oct 8 20:03:00.128809 kubelet[2634]: E1008 20:03:00.128778 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:01.131086 kubelet[2634]: E1008 20:03:01.131030 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:01.789665 systemd-networkd[1402]: lxc_health: Link UP Oct 8 20:03:01.801144 systemd-networkd[1402]: lxc_health: Gained carrier Oct 8 20:03:02.133307 kubelet[2634]: E1008 20:03:02.133153 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:02.928130 systemd-networkd[1402]: lxc_health: Gained IPv6LL Oct 8 20:03:03.135457 kubelet[2634]: E1008 20:03:03.135247 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:04.137258 kubelet[2634]: E1008 20:03:04.137226 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:08.525378 sshd[4508]: pam_unix(sshd:session): session closed for user core Oct 8 20:03:08.530265 systemd[1]: sshd@31-10.0.0.96:22-10.0.0.1:43072.service: Deactivated successfully. Oct 8 20:03:08.532432 systemd[1]: session-32.scope: Deactivated successfully. Oct 8 20:03:08.533182 systemd-logind[1453]: Session 32 logged out. Waiting for processes to exit. Oct 8 20:03:08.534091 systemd-logind[1453]: Removed session 32. Oct 8 20:03:09.864929 containerd[1475]: time="2024-10-08T20:03:09.864857329Z" level=info msg="StopPodSandbox for \"45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d\"" Oct 8 20:03:09.865343 containerd[1475]: time="2024-10-08T20:03:09.864980842Z" level=info msg="TearDown network for sandbox \"45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d\" successfully" Oct 8 20:03:09.865343 containerd[1475]: time="2024-10-08T20:03:09.864992434Z" level=info msg="StopPodSandbox for \"45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d\" returns successfully" Oct 8 20:03:09.865722 containerd[1475]: time="2024-10-08T20:03:09.865683917Z" level=info msg="RemovePodSandbox for \"45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d\"" Oct 8 20:03:09.865820 containerd[1475]: time="2024-10-08T20:03:09.865730225Z" level=info msg="Forcibly stopping sandbox \"45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d\"" Oct 8 20:03:09.865820 containerd[1475]: time="2024-10-08T20:03:09.865790468Z" level=info msg="TearDown network for sandbox \"45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d\" successfully" Oct 8 20:03:09.975980 containerd[1475]: time="2024-10-08T20:03:09.975908291Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 20:03:09.976144 containerd[1475]: time="2024-10-08T20:03:09.976002829Z" level=info msg="RemovePodSandbox \"45b47f55f2b25202dc1214c60fbab7c0cb1112c538e06a48760ef88805ef629d\" returns successfully" Oct 8 20:03:09.976553 containerd[1475]: time="2024-10-08T20:03:09.976526948Z" level=info msg="StopPodSandbox for \"4c173d2310cef0516e77aafc377868fe0e6c2fcc23e508ec69828158a8542510\"" Oct 8 20:03:09.976646 containerd[1475]: time="2024-10-08T20:03:09.976619813Z" level=info msg="TearDown network for sandbox \"4c173d2310cef0516e77aafc377868fe0e6c2fcc23e508ec69828158a8542510\" successfully" Oct 8 20:03:09.976646 containerd[1475]: time="2024-10-08T20:03:09.976639309Z" level=info msg="StopPodSandbox for \"4c173d2310cef0516e77aafc377868fe0e6c2fcc23e508ec69828158a8542510\" returns successfully" Oct 8 20:03:09.976991 containerd[1475]: time="2024-10-08T20:03:09.976964363Z" level=info msg="RemovePodSandbox for \"4c173d2310cef0516e77aafc377868fe0e6c2fcc23e508ec69828158a8542510\"" Oct 8 20:03:09.977066 containerd[1475]: time="2024-10-08T20:03:09.976991173Z" level=info msg="Forcibly stopping sandbox \"4c173d2310cef0516e77aafc377868fe0e6c2fcc23e508ec69828158a8542510\"" Oct 8 20:03:09.977066 containerd[1475]: time="2024-10-08T20:03:09.977047218Z" level=info msg="TearDown network for sandbox \"4c173d2310cef0516e77aafc377868fe0e6c2fcc23e508ec69828158a8542510\" successfully" Oct 8 20:03:10.088017 containerd[1475]: time="2024-10-08T20:03:10.087961250Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c173d2310cef0516e77aafc377868fe0e6c2fcc23e508ec69828158a8542510\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 20:03:10.088017 containerd[1475]: time="2024-10-08T20:03:10.088018788Z" level=info msg="RemovePodSandbox \"4c173d2310cef0516e77aafc377868fe0e6c2fcc23e508ec69828158a8542510\" returns successfully"